Nov 29 06:28:07 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 06:28:07 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 06:28:07 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:28:07 localhost kernel: BIOS-provided physical RAM map:
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 06:28:07 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 06:28:07 localhost kernel: NX (Execute Disable) protection: active
Nov 29 06:28:07 localhost kernel: APIC: Static calls initialized
Nov 29 06:28:07 localhost kernel: SMBIOS 2.8 present.
Nov 29 06:28:07 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 06:28:07 localhost kernel: Hypervisor detected: KVM
Nov 29 06:28:07 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 06:28:07 localhost kernel: kvm-clock: using sched offset of 12910266158 cycles
Nov 29 06:28:07 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 06:28:07 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 29 06:28:07 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 29 06:28:07 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 29 06:28:07 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 06:28:07 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 06:28:07 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
Nov 29 06:28:07 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 06:28:07 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 06:28:07 localhost kernel: Using GB pages for direct mapping
Nov 29 06:28:07 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 06:28:07 localhost kernel: ACPI: Early table checksum verification disabled
Nov 29 06:28:07 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 06:28:07 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:28:07 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:28:07 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:28:07 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 06:28:07 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:28:07 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 06:28:07 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 06:28:07 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 06:28:07 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 06:28:07 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 06:28:07 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 06:28:07 localhost kernel: No NUMA configuration found
Nov 29 06:28:07 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 06:28:07 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 06:28:07 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 06:28:07 localhost kernel: Zone ranges:
Nov 29 06:28:07 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 06:28:07 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 06:28:07 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 06:28:07 localhost kernel:   Device   empty
Nov 29 06:28:07 localhost kernel: Movable zone start for each node
Nov 29 06:28:07 localhost kernel: Early memory node ranges
Nov 29 06:28:07 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 06:28:07 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 06:28:07 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 06:28:07 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 06:28:07 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 06:28:07 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 06:28:07 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 06:28:07 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 06:28:07 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 06:28:07 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 06:28:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 06:28:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 06:28:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 06:28:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 06:28:07 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 06:28:07 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 06:28:07 localhost kernel: TSC deadline timer available
Nov 29 06:28:07 localhost kernel: CPU topo: Max. logical packages:   8
Nov 29 06:28:07 localhost kernel: CPU topo: Max. logical dies:       8
Nov 29 06:28:07 localhost kernel: CPU topo: Max. dies per package:   1
Nov 29 06:28:07 localhost kernel: CPU topo: Max. threads per core:   1
Nov 29 06:28:07 localhost kernel: CPU topo: Num. cores per package:     1
Nov 29 06:28:07 localhost kernel: CPU topo: Num. threads per package:   1
Nov 29 06:28:07 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 06:28:07 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 06:28:07 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 06:28:07 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 06:28:07 localhost kernel: Booting paravirtualized kernel on KVM
Nov 29 06:28:07 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 06:28:07 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 06:28:07 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 06:28:07 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 29 06:28:07 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7
Nov 29 06:28:07 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 06:28:07 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:28:07 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 06:28:07 localhost kernel: random: crng init done
Nov 29 06:28:07 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 06:28:07 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 06:28:07 localhost kernel: Fallback order for Node 0: 0 
Nov 29 06:28:07 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 06:28:07 localhost kernel: Policy zone: Normal
Nov 29 06:28:07 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 06:28:07 localhost kernel: software IO TLB: area num 8.
Nov 29 06:28:07 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 06:28:07 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 06:28:07 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 06:28:07 localhost kernel: Dynamic Preempt: voluntary
Nov 29 06:28:07 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 06:28:07 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 29 06:28:07 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 06:28:07 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 29 06:28:07 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 29 06:28:07 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 29 06:28:07 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 06:28:07 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 06:28:07 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:28:07 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:28:07 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 06:28:07 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 06:28:07 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 06:28:07 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 06:28:07 localhost kernel: Console: colour VGA+ 80x25
Nov 29 06:28:07 localhost kernel: printk: console [ttyS0] enabled
Nov 29 06:28:07 localhost kernel: ACPI: Core revision 20230331
Nov 29 06:28:07 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 06:28:07 localhost kernel: x2apic enabled
Nov 29 06:28:07 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 06:28:07 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 06:28:07 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 29 06:28:07 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 06:28:07 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 06:28:07 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 06:28:07 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 06:28:07 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 06:28:07 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 06:28:07 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 06:28:07 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 06:28:07 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 06:28:07 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 06:28:07 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 06:28:07 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 06:28:07 localhost kernel: x86/bugs: return thunk changed
Nov 29 06:28:07 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 06:28:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 06:28:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 06:28:07 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 06:28:07 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 06:28:07 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 06:28:07 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 29 06:28:07 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 29 06:28:07 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 06:28:07 localhost kernel: landlock: Up and running.
Nov 29 06:28:07 localhost kernel: Yama: becoming mindful.
Nov 29 06:28:07 localhost kernel: SELinux:  Initializing.
Nov 29 06:28:07 localhost kernel: LSM support for eBPF active
Nov 29 06:28:07 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 06:28:07 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 06:28:07 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 06:28:07 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 06:28:07 localhost kernel: ... version:                0
Nov 29 06:28:07 localhost kernel: ... bit width:              48
Nov 29 06:28:07 localhost kernel: ... generic registers:      6
Nov 29 06:28:07 localhost kernel: ... value mask:             0000ffffffffffff
Nov 29 06:28:07 localhost kernel: ... max period:             00007fffffffffff
Nov 29 06:28:07 localhost kernel: ... fixed-purpose events:   0
Nov 29 06:28:07 localhost kernel: ... event mask:             000000000000003f
Nov 29 06:28:07 localhost kernel: signal: max sigframe size: 1776
Nov 29 06:28:07 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 29 06:28:07 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 29 06:28:07 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 29 06:28:07 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 29 06:28:07 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 06:28:07 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 06:28:07 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 29 06:28:07 localhost kernel: node 0 deferred pages initialised in 19ms
Nov 29 06:28:07 localhost kernel: Memory: 7765920K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616264K reserved, 0K cma-reserved)
Nov 29 06:28:07 localhost kernel: devtmpfs: initialized
Nov 29 06:28:07 localhost kernel: x86/mm: Memory block size: 128MB
Nov 29 06:28:07 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 06:28:07 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 06:28:07 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 06:28:07 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 06:28:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 06:28:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 06:28:07 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 06:28:07 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 29 06:28:07 localhost kernel: audit: type=2000 audit(1764397685.062:1): state=initialized audit_enabled=0 res=1
Nov 29 06:28:07 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 06:28:07 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 06:28:07 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 06:28:07 localhost kernel: cpuidle: using governor menu
Nov 29 06:28:07 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 06:28:07 localhost kernel: PCI: Using configuration type 1 for base access
Nov 29 06:28:07 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 29 06:28:07 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 06:28:07 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 06:28:07 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 06:28:07 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 06:28:07 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 06:28:07 localhost kernel: Demotion targets for Node 0: null
Nov 29 06:28:07 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 06:28:07 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 29 06:28:07 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 29 06:28:07 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 06:28:07 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 06:28:07 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 06:28:07 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 06:28:07 localhost kernel: ACPI: Interpreter enabled
Nov 29 06:28:07 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 06:28:07 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 06:28:07 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 06:28:07 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 06:28:07 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 06:28:07 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 06:28:07 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [3] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [4] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [5] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [6] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [7] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [8] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [9] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [10] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [11] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [12] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [13] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [14] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [15] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [16] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [17] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [18] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [19] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [20] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [21] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [22] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [23] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [24] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [25] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [26] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [27] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [28] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [29] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [30] registered
Nov 29 06:28:07 localhost kernel: acpiphp: Slot [31] registered
Nov 29 06:28:07 localhost kernel: PCI host bridge to bus 0000:00
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 06:28:07 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 06:28:07 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 06:28:07 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 06:28:07 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 06:28:07 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 06:28:07 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 06:28:07 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 06:28:07 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 06:28:07 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 06:28:07 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 06:28:07 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 06:28:07 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 06:28:07 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 06:28:07 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 06:28:07 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 06:28:07 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 06:28:07 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 06:28:07 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 06:28:07 localhost kernel: iommu: Default domain type: Translated
Nov 29 06:28:07 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 06:28:07 localhost kernel: SCSI subsystem initialized
Nov 29 06:28:07 localhost kernel: ACPI: bus type USB registered
Nov 29 06:28:07 localhost kernel: usbcore: registered new interface driver usbfs
Nov 29 06:28:07 localhost kernel: usbcore: registered new interface driver hub
Nov 29 06:28:07 localhost kernel: usbcore: registered new device driver usb
Nov 29 06:28:07 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 06:28:07 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 06:28:07 localhost kernel: PTP clock support registered
Nov 29 06:28:07 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 29 06:28:07 localhost kernel: NetLabel: Initializing
Nov 29 06:28:07 localhost kernel: NetLabel:  domain hash size = 128
Nov 29 06:28:07 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 06:28:07 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 06:28:07 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 29 06:28:07 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 29 06:28:07 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 29 06:28:07 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 06:28:07 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 06:28:07 localhost kernel: vgaarb: loaded
Nov 29 06:28:07 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 06:28:07 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 06:28:07 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 06:28:07 localhost kernel: pnp: PnP ACPI init
Nov 29 06:28:07 localhost kernel: pnp 00:03: [dma 2]
Nov 29 06:28:07 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 29 06:28:07 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 06:28:07 localhost kernel: NET: Registered PF_INET protocol family
Nov 29 06:28:07 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 06:28:07 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 06:28:07 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 06:28:07 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 06:28:07 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 06:28:07 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 06:28:07 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 06:28:07 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 06:28:07 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 06:28:07 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 06:28:07 localhost kernel: NET: Registered PF_XDP protocol family
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 06:28:07 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 06:28:07 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 06:28:07 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 06:28:07 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71883 usecs
Nov 29 06:28:07 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 29 06:28:07 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 06:28:07 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 06:28:07 localhost kernel: ACPI: bus type thunderbolt registered
Nov 29 06:28:07 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 29 06:28:07 localhost kernel: Initialise system trusted keyrings
Nov 29 06:28:07 localhost kernel: Key type blacklist registered
Nov 29 06:28:07 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 06:28:07 localhost kernel: zbud: loaded
Nov 29 06:28:07 localhost kernel: integrity: Platform Keyring initialized
Nov 29 06:28:07 localhost kernel: integrity: Machine keyring initialized
Nov 29 06:28:07 localhost kernel: Freeing initrd memory: 85868K
Nov 29 06:28:07 localhost kernel: NET: Registered PF_ALG protocol family
Nov 29 06:28:07 localhost kernel: xor: automatically using best checksumming function   avx
Nov 29 06:28:07 localhost kernel: Key type asymmetric registered
Nov 29 06:28:07 localhost kernel: Asymmetric key parser 'x509' registered
Nov 29 06:28:07 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 06:28:07 localhost kernel: io scheduler mq-deadline registered
Nov 29 06:28:07 localhost kernel: io scheduler kyber registered
Nov 29 06:28:07 localhost kernel: io scheduler bfq registered
Nov 29 06:28:07 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 06:28:07 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 06:28:07 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 06:28:07 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 29 06:28:07 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 06:28:07 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 06:28:07 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 06:28:07 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 06:28:07 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 06:28:07 localhost kernel: Non-volatile memory driver v1.3
Nov 29 06:28:07 localhost kernel: rdac: device handler registered
Nov 29 06:28:07 localhost kernel: hp_sw: device handler registered
Nov 29 06:28:07 localhost kernel: emc: device handler registered
Nov 29 06:28:07 localhost kernel: alua: device handler registered
Nov 29 06:28:07 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 06:28:07 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 06:28:07 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 06:28:07 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 06:28:07 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 06:28:07 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 06:28:07 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 29 06:28:07 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 06:28:07 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 06:28:07 localhost kernel: hub 1-0:1.0: USB hub found
Nov 29 06:28:07 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 29 06:28:07 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 06:28:07 localhost kernel: usbserial: USB Serial support registered for generic
Nov 29 06:28:07 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 06:28:07 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 06:28:07 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 06:28:07 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 06:28:07 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 06:28:07 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 06:28:07 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 06:28:07 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 06:28:07 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:28:06 UTC (1764397686)
Nov 29 06:28:07 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 06:28:07 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 06:28:07 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 06:28:07 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 06:28:07 localhost kernel: usbcore: registered new interface driver usbhid
Nov 29 06:28:07 localhost kernel: usbhid: USB HID core driver
Nov 29 06:28:07 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 29 06:28:07 localhost kernel: Initializing XFRM netlink socket
Nov 29 06:28:07 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 29 06:28:07 localhost kernel: Segment Routing with IPv6
Nov 29 06:28:07 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 29 06:28:07 localhost kernel: mpls_gso: MPLS GSO support
Nov 29 06:28:07 localhost kernel: IPI shorthand broadcast: enabled
Nov 29 06:28:07 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 06:28:07 localhost kernel: AES CTR mode by8 optimization enabled
Nov 29 06:28:07 localhost kernel: sched_clock: Marking stable (1920012360, 162204388)->(2386714222, -304497474)
Nov 29 06:28:07 localhost kernel: registered taskstats version 1
Nov 29 06:28:07 localhost kernel: Loading compiled-in X.509 certificates
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 06:28:07 localhost kernel: Demotion targets for Node 0: null
Nov 29 06:28:07 localhost kernel: page_owner is disabled
Nov 29 06:28:07 localhost kernel: Key type .fscrypt registered
Nov 29 06:28:07 localhost kernel: Key type fscrypt-provisioning registered
Nov 29 06:28:07 localhost kernel: Key type big_key registered
Nov 29 06:28:07 localhost kernel: Key type encrypted registered
Nov 29 06:28:07 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 06:28:07 localhost kernel: Loading compiled-in module X.509 certificates
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 06:28:07 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 29 06:28:07 localhost kernel: ima: No architecture policies found
Nov 29 06:28:07 localhost kernel: evm: Initialising EVM extended attributes:
Nov 29 06:28:07 localhost kernel: evm: security.selinux
Nov 29 06:28:07 localhost kernel: evm: security.SMACK64 (disabled)
Nov 29 06:28:07 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 06:28:07 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 06:28:07 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 06:28:07 localhost kernel: evm: security.apparmor (disabled)
Nov 29 06:28:07 localhost kernel: evm: security.ima
Nov 29 06:28:07 localhost kernel: evm: security.capability
Nov 29 06:28:07 localhost kernel: evm: HMAC attrs: 0x1
Nov 29 06:28:07 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 06:28:07 localhost kernel: Running certificate verification RSA selftest
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 06:28:07 localhost kernel: Running certificate verification ECDSA selftest
Nov 29 06:28:07 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 06:28:07 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 06:28:07 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 06:28:07 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 06:28:07 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 29 06:28:07 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 06:28:07 localhost kernel: clk: Disabling unused clocks
Nov 29 06:28:07 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 06:28:07 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 06:28:07 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 29 06:28:07 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 06:28:07 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 29 06:28:07 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 06:28:07 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 06:28:07 localhost kernel: Run /init as init process
Nov 29 06:28:07 localhost kernel:   with arguments:
Nov 29 06:28:07 localhost kernel:     /init
Nov 29 06:28:07 localhost kernel:   with environment:
Nov 29 06:28:07 localhost kernel:     HOME=/
Nov 29 06:28:07 localhost kernel:     TERM=linux
Nov 29 06:28:07 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 29 06:28:07 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 06:28:07 localhost systemd[1]: Detected virtualization kvm.
Nov 29 06:28:07 localhost systemd[1]: Detected architecture x86-64.
Nov 29 06:28:07 localhost systemd[1]: Running in initrd.
Nov 29 06:28:07 localhost systemd[1]: No hostname configured, using default hostname.
Nov 29 06:28:07 localhost systemd[1]: Hostname set to <localhost>.
Nov 29 06:28:07 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 29 06:28:07 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 29 06:28:07 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 06:28:07 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 06:28:07 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 29 06:28:07 localhost systemd[1]: Reached target Local File Systems.
Nov 29 06:28:07 localhost systemd[1]: Reached target Path Units.
Nov 29 06:28:07 localhost systemd[1]: Reached target Slice Units.
Nov 29 06:28:07 localhost systemd[1]: Reached target Swaps.
Nov 29 06:28:07 localhost systemd[1]: Reached target Timer Units.
Nov 29 06:28:07 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 06:28:07 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 29 06:28:07 localhost systemd[1]: Listening on Journal Socket.
Nov 29 06:28:07 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 06:28:07 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 06:28:07 localhost systemd[1]: Reached target Socket Units.
Nov 29 06:28:07 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 06:28:07 localhost systemd[1]: Starting Journal Service...
Nov 29 06:28:07 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 06:28:07 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 06:28:07 localhost systemd[1]: Starting Create System Users...
Nov 29 06:28:07 localhost systemd[1]: Starting Setup Virtual Console...
Nov 29 06:28:07 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 06:28:07 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 06:28:07 localhost systemd[1]: Finished Create System Users.
Nov 29 06:28:07 localhost systemd-journald[304]: Journal started
Nov 29 06:28:07 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/a4431209b14d4d8f894a1aed0bd2dae7) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:28:07 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 29 06:28:07 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 29 06:28:07 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 06:28:07 localhost systemd[1]: Started Journal Service.
Nov 29 06:28:07 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 06:28:07 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 06:28:07 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 06:28:07 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 06:28:07 localhost systemd[1]: Finished Setup Virtual Console.
Nov 29 06:28:07 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 06:28:07 localhost systemd[1]: Starting dracut cmdline hook...
Nov 29 06:28:07 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 06:28:07 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 06:28:07 localhost systemd[1]: Finished dracut cmdline hook.
Nov 29 06:28:07 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 29 06:28:07 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 06:28:07 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 29 06:28:07 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 06:28:07 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 29 06:28:07 localhost kernel: RPC: Registered udp transport module.
Nov 29 06:28:07 localhost kernel: RPC: Registered tcp transport module.
Nov 29 06:28:07 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 06:28:07 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 06:28:07 localhost rpc.statd[441]: Version 2.5.4 starting
Nov 29 06:28:07 localhost rpc.statd[441]: Initializing NSM state
Nov 29 06:28:07 localhost rpc.idmapd[446]: Setting log level to 0
Nov 29 06:28:07 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 29 06:28:07 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 06:28:07 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 06:28:08 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 06:28:08 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 29 06:28:08 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 29 06:28:08 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 06:28:08 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 29 06:28:08 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:28:08 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 06:28:08 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:28:08 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:28:08 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 06:28:08 localhost systemd[1]: Reached target Network.
Nov 29 06:28:08 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 06:28:08 localhost systemd[1]: Starting dracut initqueue hook...
Nov 29 06:28:08 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 29 06:28:08 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 29 06:28:08 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 06:28:08 localhost systemd[1]: Reached target System Initialization.
Nov 29 06:28:08 localhost systemd[1]: Reached target Basic System.
Nov 29 06:28:08 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 06:28:08 localhost kernel: libata version 3.00 loaded.
Nov 29 06:28:08 localhost kernel:  vda: vda1
Nov 29 06:28:08 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 29 06:28:08 localhost kernel: scsi host0: ata_piix
Nov 29 06:28:08 localhost kernel: scsi host1: ata_piix
Nov 29 06:28:08 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 06:28:08 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 06:28:08 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 06:28:08 localhost systemd[1]: Reached target Initrd Root Device.
Nov 29 06:28:08 localhost kernel: ata1: found unknown device (class 0)
Nov 29 06:28:08 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 06:28:08 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 06:28:08 localhost systemd-udevd[490]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:28:08 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 06:28:08 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 06:28:08 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 06:28:08 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 29 06:28:08 localhost systemd[1]: Finished dracut initqueue hook.
Nov 29 06:28:08 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 06:28:08 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 06:28:08 localhost systemd[1]: Reached target Remote File Systems.
Nov 29 06:28:08 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 29 06:28:08 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 29 06:28:08 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 06:28:08 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 06:28:08 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 06:28:08 localhost systemd[1]: Mounting /sysroot...
Nov 29 06:28:09 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 06:28:09 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 06:28:39 localhost kernel: XFS (vda1): Ending clean mount
Nov 29 06:29:38 localhost systemd[1]: sysroot.mount: Mounting timed out. Terminating.
Nov 29 06:29:41 localhost systemd[1]: sysroot.mount: Mount process exited, code=killed, status=15/TERM
Nov 29 06:29:41 localhost systemd[1]: Mounted /sysroot.
Nov 29 06:29:41 localhost systemd[1]: Reached target Initrd Root File System.
Nov 29 06:29:41 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 06:29:41 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 06:29:41 localhost systemd[1]: Reached target Initrd File Systems.
Nov 29 06:29:41 localhost systemd[1]: Reached target Initrd Default Target.
Nov 29 06:29:41 localhost systemd[1]: Starting dracut mount hook...
Nov 29 06:29:41 localhost systemd[1]: Finished dracut mount hook.
Nov 29 06:29:41 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 06:29:41 localhost rpc.idmapd[446]: exiting on signal 15
Nov 29 06:29:41 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 06:29:41 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 06:29:41 localhost systemd[1]: Stopped target Network.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Timer Units.
Nov 29 06:29:41 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 06:29:41 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Basic System.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Path Units.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Remote File Systems.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Slice Units.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Socket Units.
Nov 29 06:29:41 localhost systemd[1]: Stopped target System Initialization.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Local File Systems.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Swaps.
Nov 29 06:29:41 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped dracut mount hook.
Nov 29 06:29:41 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 29 06:29:41 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 06:29:41 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 06:29:41 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 29 06:29:41 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 29 06:29:41 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 06:29:41 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 06:29:41 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 06:29:41 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 06:29:41 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 29 06:29:41 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 06:29:41 localhost systemd[1]: systemd-udevd.service: Consumed 1.097s CPU time.
Nov 29 06:29:41 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Closed udev Control Socket.
Nov 29 06:29:41 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Closed udev Kernel Socket.
Nov 29 06:29:41 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 29 06:29:41 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 29 06:29:41 localhost systemd[1]: Starting Cleanup udev Database...
Nov 29 06:29:41 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 06:29:41 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 06:29:41 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Stopped Create System Users.
Nov 29 06:29:41 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 06:29:41 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 06:29:41 localhost systemd[1]: Finished Cleanup udev Database.
Nov 29 06:29:41 localhost systemd[1]: Reached target Switch Root.
Nov 29 06:29:41 localhost systemd[1]: Starting Switch Root...
Nov 29 06:29:41 localhost systemd[1]: Switching root.
Nov 29 06:29:41 localhost systemd-journald[304]: Journal stopped
Nov 29 06:29:42 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Nov 29 06:29:42 localhost kernel: audit: type=1404 audit(1764397781.636:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 06:29:42 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:29:42 localhost kernel: SELinux:  policy capability open_perms=1
Nov 29 06:29:42 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:29:42 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:29:42 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:29:42 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:29:42 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:29:42 localhost kernel: audit: type=1403 audit(1764397781.764:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 06:29:42 localhost systemd[1]: Successfully loaded SELinux policy in 131.310ms.
Nov 29 06:29:42 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.537ms.
Nov 29 06:29:42 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 06:29:42 localhost systemd[1]: Detected virtualization kvm.
Nov 29 06:29:42 localhost systemd[1]: Detected architecture x86-64.
Nov 29 06:29:42 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:29:42 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 29 06:29:42 localhost systemd[1]: Stopped Switch Root.
Nov 29 06:29:42 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 06:29:42 localhost systemd[1]: Created slice Slice /system/getty.
Nov 29 06:29:42 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 29 06:29:42 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 29 06:29:42 localhost systemd[1]: Created slice User and Session Slice.
Nov 29 06:29:42 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 06:29:42 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 29 06:29:42 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 06:29:42 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 06:29:42 localhost systemd[1]: Stopped target Switch Root.
Nov 29 06:29:42 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 29 06:29:42 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 29 06:29:42 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 29 06:29:42 localhost systemd[1]: Reached target Path Units.
Nov 29 06:29:42 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 29 06:29:42 localhost systemd[1]: Reached target Slice Units.
Nov 29 06:29:42 localhost systemd[1]: Reached target Swaps.
Nov 29 06:29:42 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 29 06:29:42 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 29 06:29:42 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 29 06:29:42 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 29 06:29:42 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 29 06:29:42 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 06:29:42 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 06:29:42 localhost systemd[1]: Mounting Huge Pages File System...
Nov 29 06:29:42 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 29 06:29:42 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 29 06:29:42 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 29 06:29:42 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 06:29:42 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 06:29:42 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:29:42 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 29 06:29:42 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 29 06:29:42 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 29 06:29:42 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 06:29:42 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 29 06:29:42 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 29 06:29:42 localhost systemd[1]: Stopped Journal Service.
Nov 29 06:29:42 localhost systemd[1]: Starting Journal Service...
Nov 29 06:29:42 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 06:29:42 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 29 06:29:42 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:29:42 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 29 06:29:42 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 06:29:42 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 06:29:42 localhost kernel: fuse: init (API version 7.37)
Nov 29 06:29:42 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 06:29:42 localhost systemd[1]: Mounted Huge Pages File System.
Nov 29 06:29:42 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 06:29:42 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 29 06:29:42 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:42 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 29 06:29:42 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 06:29:42 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:29:42 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:29:42 localhost systemd-journald[681]: Journal started
Nov 29 06:29:42 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:29:42 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 29 06:29:42 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 06:29:42 localhost systemd[1]: Started Journal Service.
Nov 29 06:29:42 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 06:29:42 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 06:29:42 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 06:29:42 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 29 06:29:42 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 06:29:42 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 06:29:42 localhost kernel: ACPI: bus type drm_connector registered
Nov 29 06:29:42 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 06:29:42 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 06:29:42 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 29 06:29:42 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 06:29:42 localhost systemd[1]: Mounting FUSE Control File System...
Nov 29 06:29:42 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 06:29:42 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 29 06:29:42 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 06:29:42 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 06:29:42 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 06:29:42 localhost systemd[1]: Starting Create System Users...
Nov 29 06:29:42 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 06:29:42 localhost systemd-journald[681]: Received client request to flush runtime journal.
Nov 29 06:29:42 localhost systemd[1]: Mounted FUSE Control File System.
Nov 29 06:29:42 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 06:29:42 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 06:29:43 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 06:29:43 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 06:29:43 localhost systemd[1]: Finished Create System Users.
Nov 29 06:29:43 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 06:29:43 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 06:29:43 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 06:29:43 localhost systemd[1]: Reached target Local File Systems.
Nov 29 06:29:43 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 06:29:43 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 06:29:43 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 06:29:43 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 06:29:43 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 06:29:43 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 06:29:43 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 06:29:43 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Nov 29 06:29:43 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 06:29:43 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 06:29:43 localhost systemd[1]: Starting Security Auditing Service...
Nov 29 06:29:43 localhost systemd[1]: Starting RPC Bind...
Nov 29 06:29:43 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 06:29:43 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 06:29:43 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 06:29:43 localhost systemd[1]: Started RPC Bind.
Nov 29 06:29:43 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 06:29:43 localhost augenrules[709]: /sbin/augenrules: No change
Nov 29 06:29:43 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 06:29:43 localhost augenrules[724]: No rules
Nov 29 06:29:43 localhost augenrules[724]: enabled 1
Nov 29 06:29:43 localhost augenrules[724]: failure 1
Nov 29 06:29:43 localhost augenrules[724]: pid 704
Nov 29 06:29:43 localhost augenrules[724]: rate_limit 0
Nov 29 06:29:43 localhost augenrules[724]: backlog_limit 8192
Nov 29 06:29:43 localhost augenrules[724]: lost 0
Nov 29 06:29:43 localhost augenrules[724]: backlog 3
Nov 29 06:29:43 localhost augenrules[724]: backlog_wait_time 60000
Nov 29 06:29:43 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 29 06:29:43 localhost augenrules[724]: enabled 1
Nov 29 06:29:43 localhost augenrules[724]: failure 1
Nov 29 06:29:43 localhost augenrules[724]: pid 704
Nov 29 06:29:43 localhost augenrules[724]: rate_limit 0
Nov 29 06:29:43 localhost augenrules[724]: backlog_limit 8192
Nov 29 06:29:43 localhost augenrules[724]: lost 0
Nov 29 06:29:43 localhost augenrules[724]: backlog 2
Nov 29 06:29:43 localhost augenrules[724]: backlog_wait_time 60000
Nov 29 06:29:43 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 29 06:29:43 localhost augenrules[724]: enabled 1
Nov 29 06:29:43 localhost augenrules[724]: failure 1
Nov 29 06:29:43 localhost augenrules[724]: pid 704
Nov 29 06:29:43 localhost augenrules[724]: rate_limit 0
Nov 29 06:29:43 localhost augenrules[724]: backlog_limit 8192
Nov 29 06:29:43 localhost augenrules[724]: lost 0
Nov 29 06:29:43 localhost augenrules[724]: backlog 0
Nov 29 06:29:43 localhost augenrules[724]: backlog_wait_time 60000
Nov 29 06:29:43 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 29 06:29:43 localhost systemd[1]: Started Security Auditing Service.
Nov 29 06:29:43 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 06:29:43 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 06:29:43 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 29 06:29:43 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 06:29:43 localhost systemd[1]: Starting Update is Completed...
Nov 29 06:29:43 localhost systemd[1]: Finished Update is Completed.
Nov 29 06:29:43 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 06:29:43 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 06:29:43 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 06:29:43 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 06:29:44 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 06:29:44 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 06:29:44 localhost systemd[1]: Reached target System Initialization.
Nov 29 06:29:44 localhost systemd[1]: Started dnf makecache --timer.
Nov 29 06:29:44 localhost systemd[1]: Started Daily rotation of log files.
Nov 29 06:29:44 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 06:29:44 localhost systemd[1]: Reached target Timer Units.
Nov 29 06:29:44 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 06:29:44 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 06:29:44 localhost systemd[1]: Reached target Socket Units.
Nov 29 06:29:44 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:29:44 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 29 06:29:44 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:29:44 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 29 06:29:44 localhost systemd[1]: Reached target Basic System.
Nov 29 06:29:44 localhost dbus-broker-lau[774]: Ready
Nov 29 06:29:44 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 06:29:44 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 06:29:44 localhost kernel: Console: switching to colour dummy device 80x25
Nov 29 06:29:44 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 06:29:44 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 06:29:44 localhost kernel: [drm] features: -context_init
Nov 29 06:29:44 localhost systemd[1]: Starting NTP client/server...
Nov 29 06:29:44 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 06:29:44 localhost kernel: [drm] number of scanouts: 1
Nov 29 06:29:44 localhost kernel: [drm] number of cap sets: 0
Nov 29 06:29:44 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 06:29:44 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 06:29:44 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 06:29:44 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 06:29:44 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 06:29:44 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 29 06:29:44 localhost chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 06:29:44 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 06:29:44 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 06:29:44 localhost chronyd[795]: Loaded 0 symmetric keys
Nov 29 06:29:44 localhost chronyd[795]: Using right/UTC timezone to obtain leap second data
Nov 29 06:29:44 localhost chronyd[795]: Loaded seccomp filter (level 2)
Nov 29 06:29:44 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 06:29:44 localhost systemd[1]: Started irqbalance daemon.
Nov 29 06:29:44 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 06:29:44 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:29:44 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:29:44 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:29:44 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 29 06:29:44 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 06:29:44 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 29 06:29:44 localhost systemd[1]: Starting User Login Management...
Nov 29 06:29:44 localhost systemd[1]: Started NTP client/server.
Nov 29 06:29:44 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 06:29:44 localhost kernel: kvm_amd: TSC scaling supported
Nov 29 06:29:44 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 29 06:29:44 localhost kernel: kvm_amd: Nested Paging enabled
Nov 29 06:29:44 localhost kernel: kvm_amd: LBR virtualization supported
Nov 29 06:29:44 localhost systemd-logind[807]: New seat seat0.
Nov 29 06:29:44 localhost systemd-logind[807]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 06:29:44 localhost systemd-logind[807]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 06:29:44 localhost systemd[1]: Started User Login Management.
Nov 29 06:29:44 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 06:29:44 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 06:29:44 localhost iptables.init[797]: iptables: Applying firewall rules: [  OK  ]
Nov 29 06:29:44 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 06:29:44 localhost cloud-init[842]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:29:44 +0000. Up 99.96 seconds.
Nov 29 06:29:44 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 29 06:29:44 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 29 06:29:44 localhost systemd[1]: run-cloud\x2dinit-tmp-tmptwgjulv3.mount: Deactivated successfully.
Nov 29 06:29:44 localhost systemd[1]: Starting Hostname Service...
Nov 29 06:29:44 localhost systemd[1]: Started Hostname Service.
Nov 29 06:29:44 np0005539576.novalocal systemd-hostnamed[856]: Hostname set to <np0005539576.novalocal> (static)
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Reached target Preparation for Network.
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Starting Network Manager...
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.1766] NetworkManager (version 1.54.1-1.el9) is starting... (boot:d7e69dea-8152-484d-8a65-eb3d0b2e01e5)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.1771] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.1842] manager[0x555b07785080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.1882] hostname: hostname: using hostnamed
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.1882] hostname: static hostname changed from (none) to "np0005539576.novalocal"
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.1887] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.1998] manager[0x555b07785080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2000] manager[0x555b07785080]: rfkill: WWAN hardware radio set enabled
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2116] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2117] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2119] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2120] manager: Networking is enabled by state file
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2122] settings: Loaded settings plugin: keyfile (internal)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2134] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2154] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2166] dhcp: init: Using DHCP client 'internal'
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2168] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2182] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2191] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2199] device (lo): Activation: starting connection 'lo' (e8399e84-1c3b-44af-bf17-12b484068834)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2209] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2213] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Started Network Manager.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2286] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Reached target Network.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2290] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2292] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2293] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2295] device (eth0): carrier: link connected
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2297] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2302] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2310] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2313] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2314] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2315] manager: NetworkManager state is now CONNECTING
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2317] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2322] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2324] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2442] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2446] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2451] device (lo): Activation: successful, device activated.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2458] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2466] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2490] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Reached target NFS client services.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2512] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2514] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2517] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2519] device (eth0): Activation: successful, device activated.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2523] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 06:29:45 np0005539576.novalocal NetworkManager[860]: <info>  [1764397785.2526] manager: startup complete
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Reached target Remote File Systems.
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 06:29:45 np0005539576.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:29:45 +0000. Up 101.05 seconds.
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |  eth0  | True |         38.102.83.74         | 255.255.255.0 | global | fa:16:3e:33:c6:22 |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe33:c622/64 |       .       |  link  | fa:16:3e:33:c6:22 |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 06:29:45 np0005539576.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 06:29:50 np0005539576.novalocal chronyd[795]: Selected source 158.69.193.108 (2.centos.pool.ntp.org)
Nov 29 06:29:50 np0005539576.novalocal chronyd[795]: System clock TAI offset set to 37 seconds
Nov 29 06:29:50 np0005539576.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Nov 29 06:29:50 np0005539576.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 29 06:29:51 np0005539576.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Nov 29 06:29:51 np0005539576.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Nov 29 06:29:51 np0005539576.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Nov 29 06:29:51 np0005539576.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Generating public/private rsa key pair.
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: The key fingerprint is:
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: SHA256:r76hopve4egyyGoPv+kpaE1fXgF2pJbLMOHP32RvQ90 root@np0005539576.novalocal
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: The key's randomart image is:
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: +---[RSA 3072]----+
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |     .  ..       |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |    . .oo.       |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |     +.+o        |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |      B ..    . .|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |       =S .o . .E|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |   .   ..o+ o    |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |+.o o o o... +   |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |==o*+o o o  . .  |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |==@X+...+.       |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: +----[SHA256]-----+
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: The key fingerprint is:
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: SHA256:FOEGy4Vcbz4IWVUQy3Bxd90RQ4JSvvRdFKnRGx7VPd8 root@np0005539576.novalocal
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: The key's randomart image is:
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: +---[ECDSA 256]---+
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |     ..oBoB*+.+*#|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |     .oB B.+ o.O*|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |      = + *o  + O|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |       + +. o..oE|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |        S o. . . |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |           .     |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |                 |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |                 |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |                 |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: +----[SHA256]-----+
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: The key fingerprint is:
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: SHA256:40tvmkr1UiG4MclZfkgRbALvX3NXxk8iUML07FqpSWY root@np0005539576.novalocal
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: The key's randomart image is:
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: +--[ED25519 256]--+
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |    .. .==+..    |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |     o.Bo..=   . |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |      Oo+ o + . =|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |     . + o o o =.|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |      o S E = . .|
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |       + O B .   |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |      . = =      |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |     . . =.      |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: |      ..+o.      |
Nov 29 06:29:53 np0005539576.novalocal cloud-init[923]: +----[SHA256]-----+
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Reached target Network is Online.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Starting System Logging Service...
Nov 29 06:29:53 np0005539576.novalocal sm-notify[1006]: Version 2.5.4 starting
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Starting Permit User Sessions...
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Finished Permit User Sessions.
Nov 29 06:29:53 np0005539576.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Nov 29 06:29:53 np0005539576.novalocal sshd[1008]: Server listening on :: port 22.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Started Command Scheduler.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Started Getty on tty1.
Nov 29 06:29:53 np0005539576.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Nov 29 06:29:53 np0005539576.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 29 06:29:53 np0005539576.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 70% if used.)
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 29 06:29:53 np0005539576.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Reached target Login Prompts.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 29 06:29:53 np0005539576.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Nov 29 06:29:53 np0005539576.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Started System Logging Service.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Reached target Multi-User System.
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1021]: Unable to negotiate with 38.102.83.114 port 53398: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 06:29:53 np0005539576.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1041]: Unable to negotiate with 38.102.83.114 port 53418: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1051]: Unable to negotiate with 38.102.83.114 port 53422: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 29 06:29:53 np0005539576.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1012]: Connection closed by 38.102.83.114 port 53394 [preauth]
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1079]: Unable to negotiate with 38.102.83.114 port 53456: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 29 06:29:53 np0005539576.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Nov 29 06:29:53 np0005539576.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1085]: Unable to negotiate with 38.102.83.114 port 53470: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1031]: Connection closed by 38.102.83.114 port 53406 [preauth]
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1060]: Connection closed by 38.102.83.114 port 53430 [preauth]
Nov 29 06:29:53 np0005539576.novalocal sshd-session[1074]: Connection closed by 38.102.83.114 port 53446 [preauth]
Nov 29 06:29:53 np0005539576.novalocal cloud-init[1109]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:29:53 +0000. Up 109.09 seconds.
Nov 29 06:29:54 np0005539576.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 06:29:54 np0005539576.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 06:29:54 np0005539576.novalocal dracut[1287]: dracut-057-102.git20250818.el9
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1303]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:29:54 +0000. Up 109.88 seconds.
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1305]: #############################################################
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1306]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: IRQ 25 affinity is now unmanaged
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: IRQ 31 affinity is now unmanaged
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: IRQ 28 affinity is now unmanaged
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: IRQ 32 affinity is now unmanaged
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: IRQ 30 affinity is now unmanaged
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 06:29:54 np0005539576.novalocal irqbalance[804]: IRQ 29 affinity is now unmanaged
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1308]: 256 SHA256:FOEGy4Vcbz4IWVUQy3Bxd90RQ4JSvvRdFKnRGx7VPd8 root@np0005539576.novalocal (ECDSA)
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1313]: 256 SHA256:40tvmkr1UiG4MclZfkgRbALvX3NXxk8iUML07FqpSWY root@np0005539576.novalocal (ED25519)
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1321]: 3072 SHA256:r76hopve4egyyGoPv+kpaE1fXgF2pJbLMOHP32RvQ90 root@np0005539576.novalocal (RSA)
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1322]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1324]: #############################################################
Nov 29 06:29:54 np0005539576.novalocal dracut[1289]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 06:29:54 np0005539576.novalocal cloud-init[1303]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:29:54 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 110.08 seconds
Nov 29 06:29:54 np0005539576.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 06:29:54 np0005539576.novalocal systemd[1]: Reached target Cloud-init target.
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 06:29:55 np0005539576.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: memstrack is not available
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 06:29:55 np0005539576.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: memstrack is not available
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
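
dracut decides whether to include an optional module by probing for the binaries that module needs, which is exactly what the "command ... could not be found" messages above record. A minimal sketch of that availability check (not dracut's actual code), using module-to-command pairs taken from the log lines above:

    #!/usr/bin/env python3
    # Sketch only: reproduce the "command could not be found" probing
    # logged above with shutil.which(). The mapping is copied from the
    # dracut messages in this log, not from dracut's sources.
    import shutil

    required = {
        "busybox": ["busybox"],
        "rngd": ["rngd"],
        "iscsi": ["iscsi-iname", "iscsiadm", "iscsid"],
        "nvmf": ["nvme"],
        "lvm": ["lvm"],
        "mdraid": ["mdadm"],
    }

    for module, commands in required.items():
        missing = [c for c in commands if shutil.which(c) is None]
        if missing:
            print(f"dracut module '{module}' will not be installed, "
                  f"because command '{missing[0]}' could not be found!")
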
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: *** Including module: systemd ***
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: *** Including module: fips ***
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: *** Including module: systemd-initrd ***
Nov 29 06:29:56 np0005539576.novalocal dracut[1289]: *** Including module: i18n ***
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]: *** Including module: drm ***
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]: *** Including module: prefixdevname ***
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]: *** Including module: kernel-modules ***
Nov 29 06:29:57 np0005539576.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]: *** Including module: kernel-modules-extra ***
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 29 06:29:57 np0005539576.novalocal dracut[1289]: *** Including module: qemu ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: fstab-sys ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: rootfs-block ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: terminfo ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: udev-rules ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: Skipping udev rule: 91-permissions.rules
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: virtiofs ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: dracut-systemd ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: usrmount ***
Nov 29 06:29:58 np0005539576.novalocal dracut[1289]: *** Including module: base ***
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]: *** Including module: fs-lib ***
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]: *** Including module: kdumpbase ***
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:   microcode_ctl module: mangling fw_dir
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
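
The microcode_ctl dracut hook iterates over per-CPU "caveat" data directories and, since none of them applied here, leaves fw_dir at its default. A hedged sketch that just enumerates those directories (the base path is taken from the log; this assumes the microcode_ctl package is installed):

    #!/usr/bin/env python3
    # Sketch: list the caveat configuration directories that the
    # microcode_ctl hook walks through in the log lines above.
    import os

    base = "/usr/share/microcode_ctl/ucode_with_caveats"
    if os.path.isdir(base):
        for entry in sorted(os.listdir(base)):
            print("caveat configuration:", entry)
    else:
        print(base, "is not present on this host")
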
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]: *** Including module: openssl ***
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]: *** Including module: shutdown ***
Nov 29 06:29:59 np0005539576.novalocal dracut[1289]: *** Including module: squash ***
Nov 29 06:30:00 np0005539576.novalocal dracut[1289]: *** Including modules done ***
Nov 29 06:30:00 np0005539576.novalocal dracut[1289]: *** Installing kernel module dependencies ***
Nov 29 06:30:00 np0005539576.novalocal dracut[1289]: *** Installing kernel module dependencies done ***
Nov 29 06:30:00 np0005539576.novalocal dracut[1289]: *** Resolving executable dependencies ***
Nov 29 06:30:02 np0005539576.novalocal dracut[1289]: *** Resolving executable dependencies done ***
Nov 29 06:30:02 np0005539576.novalocal dracut[1289]: *** Generating early-microcode cpio image ***
Nov 29 06:30:02 np0005539576.novalocal dracut[1289]: *** Store current command line parameters ***
Nov 29 06:30:02 np0005539576.novalocal dracut[1289]: Stored kernel commandline:
Nov 29 06:30:02 np0005539576.novalocal dracut[1289]: No dracut internal kernel commandline stored in the initramfs
Nov 29 06:30:02 np0005539576.novalocal dracut[1289]: *** Install squash loader ***
Nov 29 06:30:03 np0005539576.novalocal dracut[1289]: *** Squashing the files inside the initramfs ***
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: *** Squashing the files inside the initramfs done ***
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: *** Hardlinking files ***
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: Mode:           real
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: Files:          50
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: Linked:         0 files
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: Compared:       0 xattrs
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: Compared:       0 files
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: Saved:          0 B
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: Duration:       0.000403 seconds
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: *** Hardlinking files done ***
Nov 29 06:30:04 np0005539576.novalocal dracut[1289]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
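
The finished kdump initramfs can be inspected without booting it. Assuming the dracut-provided lsinitrd tool is available, a short sketch (image path copied from the line above):

    #!/usr/bin/env python3
    # Sketch: list the contents of the freshly built kdump initramfs
    # with lsinitrd (shipped as part of dracut).
    import subprocess

    image = "/boot/initramfs-5.14.0-642.el9.x86_64kdump.img"
    subprocess.run(["lsinitrd", image], check=True)
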
Nov 29 06:30:05 np0005539576.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Nov 29 06:30:05 np0005539576.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Nov 29 06:30:05 np0005539576.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 29 06:30:05 np0005539576.novalocal systemd[1]: Startup finished in 2.371s (kernel) + 1min 34.637s (initrd) + 23.929s (userspace) = 2min 938ms.
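
The three phases in the "Startup finished" line should sum to the reported total; checking the arithmetic from the values above (the 1 ms gap is rounding of the individually displayed components):

    #!/usr/bin/env python3
    # Quick check of the "Startup finished" arithmetic logged above.
    kernel, initrd, userspace = 2.371, 94.637, 23.929   # seconds; 1min 34.637s = 94.637s
    total = kernel + initrd + userspace
    print(f"sum of phases: {total:.3f}s")               # 120.937s
    print("reported total: 2min 938ms = 120.938s")      # off by 1 ms due to rounding
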
Nov 29 06:30:08 np0005539576.novalocal sshd-session[4297]: Accepted publickey for zuul from 38.102.83.114 port 36446 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 29 06:30:09 np0005539576.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 29 06:30:09 np0005539576.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 06:30:09 np0005539576.novalocal systemd-logind[807]: New session 1 of user zuul.
Nov 29 06:30:09 np0005539576.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 06:30:09 np0005539576.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Queued start job for default target Main User Target.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Created slice User Application Slice.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Reached target Paths.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Reached target Timers.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Starting D-Bus User Message Bus Socket...
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Starting Create User's Volatile Files and Directories...
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Listening on D-Bus User Message Bus Socket.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Finished Create User's Volatile Files and Directories.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Reached target Sockets.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Reached target Basic System.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Reached target Main User Target.
Nov 29 06:30:09 np0005539576.novalocal systemd[4301]: Startup finished in 108ms.
Nov 29 06:30:09 np0005539576.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 29 06:30:09 np0005539576.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 29 06:30:09 np0005539576.novalocal sshd-session[4297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:30:09 np0005539576.novalocal python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:30:12 np0005539576.novalocal python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:30:15 np0005539576.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:30:18 np0005539576.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:30:19 np0005539576.novalocal python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 06:30:21 np0005539576.novalocal python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCiw3GJCn9XIWohlKn9HaYdQgNCD6RxExEh/KvozvC2PGoZ/9TaTIysYupGlQdAl4TXbis/+RD59we/0ydfTiHVi26fpEZCiDI62M6F19gTkqBXN0mbweGaI9xnQ+c90/iQU7cr3xeF4OZL/kTsthpj6TArBN0nPyrI22s0Fcb+in534OT3gdq6XgN9H5oTmEFo0uOhXCJm1aL2UKtRty77nKKc1ruh7c03n7Q1OztIXU0twXA8f7fa8kwFnw7waKrT4K3rwVksFzcq4HpPJsv3lfKyuF/RuAAA2Avu68VXXXbq6NVQW1HkVwkPW0GCU/cEdcdaGs2yOd5Z4gfr0F6pzFm9vNXq5KAvgmoM4l1Mq9DNr7o6JzqpQoRaexFDJ7rjgw9p90SqEi35zDeofOPu24dXx6rCLUjVjQSqX9zvHOHk/zEzj2D3Uk4uxBai2rrxv7nmOiww1e4hdOV2JrlB9tlSYCXq9p/poaSPdn2YpwC5S1PVOFWfG1exgLBHOU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:21 np0005539576.novalocal python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:22 np0005539576.novalocal python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:22 np0005539576.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397821.9127796-207-80681242713975/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=fbdc68adc7be440fa5ea822a7b07029b_id_rsa follow=False checksum=bf6257616ebff3d0f7d29944e130bac328de9abd backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:23 np0005539576.novalocal python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:23 np0005539576.novalocal python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397822.8683622-240-212910220799546/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=fbdc68adc7be440fa5ea822a7b07029b_id_rsa.pub follow=False checksum=a490accc2695f3d995656fc96dd6e97c5dcd05af backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
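
Note that the ansible-* journal entries record file modes as decimal integers. Converting the values seen in this log back to octal gives the familiar permissions: mode=448 is 0o700 (~/.ssh), 384 is 0o600 (id_rsa), 420 is 0o644 (id_rsa.pub), and the 493, 511, and 288 appearing later are 0o755, 0o777, and 0o440:

    #!/usr/bin/env python3
    # The 'mode' values in the ansible log lines are decimal ints;
    # convert them back to the octal permissions an admin expects.
    for decimal in (448, 384, 420, 493, 511, 288):
        print(decimal, "->", oct(decimal))
    # 448 -> 0o700, 384 -> 0o600, 420 -> 0o644,
    # 493 -> 0o755, 511 -> 0o777, 288 -> 0o440
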
Nov 29 06:30:24 np0005539576.novalocal python3[4973]: ansible-ping Invoked with data=pong
Nov 29 06:30:25 np0005539576.novalocal python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:30:27 np0005539576.novalocal python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 06:30:28 np0005539576.novalocal python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:28 np0005539576.novalocal python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:28 np0005539576.novalocal python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:29 np0005539576.novalocal python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:29 np0005539576.novalocal python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:29 np0005539576.novalocal python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:31 np0005539576.novalocal sudo[5231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shpzozxfhvzpiernyowtxdocyzbvurgf ; /usr/bin/python3'
Nov 29 06:30:31 np0005539576.novalocal sudo[5231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:31 np0005539576.novalocal python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:31 np0005539576.novalocal sudo[5231]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:31 np0005539576.novalocal sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjccbujroyezzypingsiyftmlsufzbno ; /usr/bin/python3'
Nov 29 06:30:31 np0005539576.novalocal sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:31 np0005539576.novalocal python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:31 np0005539576.novalocal sudo[5309]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:32 np0005539576.novalocal sudo[5382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylrtvstnlcmswrwtcyqcpbhkmiqhkqkl ; /usr/bin/python3'
Nov 29 06:30:32 np0005539576.novalocal sudo[5382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:32 np0005539576.novalocal python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397831.3571804-21-278358901088532/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:32 np0005539576.novalocal sudo[5382]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:32 np0005539576.novalocal python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:33 np0005539576.novalocal python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:33 np0005539576.novalocal python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:33 np0005539576.novalocal python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:33 np0005539576.novalocal python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:34 np0005539576.novalocal python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:34 np0005539576.novalocal python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:34 np0005539576.novalocal python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:35 np0005539576.novalocal python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:35 np0005539576.novalocal python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:35 np0005539576.novalocal python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:35 np0005539576.novalocal python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:36 np0005539576.novalocal python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:36 np0005539576.novalocal python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:36 np0005539576.novalocal python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:37 np0005539576.novalocal python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:37 np0005539576.novalocal python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:37 np0005539576.novalocal python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:37 np0005539576.novalocal python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:38 np0005539576.novalocal python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:38 np0005539576.novalocal python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:38 np0005539576.novalocal python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:39 np0005539576.novalocal python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:39 np0005539576.novalocal python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:39 np0005539576.novalocal python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:30:40 np0005539576.novalocal python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
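
Each ansible-authorized_key invocation above idempotently ensures that one public key line is present in ~zuul/.ssh/authorized_keys. A rough standalone equivalent (not the module's real implementation; the example key is hypothetical, the real ones are in the log above):

    #!/usr/bin/env python3
    # Rough stand-in for what each authorized_key call above ensures:
    # the given public key line appears exactly once in authorized_keys.
    from pathlib import Path

    def ensure_key(user_home: str, key_line: str) -> None:
        auth = Path(user_home) / ".ssh" / "authorized_keys"
        auth.parent.mkdir(mode=0o700, exist_ok=True)
        existing = auth.read_text().splitlines() if auth.exists() else []
        if key_line not in existing:
            with auth.open("a") as f:
                f.write(key_line + "\n")

    ensure_key("/home/zuul", "ssh-ed25519 AAAA... someone@example.com")
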
Nov 29 06:30:42 np0005539576.novalocal sudo[6056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-retemcymzdelllktqfweezlyrwjzixfa ; /usr/bin/python3'
Nov 29 06:30:42 np0005539576.novalocal sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:42 np0005539576.novalocal python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 06:30:42 np0005539576.novalocal systemd[1]: Starting Time & Date Service...
Nov 29 06:30:43 np0005539576.novalocal systemd[1]: Started Time & Date Service.
Nov 29 06:30:43 np0005539576.novalocal systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Nov 29 06:30:43 np0005539576.novalocal sudo[6056]: pam_unix(sudo:session): session closed for user root
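
The community.general.timezone module talks to systemd-timedated over D-Bus, which is why the Time & Date Service unit starts just before the zone changes above. The same change can be scripted directly through timedatectl; a sketch:

    #!/usr/bin/env python3
    # Sketch of the timezone change performed above, driven directly
    # through timedatectl (which fronts systemd-timedated).
    import subprocess

    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
    subprocess.run(["timedatectl", "show", "--property=Timezone"], check=True)
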
Nov 29 06:30:43 np0005539576.novalocal sudo[6087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcuiiosfvxjreqykdxsuhgjazzhnweme ; /usr/bin/python3'
Nov 29 06:30:43 np0005539576.novalocal sudo[6087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:43 np0005539576.novalocal python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:43 np0005539576.novalocal sudo[6087]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:44 np0005539576.novalocal python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:44 np0005539576.novalocal python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397843.8056571-153-200581548222794/source _original_basename=tmp7_smlhbf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:44 np0005539576.novalocal python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:45 np0005539576.novalocal python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397844.6555204-183-164125351936452/source _original_basename=tmpfa61vwni follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:46 np0005539576.novalocal sudo[6507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulidxyyooxbkkszgpopzmsyeagavzqdu ; /usr/bin/python3'
Nov 29 06:30:46 np0005539576.novalocal sudo[6507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:46 np0005539576.novalocal python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:46 np0005539576.novalocal sudo[6507]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:46 np0005539576.novalocal sudo[6580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgdijztnhrmsijgosbnwswyyfhxzujzi ; /usr/bin/python3'
Nov 29 06:30:46 np0005539576.novalocal sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:46 np0005539576.novalocal python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397845.9253817-231-75902308580441/source _original_basename=tmpj1572uge follow=False checksum=6c462e10cf6b935fb22f4386c31d576dcf4d4133 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:46 np0005539576.novalocal sudo[6580]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:47 np0005539576.novalocal python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:47 np0005539576.novalocal python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:47 np0005539576.novalocal sudo[6734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxulwsyntxhnenchpexhgjevhshmmzkq ; /usr/bin/python3'
Nov 29 06:30:47 np0005539576.novalocal sudo[6734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:47 np0005539576.novalocal python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:30:47 np0005539576.novalocal sudo[6734]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:48 np0005539576.novalocal sudo[6807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdzeiockdycacdrxpafblrhhmzmsyapx ; /usr/bin/python3'
Nov 29 06:30:48 np0005539576.novalocal sudo[6807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:48 np0005539576.novalocal python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397847.5897634-273-167535673580980/source _original_basename=tmpjv5c16qe follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:48 np0005539576.novalocal sudo[6807]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:49 np0005539576.novalocal sudo[6858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucyomihxrxyouztvetqovqzphsrxnhuy ; /usr/bin/python3'
Nov 29 06:30:49 np0005539576.novalocal sudo[6858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:49 np0005539576.novalocal python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-51a5-c718-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:49 np0005539576.novalocal sudo[6858]: pam_unix(sudo:session): session closed for user root
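
Dropping a file into /etc/sudoers.d is only safe if the result still parses, hence the visudo -c run above: it validates /etc/sudoers plus everything under /etc/sudoers.d. A sketch of the same validation:

    #!/usr/bin/env python3
    # Sketch of the sudoers syntax check run above.
    import subprocess

    result = subprocess.run(["/usr/sbin/visudo", "-c"],
                            capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        raise SystemExit("sudoers validation failed:\n" + result.stderr)
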
Nov 29 06:30:49 np0005539576.novalocal python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-51a5-c718-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
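
The task above simply captures the remote environment by running env under /bin/bash. A minimal equivalent:

    #!/usr/bin/env python3
    # Minimal equivalent of the task above: capture the environment
    # as seen by a /bin/bash shell.
    import subprocess

    out = subprocess.run(["/bin/bash", "-c", "env"],
                         capture_output=True, text=True, check=True).stdout
    print(out)
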
Nov 29 06:30:52 np0005539576.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:56 np0005539576.novalocal chronyd[795]: Selected source 23.133.168.245 (2.centos.pool.ntp.org)
Nov 29 06:31:13 np0005539576.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 06:31:18 np0005539576.novalocal sshd-session[6917]: Connection closed by 66.132.153.136 port 3264 [preauth]
Nov 29 06:31:19 np0005539576.novalocal sudo[6944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobsyfwomhidbztflnkezkitzsynfqoz ; /usr/bin/python3'
Nov 29 06:31:19 np0005539576.novalocal sudo[6944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:19 np0005539576.novalocal python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:19 np0005539576.novalocal sudo[6944]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 06:31:56 np0005539576.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 06:31:56 np0005539576.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1752] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 06:31:56 np0005539576.novalocal systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1932] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1952] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1954] device (eth1): carrier: link connected
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1956] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1960] policy: auto-activating connection 'Wired connection 1' (98e8f3fe-9552-33be-9e1b-aa79ca008d3d)
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1963] device (eth1): Activation: starting connection 'Wired connection 1' (98e8f3fe-9552-33be-9e1b-aa79ca008d3d)
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1964] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1966] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1969] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:31:56 np0005539576.novalocal NetworkManager[860]: <info>  [1764397916.1973] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:31:56 np0005539576.novalocal python3[6974]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-8400-53cb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
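
The playbook uses `ip -j link` because the -j flag makes iproute2 emit JSON, avoiding screen-scraping of the human-readable output. A sketch that parses that JSON and lists each interface with its operational state:

    #!/usr/bin/env python3
    # Sketch: parse the JSON emitted by `ip -j link` (invoked above).
    import json
    import subprocess

    links = json.loads(subprocess.run(["ip", "-j", "link"],
                                      capture_output=True, text=True,
                                      check=True).stdout)
    for link in links:
        print(link.get("ifname"), link.get("operstate"))
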
Nov 29 06:31:59 np0005539576.novalocal sshd-session[6978]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 06:31:59 np0005539576.novalocal sshd-session[6978]: Connection reset by 45.140.17.97 port 47427
Nov 29 06:32:06 np0005539576.novalocal sudo[7054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrdyopjzfzqjbjfvwzishifcichymlzf ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:32:06 np0005539576.novalocal sudo[7054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:06 np0005539576.novalocal python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:32:06 np0005539576.novalocal sudo[7054]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:07 np0005539576.novalocal sudo[7127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfkfpverwrvmnqbamtxynpivpaovctla ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:32:07 np0005539576.novalocal sudo[7127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:07 np0005539576.novalocal python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397926.5183847-102-195203815373888/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=be102d872d611f82d8bcb21eebb3e82fd2371f11 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:07 np0005539576.novalocal sudo[7127]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:07 np0005539576.novalocal sudo[7177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqmlayzozfhlzzdcuskjjkztrsngxxrw ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:32:07 np0005539576.novalocal sudo[7177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:08 np0005539576.novalocal python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
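
The ansible.builtin.systemd call above corresponds to `systemctl restart NetworkManager`, which explains the stop/start sequence in the lines that follow: the daemon catches SIGTERM, cancels its DHCP transactions, exits, and is started again, after which it re-assumes both interfaces. A hedged sketch of the restart plus a quick post-restart device check via nmcli:

    #!/usr/bin/env python3
    # Sketch of the restart requested above, plus a terse nmcli check
    # of device states once NetworkManager is back up.
    import subprocess

    subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
    subprocess.run(["nmcli", "-t", "-f", "DEVICE,STATE", "device"], check=True)
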
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1658] caught SIGTERM, shutting down normally.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Stopping Network Manager...
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1666] dhcp4 (eth0): canceled DHCP transaction
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1666] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1667] dhcp4 (eth0): state changed no lease
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1669] manager: NetworkManager state is now CONNECTING
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1740] dhcp4 (eth1): canceled DHCP transaction
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1741] dhcp4 (eth1): state changed no lease
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[860]: <info>  [1764397928.1874] exiting (success)
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Stopped Network Manager.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: NetworkManager.service: Consumed 1.164s CPU time, 9.9M memory peak.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Starting Network Manager...
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.2446] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:d7e69dea-8152-484d-8a65-eb3d0b2e01e5)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.2449] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.2519] manager[0x5591d3156070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Starting Hostname Service...
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Started Hostname Service.
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3344] hostname: hostname: using hostnamed
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3348] hostname: static hostname changed from (none) to "np0005539576.novalocal"
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3356] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3365] manager[0x5591d3156070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3365] manager[0x5591d3156070]: rfkill: WWAN hardware radio set enabled
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3411] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3411] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3412] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3413] manager: Networking is enabled by state file
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3418] settings: Loaded settings plugin: keyfile (internal)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3425] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3477] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3492] dhcp: init: Using DHCP client 'internal'
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3497] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3505] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3514] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3527] device (lo): Activation: starting connection 'lo' (e8399e84-1c3b-44af-bf17-12b484068834)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3540] device (eth0): carrier: link connected
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3548] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3556] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3556] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3568] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3580] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3588] device (eth1): carrier: link connected
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3595] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3606] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (98e8f3fe-9552-33be-9e1b-aa79ca008d3d) (indicated)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3606] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3614] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3626] device (eth1): Activation: starting connection 'Wired connection 1' (98e8f3fe-9552-33be-9e1b-aa79ca008d3d)
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Started Network Manager.
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3636] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3644] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3648] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3662] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3666] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3670] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3673] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3676] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3681] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3688] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3693] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3702] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3704] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3717] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3722] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3728] device (lo): Activation: successful, device activated.
Nov 29 06:32:08 np0005539576.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3931] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.3940] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 06:32:08 np0005539576.novalocal sudo[7177]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.4058] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.4081] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.4082] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.4086] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.4089] device (eth0): Activation: successful, device activated.
Nov 29 06:32:08 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397928.4094] manager: NetworkManager state is now CONNECTED_GLOBAL
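The lo/eth0 entries above step through NetworkManager's per-device activation ladder (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated); lo is handled as 'external' (NM only mirrors its state), while eth0 and eth1 are 'assume' (NM adopts the pre-existing kernel configuration). A sketch for watching the same transitions interactively, assuming nmcli is available:

    # stream device and connection state changes as NetworkManager emits them
    nmcli monitor
    # or poll one device's current activation state
    nmcli -g GENERAL.STATE device show eth0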
Nov 29 06:32:08 np0005539576.novalocal python3[7264]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-8400-53cb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:32:18 np0005539576.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:32:38 np0005539576.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:32:38 np0005539576.novalocal systemd[4301]: Starting Mark boot as successful...
Nov 29 06:32:38 np0005539576.novalocal systemd[4301]: Finished Mark boot as successful.
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6262] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:32:53 np0005539576.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:32:53 np0005539576.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6515] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6517] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6528] device (eth1): Activation: successful, device activated.
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6535] manager: startup complete
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6537] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <warn>  [1764397973.6543] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6550] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6720] dhcp4 (eth1): canceled DHCP transaction
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6720] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6721] dhcp4 (eth1): state changed no lease
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6736] policy: auto-activating connection 'ci-private-network' (cb70f425-ae10-57f4-84a2-262aa56d50f2)
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6741] device (eth1): Activation: starting connection 'ci-private-network' (cb70f425-ae10-57f4-84a2-262aa56d50f2)
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6742] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6744] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6750] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6757] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6979] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6982] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:32:53 np0005539576.novalocal NetworkManager[7189]: <info>  [1764397973.6990] device (eth1): Activation: successful, device activated.
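eth1's assumed profile failed with ip-config-unavailable once the 45-second DHCP transaction produced no lease, and policy fell back to auto-activating 'ci-private-network'. A hedged sketch of how such a static fallback profile could be created; the address and priority below are illustrative placeholders, not values taken from this log:

    # hypothetical static fallback for eth1; all values are placeholders
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.168.122.100/24 \
        connection.autoconnect yes connection.autoconnect-priority 10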
Nov 29 06:33:03 np0005539576.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:33:07 np0005539576.novalocal sudo[7368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysnvyjuxblhnacgyyomfmlbhzdtupmpd ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:33:07 np0005539576.novalocal sudo[7368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:07 np0005539576.novalocal python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:33:07 np0005539576.novalocal sudo[7368]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:08 np0005539576.novalocal sudo[7441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkbanwuozosqfaciaaujwflxjkifmlad ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 06:33:08 np0005539576.novalocal sudo[7441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:08 np0005539576.novalocal python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397987.6702394-267-81775925305784/source _original_basename=tmp4qyint8q follow=False checksum=32e66a8416a2fa12c80c0fe5eedab5d7b78f9aac backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:33:08 np0005539576.novalocal sudo[7441]: pam_unix(sudo:session): session closed for user root
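A pattern worth noting, since it recurs for every file deployed in this log: each ansible.legacy.stat entry immediately followed by ansible.legacy.copy is Ansible's copy action checksumming the remote file first and transferring only on a SHA-1 mismatch. The remote half of that comparison amounts to no more than:

    # what the stat step computes on the managed node (path from the log above)
    sha1sum /etc/ci/env/networking-info.yml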
Nov 29 06:34:08 np0005539576.novalocal sshd-session[4310]: Received disconnect from 38.102.83.114 port 36446:11: disconnected by user
Nov 29 06:34:08 np0005539576.novalocal sshd-session[4310]: Disconnected from user zuul 38.102.83.114 port 36446
Nov 29 06:34:08 np0005539576.novalocal sshd-session[4297]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:34:08 np0005539576.novalocal systemd-logind[807]: Session 1 logged out. Waiting for processes to exit.
Nov 29 06:34:10 np0005539576.novalocal chronyd[795]: Selected source 158.69.193.108 (2.centos.pool.ntp.org)
Nov 29 06:35:38 np0005539576.novalocal systemd[4301]: Created slice User Background Tasks Slice.
Nov 29 06:35:38 np0005539576.novalocal systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 06:35:38 np0005539576.novalocal systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 06:40:41 np0005539576.novalocal sshd-session[7473]: Accepted publickey for zuul from 38.102.83.114 port 42768 ssh2: RSA SHA256:tfSy+7i0vpEWoIgjuhzAozE3pD3UuGTXW/vm6y9qu2w
Nov 29 06:40:41 np0005539576.novalocal systemd-logind[807]: New session 3 of user zuul.
Nov 29 06:40:41 np0005539576.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 29 06:40:41 np0005539576.novalocal sshd-session[7473]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:40:42 np0005539576.novalocal sudo[7500]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zewgxkzcejicimpuezgmagsgpzxlvsvo ; /usr/bin/python3'
Nov 29 06:40:42 np0005539576.novalocal sudo[7500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:42 np0005539576.novalocal python3[7502]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-116e-31b1-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:42 np0005539576.novalocal sudo[7500]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:42 np0005539576.novalocal sudo[7529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffezzjfrmbuaxchfwbeemxmhvzuqtyzt ; /usr/bin/python3'
Nov 29 06:40:42 np0005539576.novalocal sudo[7529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:42 np0005539576.novalocal python3[7531]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:42 np0005539576.novalocal sudo[7529]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:42 np0005539576.novalocal sudo[7555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqzgtyrptdkvyfutxgjunaqvlfsdlqlh ; /usr/bin/python3'
Nov 29 06:40:42 np0005539576.novalocal sudo[7555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:42 np0005539576.novalocal python3[7557]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:42 np0005539576.novalocal sudo[7555]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:42 np0005539576.novalocal sudo[7581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wamltvshvpbkblcajdeynzvctuvdauhz ; /usr/bin/python3'
Nov 29 06:40:42 np0005539576.novalocal sudo[7581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:43 np0005539576.novalocal python3[7583]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:44 np0005539576.novalocal sudo[7581]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:44 np0005539576.novalocal sudo[7607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyktfcxgspztoexplezjebzkuobfwcvf ; /usr/bin/python3'
Nov 29 06:40:44 np0005539576.novalocal sudo[7607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:44 np0005539576.novalocal python3[7609]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:44 np0005539576.novalocal sudo[7607]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:44 np0005539576.novalocal sudo[7633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjsiowkwwktvthzedysshsmtyzgbivvx ; /usr/bin/python3'
Nov 29 06:40:44 np0005539576.novalocal sudo[7633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:44 np0005539576.novalocal python3[7635]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:44 np0005539576.novalocal sudo[7633]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:45 np0005539576.novalocal sudo[7711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbregnvbtpxokbpnwmxqfaffanaajevg ; /usr/bin/python3'
Nov 29 06:40:45 np0005539576.novalocal sudo[7711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:45 np0005539576.novalocal python3[7713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:40:45 np0005539576.novalocal sudo[7711]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:45 np0005539576.novalocal sudo[7784]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aisrfwxmrcqfvubhfbzctozohldamgve ; /usr/bin/python3'
Nov 29 06:40:45 np0005539576.novalocal sudo[7784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:45 np0005539576.novalocal python3[7786]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398444.9641478-477-133922632090828/source _original_basename=tmpwu7_1xbn follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:45 np0005539576.novalocal sudo[7784]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:46 np0005539576.novalocal sudo[7834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmywdcuzevpamyjnpkgahhbokdgiodny ; /usr/bin/python3'
Nov 29 06:40:46 np0005539576.novalocal sudo[7834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:46 np0005539576.novalocal python3[7836]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:40:46 np0005539576.novalocal systemd[1]: Reloading.
Nov 29 06:40:46 np0005539576.novalocal systemd-rc-local-generator[7857]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:40:48 np0005539576.novalocal sudo[7834]: pam_unix(sudo:session): session closed for user root
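This episode creates /etc/systemd/system.conf.d, deploys an override.conf whose payload is masked as NOT_LOGGING_PARAMETER, and asks systemd for the daemon reload logged as "Reloading." A manual equivalent with a placeholder setting, since the real override content is not in the log:

    mkdir -p /etc/systemd/system.conf.d
    # placeholder body; the actual override.conf content is not logged
    cat > /etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultIOAccounting=yes
    EOF
    systemctl daemon-reload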
Nov 29 06:40:49 np0005539576.novalocal sudo[7891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbnanckuqwbtosgztjlodetkgiphlcox ; /usr/bin/python3'
Nov 29 06:40:49 np0005539576.novalocal sudo[7891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:49 np0005539576.novalocal python3[7893]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 06:40:49 np0005539576.novalocal sudo[7891]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:49 np0005539576.novalocal sudo[7917]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoxgytmaiizgxnjubqqymlafiprkssbg ; /usr/bin/python3'
Nov 29 06:40:49 np0005539576.novalocal sudo[7917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:49 np0005539576.novalocal python3[7919]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:49 np0005539576.novalocal sudo[7917]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:49 np0005539576.novalocal sudo[7945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkpnytqjrjqjbeeuhikurtrktrzaosop ; /usr/bin/python3'
Nov 29 06:40:49 np0005539576.novalocal sudo[7945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:49 np0005539576.novalocal python3[7947]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:49 np0005539576.novalocal sudo[7945]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:49 np0005539576.novalocal sudo[7973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uygzgriqtbwnyzlouffnlenbexvtgcdt ; /usr/bin/python3'
Nov 29 06:40:49 np0005539576.novalocal sudo[7973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:50 np0005539576.novalocal python3[7975]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:50 np0005539576.novalocal sudo[7973]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:50 np0005539576.novalocal sudo[8001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpwszmflzozeplwclrboussvstbojufm ; /usr/bin/python3'
Nov 29 06:40:50 np0005539576.novalocal sudo[8001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:50 np0005539576.novalocal python3[8003]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:50 np0005539576.novalocal sudo[8001]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:51 np0005539576.novalocal python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-116e-31b1-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:51 np0005539576.novalocal python3[8059]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
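The loop above applies cgroup-v2 block-I/O throttles to the four top-level slices; 252:0 is the MAJ:MIN of /dev/vda read with lsblk at the start of the session, and the earlier wait_for simply blocks until system.slice/io.max exists (i.e. the io controller is enabled in the parent's cgroup.subtree_control). A compact shell equivalent of what the tasks do:

    # resolve the device number; tr strips lsblk's column padding
    dev=$(lsblk -nd -o MAJ:MIN /dev/vda | tr -d ' ')   # e.g. 252:0
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "$dev riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > /sys/fs/cgroup/$cg/io.max
    done
    cat /sys/fs/cgroup/system.slice/io.max   # verify, as the last command task does

The closing stat against kubepods.slice/io.max just checks whether a kubelet slice also exists to throttle.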
Nov 29 06:40:53 np0005539576.novalocal sshd-session[7476]: Connection closed by 38.102.83.114 port 42768
Nov 29 06:40:53 np0005539576.novalocal sshd-session[7473]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:40:53 np0005539576.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 06:40:53 np0005539576.novalocal systemd[1]: session-3.scope: Consumed 4.267s CPU time.
Nov 29 06:40:53 np0005539576.novalocal systemd-logind[807]: Session 3 logged out. Waiting for processes to exit.
Nov 29 06:40:53 np0005539576.novalocal systemd-logind[807]: Removed session 3.
Nov 29 06:40:55 np0005539576.novalocal sshd-session[8066]: Accepted publickey for zuul from 38.102.83.114 port 45974 ssh2: RSA SHA256:tfSy+7i0vpEWoIgjuhzAozE3pD3UuGTXW/vm6y9qu2w
Nov 29 06:40:55 np0005539576.novalocal systemd-logind[807]: New session 4 of user zuul.
Nov 29 06:40:55 np0005539576.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 29 06:40:55 np0005539576.novalocal sshd-session[8066]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:40:55 np0005539576.novalocal sudo[8093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utgblsxeecwchuqppjtixaqlwjlzrusp ; /usr/bin/python3'
Nov 29 06:40:55 np0005539576.novalocal sudo[8093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:55 np0005539576.novalocal python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
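The dnf task installs podman and buildah with stock options; on the CLI this is simply the line below, and the SELinux policy reloads that follow are consistent with container-selinux being pulled in as a dependency:

    dnf install -y podman buildah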
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:41:19 np0005539576.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:41:31 np0005539576.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:41:39 np0005539576.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:41:41 np0005539576.novalocal setsebool[8165]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 06:41:41 np0005539576.novalocal setsebool[8165]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
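The two setsebool entries flip virt/container booleans (NFS access and full-capability sandboxes), and the policy reload they trigger shows up as the SELinux block at 06:41:53 below. A minimal equivalent; the log does not record whether -P (persist across reboots) was actually used:

    setsebool -P virt_use_nfs 1
    setsebool -P virt_sandbox_use_all_caps 1
    getsebool virt_use_nfs virt_sandbox_use_all_caps   # both should report "on"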
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:41:53 np0005539576.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:42:13 np0005539576.novalocal dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 06:42:13 np0005539576.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:42:13 np0005539576.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:42:13 np0005539576.novalocal systemd[1]: Reloading.
Nov 29 06:42:14 np0005539576.novalocal systemd-rc-local-generator[8921]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:42:14 np0005539576.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:42:16 np0005539576.novalocal sudo[8093]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:22 np0005539576.novalocal python3[13980]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-a15e-59da-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:42:23 np0005539576.novalocal kernel: evm: overlay not supported
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: Starting D-Bus User Message Bus...
Nov 29 06:42:23 np0005539576.novalocal dbus-broker-launch[14284]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 06:42:23 np0005539576.novalocal dbus-broker-launch[14284]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: Started D-Bus User Message Bus.
Nov 29 06:42:23 np0005539576.novalocal dbus-broker-lau[14284]: Ready
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: Created slice Slice /user.
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: podman-14211.scope: unit configures an IP firewall, but not running as root.
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: Started podman-14211.scope.
Nov 29 06:42:23 np0005539576.novalocal systemd[4301]: Started podman-pause-aee1f4af.scope.
Nov 29 06:42:24 np0005539576.novalocal sudo[14691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksbmkrhwxtjytsuxcriugfvyurfhgzce ; /usr/bin/python3'
Nov 29 06:42:24 np0005539576.novalocal sudo[14691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:24 np0005539576.novalocal python3[14704]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.20:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.20:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:42:24 np0005539576.novalocal python3[14704]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 06:42:24 np0005539576.novalocal sudo[14691]: pam_unix(sudo:session): session closed for user root
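The blockinfile task appends an insecure-registry stanza for the CI registry to /etc/containers/registries.conf, bracketed by the default Ansible markers. Reconstructed from the logged parameters, the shell equivalent is:

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.20:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF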
Nov 29 06:42:24 np0005539576.novalocal sshd-session[8069]: Connection closed by 38.102.83.114 port 45974
Nov 29 06:42:24 np0005539576.novalocal sshd-session[8066]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:42:24 np0005539576.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 06:42:24 np0005539576.novalocal systemd[1]: session-4.scope: Consumed 1min 3.222s CPU time.
Nov 29 06:42:24 np0005539576.novalocal systemd-logind[807]: Session 4 logged out. Waiting for processes to exit.
Nov 29 06:42:24 np0005539576.novalocal systemd-logind[807]: Removed session 4.
Nov 29 06:42:34 np0005539576.novalocal irqbalance[804]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 06:42:34 np0005539576.novalocal irqbalance[804]: IRQ 27 affinity is now unmanaged
Nov 29 06:42:43 np0005539576.novalocal sshd-session[22072]: Unable to negotiate with 38.102.83.150 port 50330: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 06:42:43 np0005539576.novalocal sshd-session[22074]: Connection closed by 38.102.83.150 port 50296 [preauth]
Nov 29 06:42:43 np0005539576.novalocal sshd-session[22073]: Unable to negotiate with 38.102.83.150 port 50302: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 06:42:43 np0005539576.novalocal sshd-session[22079]: Connection closed by 38.102.83.150 port 50294 [preauth]
Nov 29 06:42:43 np0005539576.novalocal sshd-session[22076]: Unable to negotiate with 38.102.83.150 port 50316: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
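"no matching host key type found" means the peer at 38.102.83.150 would only accept host-key algorithms (ssh-ed25519 and the hardware-backed sk-* variants) for which this sshd presents no key; short probe bursts like this are commonly scanners. If ed25519 support were actually wanted, a sketch for checking and generating the missing key (sk-* host keys are FIDO-backed and normally not served by sshd at all):

    # list the host keys sshd is configured to present
    sshd -T | grep '^hostkey'
    # generate any missing default host keys, including ssh_host_ed25519_key
    ssh-keygen -A
    systemctl restart sshd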
Nov 29 06:42:49 np0005539576.novalocal sshd-session[24467]: Accepted publickey for zuul from 38.102.83.114 port 40376 ssh2: RSA SHA256:tfSy+7i0vpEWoIgjuhzAozE3pD3UuGTXW/vm6y9qu2w
Nov 29 06:42:49 np0005539576.novalocal systemd-logind[807]: New session 5 of user zuul.
Nov 29 06:42:49 np0005539576.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 29 06:42:49 np0005539576.novalocal sshd-session[24467]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:42:50 np0005539576.novalocal python3[24564]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDSb90lZSTDVxdC8Prub0H9+p8ZGphA+j6UAg4b9j5WJVg9H52YNgc1bdpgI4NF5QcxIpBmH70bRdyj5AJRSKcU= zuul@np0005539575.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:42:50 np0005539576.novalocal sudo[24709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlxrxeugwqpeozugihzgrenebynmmwa ; /usr/bin/python3'
Nov 29 06:42:50 np0005539576.novalocal sudo[24709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:50 np0005539576.novalocal python3[24720]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDSb90lZSTDVxdC8Prub0H9+p8ZGphA+j6UAg4b9j5WJVg9H52YNgc1bdpgI4NF5QcxIpBmH70bRdyj5AJRSKcU= zuul@np0005539575.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:42:50 np0005539576.novalocal sudo[24709]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:51 np0005539576.novalocal sudo[25107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnslqjatbpicczthmmodmdvdhnijfcy ; /usr/bin/python3'
Nov 29 06:42:51 np0005539576.novalocal sudo[25107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:51 np0005539576.novalocal python3[25117]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539576.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 06:42:51 np0005539576.novalocal useradd[25192]: new group: name=cloud-admin, GID=1002
Nov 29 06:42:51 np0005539576.novalocal useradd[25192]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 29 06:42:51 np0005539576.novalocal sudo[25107]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:51 np0005539576.novalocal sudo[25314]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pusatuvpqzdcuafvwosoodrhygfpfvon ; /usr/bin/python3'
Nov 29 06:42:51 np0005539576.novalocal sudo[25314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:52 np0005539576.novalocal python3[25326]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDSb90lZSTDVxdC8Prub0H9+p8ZGphA+j6UAg4b9j5WJVg9H52YNgc1bdpgI4NF5QcxIpBmH70bRdyj5AJRSKcU= zuul@np0005539575.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 06:42:52 np0005539576.novalocal sudo[25314]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:52 np0005539576.novalocal sudo[25560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpnapdmpbpjskazdkmurirhstvxexxyg ; /usr/bin/python3'
Nov 29 06:42:52 np0005539576.novalocal sudo[25560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:52 np0005539576.novalocal python3[25571]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:42:52 np0005539576.novalocal sudo[25560]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:52 np0005539576.novalocal sudo[25844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apphwjrusgnbmmqrfevqntmqtblgqtff ; /usr/bin/python3'
Nov 29 06:42:52 np0005539576.novalocal sudo[25844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:52 np0005539576.novalocal python3[25852]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398572.1583593-135-100541417136187/source _original_basename=tmphq4ivzu_ follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:42:52 np0005539576.novalocal sudo[25844]: pam_unix(sudo:session): session closed for user root
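These tasks create the cloud-admin account and install /etc/sudoers.d/cloud-admin with mode 0640; the file's content is masked as NOT_LOGGING_PARAMETER. A hedged manual equivalent; the NOPASSWD rule below is the conventional choice for CI admin users, not something this log confirms:

    useradd -m -s /bin/bash cloud-admin
    # hypothetical rule; the real drop-in content is not logged
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin   # syntax-check the drop-in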
Nov 29 06:42:53 np0005539576.novalocal sudo[26134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhdmewgeqeeauxlfcljrvpntapuidnaj ; /usr/bin/python3'
Nov 29 06:42:53 np0005539576.novalocal sudo[26134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:53 np0005539576.novalocal python3[26144]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 06:42:53 np0005539576.novalocal systemd[1]: Starting Hostname Service...
Nov 29 06:42:53 np0005539576.novalocal systemd[1]: Started Hostname Service.
Nov 29 06:42:53 np0005539576.novalocal systemd-hostnamed[26255]: Changed pretty hostname to 'compute-0'
Nov 29 06:42:53 compute-0 systemd-hostnamed[26255]: Hostname set to <compute-0> (static)
Nov 29 06:42:53 compute-0 NetworkManager[7189]: <info>  [1764398573.8538] hostname: static hostname changed from "np0005539576.novalocal" to "compute-0"
Nov 29 06:42:53 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:42:53 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:42:53 compute-0 sudo[26134]: pam_unix(sudo:session): session closed for user root
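The hostname module talks to systemd-hostnamed over D-Bus, which is why the service starts on demand, both the pretty and static names change, and NetworkManager picks the new name up in the same second. The command-line equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl status   # now reports "Static hostname: compute-0"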
Nov 29 06:42:54 compute-0 sshd-session[24506]: Connection closed by 38.102.83.114 port 40376
Nov 29 06:42:54 compute-0 sshd-session[24467]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:42:54 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 06:42:54 compute-0 systemd[1]: session-5.scope: Consumed 2.403s CPU time.
Nov 29 06:42:54 compute-0 systemd-logind[807]: Session 5 logged out. Waiting for processes to exit.
Nov 29 06:42:54 compute-0 systemd-logind[807]: Removed session 5.
Nov 29 06:43:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:43:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:43:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 58.650s CPU time.
Nov 29 06:43:03 compute-0 systemd[1]: run-rac686ec6a9014df7a175dc46c145ad23.service: Deactivated successfully.
Nov 29 06:43:03 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:43:23 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 06:43:23 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:43:23 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 06:43:23 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 06:43:23 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 06:47:16 compute-0 sshd-session[29922]: Accepted publickey for zuul from 38.102.83.150 port 53272 ssh2: RSA SHA256:tfSy+7i0vpEWoIgjuhzAozE3pD3UuGTXW/vm6y9qu2w
Nov 29 06:47:16 compute-0 systemd-logind[807]: New session 6 of user zuul.
Nov 29 06:47:16 compute-0 systemd[1]: Started Session 6 of User zuul.
Nov 29 06:47:16 compute-0 sshd-session[29922]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:47:17 compute-0 python3[29998]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:47:18 compute-0 sudo[30112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pytuothjcmbdwnmqdvksirojufmzhqxs ; /usr/bin/python3'
Nov 29 06:47:18 compute-0 sudo[30112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:18 compute-0 python3[30114]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:18 compute-0 sudo[30112]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:19 compute-0 sudo[30185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khwlpixtwroytfofklsqdhwkibhfhaag ; /usr/bin/python3'
Nov 29 06:47:19 compute-0 sudo[30185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:19 compute-0 python3[30187]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398838.4580083-33685-76599780667051/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:19 compute-0 sudo[30185]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:19 compute-0 sudo[30211]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnkqbzaycorbfhsfrinsojnpigsrjdkg ; /usr/bin/python3'
Nov 29 06:47:19 compute-0 sudo[30211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:19 compute-0 python3[30213]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:19 compute-0 sudo[30211]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:19 compute-0 sudo[30284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecekjonolmdnmrjxlwbbixxrxdgfxsuj ; /usr/bin/python3'
Nov 29 06:47:19 compute-0 sudo[30284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:21 compute-0 python3[30286]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398838.4580083-33685-76599780667051/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:21 compute-0 sudo[30284]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:21 compute-0 sudo[30310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxdommkhcrsifficyexezmamahzfzpnv ; /usr/bin/python3'
Nov 29 06:47:21 compute-0 sudo[30310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:21 compute-0 python3[30312]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:21 compute-0 sudo[30310]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:21 compute-0 sudo[30383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glfrswsqbhmsccbqwmdilvlbsokmuqcv ; /usr/bin/python3'
Nov 29 06:47:21 compute-0 sudo[30383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:21 compute-0 python3[30385]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398838.4580083-33685-76599780667051/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:21 compute-0 sudo[30383]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:21 compute-0 sudo[30409]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccygyesbhdytukuxjqgxtefffufbsew ; /usr/bin/python3'
Nov 29 06:47:21 compute-0 sudo[30409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:22 compute-0 python3[30411]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:22 compute-0 sudo[30409]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:22 compute-0 sudo[30482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uswsiqwwvmydiojfmqmjuihrpdizwewv ; /usr/bin/python3'
Nov 29 06:47:22 compute-0 sudo[30482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:22 compute-0 python3[30484]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398838.4580083-33685-76599780667051/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:22 compute-0 sudo[30482]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:22 compute-0 sudo[30508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbbfyigtfiftimwwyvfguywrfdjuplqk ; /usr/bin/python3'
Nov 29 06:47:22 compute-0 sudo[30508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:22 compute-0 python3[30510]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:22 compute-0 sudo[30508]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:22 compute-0 sudo[30581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdkztcsbmncwpvjjffbjryzvuaxrlwa ; /usr/bin/python3'
Nov 29 06:47:22 compute-0 sudo[30581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:22 compute-0 python3[30583]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398838.4580083-33685-76599780667051/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:22 compute-0 sudo[30581]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:23 compute-0 sudo[30607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etoqkbmoqgdekzpsvgpmctudsihfuliv ; /usr/bin/python3'
Nov 29 06:47:23 compute-0 sudo[30607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:23 compute-0 python3[30609]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:23 compute-0 sudo[30607]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:23 compute-0 sudo[30680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcqokwgclxkkqmwqizpigdwuipsvhycc ; /usr/bin/python3'
Nov 29 06:47:23 compute-0 sudo[30680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:23 compute-0 python3[30682]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398838.4580083-33685-76599780667051/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:23 compute-0 sudo[30680]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:23 compute-0 sudo[30706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmhysoddpijavgleeaejapsbjwloyjbj ; /usr/bin/python3'
Nov 29 06:47:23 compute-0 sudo[30706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:23 compute-0 python3[30708]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:47:23 compute-0 sudo[30706]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:24 compute-0 sudo[30779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwbotuyqiygfnxzpmcpnweipckpyqhv ; /usr/bin/python3'
Nov 29 06:47:24 compute-0 sudo[30779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:47:24 compute-0 python3[30781]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398838.4580083-33685-76599780667051/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:47:24 compute-0 sudo[30779]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:26 compute-0 sshd-session[30806]: Connection closed by 192.168.122.11 port 37270 [preauth]
Nov 29 06:47:26 compute-0 sshd-session[30807]: Connection closed by 192.168.122.11 port 37274 [preauth]
Nov 29 06:47:26 compute-0 sshd-session[30808]: Unable to negotiate with 192.168.122.11 port 37290: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 06:47:26 compute-0 sshd-session[30809]: Unable to negotiate with 192.168.122.11 port 37296: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 06:47:26 compute-0 sshd-session[30810]: Unable to negotiate with 192.168.122.11 port 37304: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 06:47:36 compute-0 python3[30839]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:52:36 compute-0 sshd-session[29925]: Received disconnect from 38.102.83.150 port 53272:11: disconnected by user
Nov 29 06:52:36 compute-0 sshd-session[29925]: Disconnected from user zuul 38.102.83.150 port 53272
Nov 29 06:52:36 compute-0 sshd-session[29922]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:52:36 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 06:52:36 compute-0 systemd[1]: session-6.scope: Consumed 4.873s CPU time.
Nov 29 06:52:36 compute-0 systemd-logind[807]: Session 6 logged out. Waiting for processes to exit.
Nov 29 06:52:36 compute-0 systemd-logind[807]: Removed session 6.
Nov 29 07:01:01 compute-0 CROND[30849]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 07:01:01 compute-0 run-parts[30852]: (/etc/cron.hourly) starting 0anacron
Nov 29 07:01:01 compute-0 anacron[30860]: Anacron started on 2025-11-29
Nov 29 07:01:01 compute-0 anacron[30860]: Will run job `cron.daily' in 17 min.
Nov 29 07:01:01 compute-0 anacron[30860]: Will run job `cron.weekly' in 37 min.
Nov 29 07:01:01 compute-0 anacron[30860]: Will run job `cron.monthly' in 57 min.
Nov 29 07:01:01 compute-0 anacron[30860]: Jobs will be executed sequentially
Nov 29 07:01:01 compute-0 run-parts[30862]: (/etc/cron.hourly) finished 0anacron
Nov 29 07:01:01 compute-0 CROND[30848]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 07:01:08 compute-0 sshd-session[30863]: Accepted publickey for zuul from 192.168.122.30 port 55398 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:01:08 compute-0 systemd-logind[807]: New session 7 of user zuul.
Nov 29 07:01:08 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 29 07:01:08 compute-0 sshd-session[30863]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:01:09 compute-0 python3.9[31017]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:01:10 compute-0 sudo[31196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauahyfldlrkkyaomgynopbqpzdmmzxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399669.954384-32-163493110609537/AnsiballZ_command.py'
Nov 29 07:01:10 compute-0 sudo[31196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:10 compute-0 python3.9[31198]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:01:21 compute-0 sudo[31196]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:21 compute-0 sshd-session[30866]: Connection closed by 192.168.122.30 port 55398
Nov 29 07:01:21 compute-0 sshd-session[30863]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:01:21 compute-0 systemd-logind[807]: Session 7 logged out. Waiting for processes to exit.
Nov 29 07:01:21 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 07:01:21 compute-0 systemd[1]: session-7.scope: Consumed 8.879s CPU time.
Nov 29 07:01:21 compute-0 systemd-logind[807]: Removed session 7.
Nov 29 07:01:38 compute-0 sshd-session[31256]: Accepted publickey for zuul from 192.168.122.30 port 47586 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:01:38 compute-0 systemd-logind[807]: New session 8 of user zuul.
Nov 29 07:01:38 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 29 07:01:38 compute-0 sshd-session[31256]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:01:39 compute-0 python3.9[31409]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 07:01:40 compute-0 python3.9[31583]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:01:41 compute-0 sudo[31733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiuppjfotmtvtryaofbcnfmxbtbrmjlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399701.0927258-45-89423311110860/AnsiballZ_command.py'
Nov 29 07:01:41 compute-0 sudo[31733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:41 compute-0 python3.9[31735]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:01:41 compute-0 sudo[31733]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:42 compute-0 sudo[31886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhgiyuqwtpvngvbdgpmfedjkgygfnrof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399702.1121788-57-248325661864285/AnsiballZ_stat.py'
Nov 29 07:01:42 compute-0 sudo[31886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:42 compute-0 python3.9[31888]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:01:42 compute-0 sudo[31886]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:43 compute-0 sudo[32038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidoysjsunnhsrhwexorjgezdyfinyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399702.941609-65-25383494305753/AnsiballZ_file.py'
Nov 29 07:01:43 compute-0 sudo[32038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:43 compute-0 python3.9[32040]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:01:43 compute-0 sudo[32038]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:44 compute-0 sudo[32190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yttwtwxzrhlvnubfkwijaktntqlyxvzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399703.7607036-73-24850974005351/AnsiballZ_stat.py'
Nov 29 07:01:44 compute-0 sudo[32190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:44 compute-0 python3.9[32192]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:01:44 compute-0 sudo[32190]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:44 compute-0 sudo[32313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpbrvfekrenhqvnaoimnfaaxcflyvbbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399703.7607036-73-24850974005351/AnsiballZ_copy.py'
Nov 29 07:01:44 compute-0 sudo[32313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:44 compute-0 python3.9[32315]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399703.7607036-73-24850974005351/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:01:44 compute-0 sudo[32313]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:45 compute-0 sudo[32465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaemmwdyfwgdolmvyqopvrehygcclvov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399705.1017702-88-10343054773420/AnsiballZ_setup.py'
Nov 29 07:01:45 compute-0 sudo[32465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:45 compute-0 python3.9[32467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:01:45 compute-0 sudo[32465]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:46 compute-0 sudo[32621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jetodynghiomcbqreastktkveylkcazg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399706.0416539-96-106502393348941/AnsiballZ_file.py'
Nov 29 07:01:46 compute-0 sudo[32621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:46 compute-0 python3.9[32623]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:01:46 compute-0 sudo[32621]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:47 compute-0 sudo[32773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cstnbxvtkejnztyicftuwugvcfjulzwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399706.782519-105-163439782815478/AnsiballZ_file.py'
Nov 29 07:01:47 compute-0 sudo[32773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:47 compute-0 python3.9[32775]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:01:47 compute-0 sudo[32773]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:48 compute-0 python3.9[32925]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:01:52 compute-0 python3.9[33178]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:01:53 compute-0 python3.9[33328]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:01:54 compute-0 python3.9[33482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:01:55 compute-0 sudo[33638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buqobwujnluaybygenuryeetwzdmnyev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399715.2851634-153-131010162512914/AnsiballZ_setup.py'
Nov 29 07:01:55 compute-0 sudo[33638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:55 compute-0 python3.9[33640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:01:56 compute-0 sudo[33638]: pam_unix(sudo:session): session closed for user root
Nov 29 07:01:56 compute-0 sudo[33722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlhukbghlrxsvbpckpwrbblaawhnprlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399715.2851634-153-131010162512914/AnsiballZ_dnf.py'
Nov 29 07:01:56 compute-0 sudo[33722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:01:56 compute-0 python3.9[33724]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:02:04 compute-0 irqbalance[804]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 07:02:04 compute-0 irqbalance[804]: IRQ 26 affinity is now unmanaged
Nov 29 07:02:43 compute-0 systemd[1]: Reloading.
Nov 29 07:02:44 compute-0 systemd-rc-local-generator[33923]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:02:44 compute-0 systemd[1]: Starting dnf makecache...
Nov 29 07:02:44 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 07:02:44 compute-0 dnf[33935]: Failed determining last makecache time.
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-barbican-42b4c41831408a8e323 151 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 197 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-cinder-1c00d6490d88e436f26ef 190 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-python-stevedore-c4acc5639fd2329372142 167 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 systemd[1]: Reloading.
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-python-cloudkitty-tests-tempest-2c80f8 191 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-os-net-config-9758ab42364673d01bc5014e 153 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 151 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 systemd-rc-local-generator[33972]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-python-designate-tests-tempest-347fdbc 155 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-glance-1fd12c29b339f30fe823e 195 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 177 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-manila-3c01b7181572c95dac462 184 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-python-whitebox-neutron-tests-tempest- 167 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-octavia-ba397f07a7331190208c 162 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-watcher-c014f81a8647287f6dcc 136 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-python-tcib-1124124ec06aadbac34f0d340b 164 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 169 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-swift-dc98a8463506ac520c469a 177 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-python-tempestconf-8515371b7cceebd4282 141 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 systemd[1]: Reloading.
Nov 29 07:02:44 compute-0 dnf[33935]: delorean-openstack-heat-ui-013accbfd179753bc3f0 166 kB/s | 3.0 kB     00:00
Nov 29 07:02:44 compute-0 systemd-rc-local-generator[34024]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:02:45 compute-0 dnf[33935]: CentOS Stream 9 - BaseOS                         47 kB/s | 7.3 kB     00:00
Nov 29 07:02:45 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 07:02:45 compute-0 dnf[33935]: CentOS Stream 9 - AppStream                      82 kB/s | 7.4 kB     00:00
Nov 29 07:02:45 compute-0 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Nov 29 07:02:45 compute-0 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Nov 29 07:02:45 compute-0 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Nov 29 07:02:45 compute-0 dnf[33935]: CentOS Stream 9 - CRB                            54 kB/s | 7.2 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: CentOS Stream 9 - Extras packages                70 kB/s | 8.3 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: dlrn-antelope-testing                           111 kB/s | 3.0 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: dlrn-antelope-build-deps                        135 kB/s | 3.0 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: centos9-rabbitmq                                113 kB/s | 3.0 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: centos9-storage                                 116 kB/s | 3.0 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: centos9-opstools                                108 kB/s | 3.0 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: NFV SIG OpenvSwitch                             103 kB/s | 3.0 kB     00:00
Nov 29 07:02:45 compute-0 dnf[33935]: repo-setup-centos-appstream                     157 kB/s | 4.4 kB     00:00
Nov 29 07:02:46 compute-0 dnf[33935]: repo-setup-centos-baseos                         53 kB/s | 3.9 kB     00:00
Nov 29 07:02:46 compute-0 dnf[33935]: repo-setup-centos-highavailability              184 kB/s | 3.9 kB     00:00
Nov 29 07:02:46 compute-0 dnf[33935]: repo-setup-centos-powertools                    185 kB/s | 4.3 kB     00:00
Nov 29 07:02:46 compute-0 dnf[33935]: Extra Packages for Enterprise Linux 9 - x86_64   90 kB/s |  33 kB     00:00
Nov 29 07:02:47 compute-0 dnf[33935]: Metadata cache created.
Nov 29 07:02:47 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 07:02:47 compute-0 systemd[1]: Finished dnf makecache.
Nov 29 07:02:47 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.996s CPU time.
Nov 29 07:04:18 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 07:04:18 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:04:18 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:04:18 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:04:18 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:04:18 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:04:18 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:04:18 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:04:18 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 07:04:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:04:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:04:18 compute-0 systemd[1]: Reloading.
Nov 29 07:04:18 compute-0 systemd-rc-local-generator[34369]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:04:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:04:19 compute-0 sudo[33722]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:19 compute-0 sudo[35277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fszhbrrficwvrkugfnllbncnzxfyewba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399859.3885493-165-93728597183327/AnsiballZ_command.py'
Nov 29 07:04:19 compute-0 sudo[35277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:19 compute-0 python3.9[35279]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:04:20 compute-0 sudo[35277]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:04:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:04:21 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.048s CPU time.
Nov 29 07:04:21 compute-0 systemd[1]: run-rb2cf1e819a11430abe6fdb4b460559a9.service: Deactivated successfully.
Nov 29 07:04:21 compute-0 sudo[35559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzjplzakuhtwnlhnmsffwmgdpsasxqei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399860.9414508-173-112877849230237/AnsiballZ_selinux.py'
Nov 29 07:04:21 compute-0 sudo[35559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:21 compute-0 python3.9[35561]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 07:04:21 compute-0 sudo[35559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:22 compute-0 sudo[35711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjngtnjkidnnmjpsbzkttgcfjgullvgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399862.2633662-184-45295672334975/AnsiballZ_command.py'
Nov 29 07:04:22 compute-0 sudo[35711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:22 compute-0 python3.9[35713]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 07:04:23 compute-0 sudo[35711]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:24 compute-0 sudo[35865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beyycojkxjttaqvwnewwgrlfvdmhklqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399864.1460001-192-20866750966804/AnsiballZ_file.py'
Nov 29 07:04:24 compute-0 sudo[35865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:29 compute-0 python3.9[35867]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:29 compute-0 sudo[35865]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:30 compute-0 sudo[36017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inoqeluvogcbytvtyedrprnutybgyhbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399869.7670012-200-42718845596636/AnsiballZ_mount.py'
Nov 29 07:04:30 compute-0 sudo[36017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:32 compute-0 python3.9[36019]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 07:04:32 compute-0 sudo[36017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:33 compute-0 sudo[36169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqlkdtdoylyvtlmvdmesjxyloypkahfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399873.182331-228-260357427640132/AnsiballZ_file.py'
Nov 29 07:04:33 compute-0 sudo[36169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:35 compute-0 python3.9[36171]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:35 compute-0 sudo[36169]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:36 compute-0 sudo[36321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fadxorrwkxnwtrayvxtfowmtapgymiox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399876.1252367-236-87892702104504/AnsiballZ_stat.py'
Nov 29 07:04:36 compute-0 sudo[36321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:36 compute-0 python3.9[36323]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:04:36 compute-0 sudo[36321]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:37 compute-0 sudo[36444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcfflelamteptgqeixizrspigahdexy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399876.1252367-236-87892702104504/AnsiballZ_copy.py'
Nov 29 07:04:37 compute-0 sudo[36444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:37 compute-0 python3.9[36446]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399876.1252367-236-87892702104504/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bda240d6fa5d122e3a0e28b9ac9ad93e386be357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:37 compute-0 sudo[36444]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:37 compute-0 sudo[36596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkehqncezwtsnyvlivbtayzbccfkjbte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399877.6855743-260-223853999356840/AnsiballZ_stat.py'
Nov 29 07:04:37 compute-0 sudo[36596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:38 compute-0 python3.9[36598]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:04:38 compute-0 sudo[36596]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:38 compute-0 sudo[36748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouwevqhdgsdfvimbtvvggupwstqqfctr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399878.3695831-268-20237258405982/AnsiballZ_command.py'
Nov 29 07:04:38 compute-0 sudo[36748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:38 compute-0 python3.9[36750]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:04:39 compute-0 sudo[36748]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:39 compute-0 sudo[36901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxrysfcqhqhpoezqtxxgaramgageuszy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399879.2198026-276-41534374523863/AnsiballZ_file.py'
Nov 29 07:04:39 compute-0 sudo[36901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:39 compute-0 python3.9[36903]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:04:39 compute-0 sudo[36901]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:40 compute-0 sudo[37053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsoupzxmvcknaypdsdcxgikujkrfbwoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399880.1323931-287-40645653805867/AnsiballZ_getent.py'
Nov 29 07:04:40 compute-0 sudo[37053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:40 compute-0 python3.9[37055]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 07:04:40 compute-0 sudo[37053]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:40 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:04:40 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:04:41 compute-0 sudo[37207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypjzncekgmjorffoaeupbneopvpghfyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399880.996753-295-118458658385520/AnsiballZ_group.py'
Nov 29 07:04:41 compute-0 sudo[37207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:41 compute-0 python3.9[37209]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:04:41 compute-0 groupadd[37210]: group added to /etc/group: name=qemu, GID=107
Nov 29 07:04:41 compute-0 groupadd[37210]: group added to /etc/gshadow: name=qemu
Nov 29 07:04:41 compute-0 groupadd[37210]: new group: name=qemu, GID=107
Nov 29 07:04:41 compute-0 sudo[37207]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:42 compute-0 sudo[37365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwgtvzqnfjibmrfhxqjsolystkokgjjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399881.8639479-303-244505046827130/AnsiballZ_user.py'
Nov 29 07:04:42 compute-0 sudo[37365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:42 compute-0 python3.9[37367]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:04:42 compute-0 useradd[37369]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:04:43 compute-0 sudo[37365]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:43 compute-0 sudo[37525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvnkcgasgocifevwuhivhrpvyjakxdts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399883.407332-311-177172123974005/AnsiballZ_getent.py'
Nov 29 07:04:43 compute-0 sudo[37525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:43 compute-0 python3.9[37527]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 07:04:43 compute-0 sudo[37525]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:44 compute-0 sudo[37678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpmnjqikwtltdkocwuwxjdbxokimplfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399883.9770484-319-186127955608020/AnsiballZ_group.py'
Nov 29 07:04:44 compute-0 sudo[37678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:44 compute-0 python3.9[37680]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:04:45 compute-0 groupadd[37681]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 29 07:04:45 compute-0 groupadd[37681]: group added to /etc/gshadow: name=hugetlbfs
Nov 29 07:04:45 compute-0 groupadd[37681]: new group: name=hugetlbfs, GID=42477
Nov 29 07:04:45 compute-0 sudo[37678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:46 compute-0 sudo[37836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntfshthiupucblcbvxfflnkzaugmuhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399886.089437-328-257278714701462/AnsiballZ_file.py'
Nov 29 07:04:46 compute-0 sudo[37836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:46 compute-0 python3.9[37838]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 07:04:46 compute-0 sudo[37836]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:47 compute-0 sudo[37988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odpviaiqshklegczmdhumzhoykbwvrvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399886.8695035-339-5951636613889/AnsiballZ_dnf.py'
Nov 29 07:04:47 compute-0 sudo[37988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:47 compute-0 python3.9[37990]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:04:52 compute-0 sudo[37988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:52 compute-0 sudo[38142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixiryoyfjnhlvrqmcxppbjhvrqyvjgdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399892.353198-347-225882968920964/AnsiballZ_file.py'
Nov 29 07:04:52 compute-0 sudo[38142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:52 compute-0 python3.9[38144]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:52 compute-0 sudo[38142]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:53 compute-0 sudo[38294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnvynqnshzwgevqfijciauxiyqjplqqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399892.9973176-355-139820630377501/AnsiballZ_stat.py'
Nov 29 07:04:53 compute-0 sudo[38294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:53 compute-0 python3.9[38296]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:04:53 compute-0 sudo[38294]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:53 compute-0 sudo[38417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edgbphhccxjbkfogkgdwesqvxhrkeocd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399892.9973176-355-139820630377501/AnsiballZ_copy.py'
Nov 29 07:04:53 compute-0 sudo[38417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:54 compute-0 python3.9[38419]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399892.9973176-355-139820630377501/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:54 compute-0 sudo[38417]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:54 compute-0 sudo[38569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awvwklcqivwwkpfhzhqbxwpkymdjyqsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399894.2110465-370-176580208629328/AnsiballZ_systemd.py'
Nov 29 07:04:54 compute-0 sudo[38569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:55 compute-0 python3.9[38571]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:04:55 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:04:55 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 07:04:55 compute-0 kernel: Bridge firewalling registered
Nov 29 07:04:55 compute-0 systemd-modules-load[38575]: Inserted module 'br_netfilter'
Nov 29 07:04:55 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:04:55 compute-0 sudo[38569]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:55 compute-0 sudo[38728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvwkenwowkqnunetefyyuweflbbnvdau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399895.4329343-378-132578896777538/AnsiballZ_stat.py'
Nov 29 07:04:55 compute-0 sudo[38728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:55 compute-0 python3.9[38730]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:04:55 compute-0 sudo[38728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:56 compute-0 sudo[38851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukgnebtdvzmlqwtawzxuhwqvlhklrlqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399895.4329343-378-132578896777538/AnsiballZ_copy.py'
Nov 29 07:04:56 compute-0 sudo[38851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:56 compute-0 python3.9[38853]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399895.4329343-378-132578896777538/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:04:56 compute-0 sudo[38851]: pam_unix(sudo:session): session closed for user root
Nov 29 07:04:57 compute-0 sudo[39003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhxjoooaijcdfwlbbxantwqfacihokvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399896.831434-396-184571286007707/AnsiballZ_dnf.py'
Nov 29 07:04:57 compute-0 sudo[39003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:04:57 compute-0 python3.9[39005]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:05:03 compute-0 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Nov 29 07:05:03 compute-0 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Nov 29 07:05:04 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:05:04 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:05:04 compute-0 systemd[1]: Reloading.
Nov 29 07:05:04 compute-0 systemd-rc-local-generator[39066]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:05:04 compute-0 sudo[39003]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:05 compute-0 python3.9[40187]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:05:06 compute-0 python3.9[41526]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 07:05:07 compute-0 python3.9[42295]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:05:08 compute-0 sudo[43139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewobzmcidwebgzytejqntauyhknkyykl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399907.9064765-435-30268634254053/AnsiballZ_command.py'
Nov 29 07:05:08 compute-0 sudo[43139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:08 compute-0 python3.9[43144]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:05:08 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:05:08 compute-0 systemd[1]: Starting Authorization Manager...
Nov 29 07:05:09 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:05:09 compute-0 polkitd[43449]: Started polkitd version 0.117
Nov 29 07:05:09 compute-0 polkitd[43449]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:05:09 compute-0 polkitd[43449]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:05:09 compute-0 polkitd[43449]: Finished loading, compiling and executing 2 rules
Nov 29 07:05:09 compute-0 systemd[1]: Started Authorization Manager.
Nov 29 07:05:09 compute-0 polkitd[43449]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 29 07:05:09 compute-0 sudo[43139]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:09 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:05:09 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:05:09 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.959s CPU time.
Nov 29 07:05:09 compute-0 systemd[1]: run-rb4bbdb8b2f7b4638a1f1ac2a7cb6c335.service: Deactivated successfully.
Nov 29 07:05:09 compute-0 sudo[43618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfhnihujghuwuvymqhkzdwqeofazubrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399909.445478-444-266161449924386/AnsiballZ_systemd.py'
Nov 29 07:05:09 compute-0 sudo[43618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:10 compute-0 python3.9[43620]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:10 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 07:05:10 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 07:05:10 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 07:05:10 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:05:10 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:05:10 compute-0 sudo[43618]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:11 compute-0 python3.9[43782]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 07:05:13 compute-0 sudo[43932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrlkkzyjjdqlicgjkzfbpobghgmfmyxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399913.176771-501-117757165098421/AnsiballZ_systemd.py'
Nov 29 07:05:13 compute-0 sudo[43932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:13 compute-0 python3.9[43934]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:13 compute-0 systemd[1]: Reloading.
Nov 29 07:05:13 compute-0 systemd-rc-local-generator[43963]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:14 compute-0 sudo[43932]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:14 compute-0 sudo[44120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkzbkeefrhphunkziqncbeyylgggguky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399914.1928918-501-193035571321491/AnsiballZ_systemd.py'
Nov 29 07:05:14 compute-0 sudo[44120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:14 compute-0 python3.9[44122]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:05:14 compute-0 systemd[1]: Reloading.
Nov 29 07:05:14 compute-0 systemd-rc-local-generator[44149]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:05:15 compute-0 sudo[44120]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:15 compute-0 sudo[44309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxbqgpilpfbbvtyukgkbuyjkcgcomzlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399915.3298197-517-113606353770674/AnsiballZ_command.py'
Nov 29 07:05:15 compute-0 sudo[44309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:15 compute-0 python3.9[44311]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:05:15 compute-0 sudo[44309]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:16 compute-0 sudo[44462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mynwntcmowjbxcenlwarrbjwlpjixiox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399916.1535125-525-228329885100785/AnsiballZ_command.py'
Nov 29 07:05:16 compute-0 sudo[44462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:16 compute-0 python3.9[44464]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:05:16 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 07:05:16 compute-0 sudo[44462]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:17 compute-0 sudo[44615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdxomfzsohfyvlobvimngignnegjhbre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399916.983899-533-254444348306989/AnsiballZ_command.py'
Nov 29 07:05:17 compute-0 sudo[44615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:17 compute-0 python3.9[44617]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:05:19 compute-0 sudo[44615]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:19 compute-0 sudo[44777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivmqfkeuzsdxpurtkalxzolokwkgnsqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399919.2446272-541-64292719010575/AnsiballZ_command.py'
Nov 29 07:05:19 compute-0 sudo[44777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:19 compute-0 python3.9[44779]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:05:19 compute-0 sudo[44777]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:20 compute-0 sudo[44930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pggszrhrsdnoepndlbpfujljefhneyof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399920.1315706-549-79681908218405/AnsiballZ_systemd.py'
Nov 29 07:05:20 compute-0 sudo[44930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:20 compute-0 python3.9[44932]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:05:20 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 07:05:20 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 07:05:20 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 07:05:20 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 29 07:05:20 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 07:05:20 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 29 07:05:21 compute-0 sudo[44930]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:22 compute-0 sshd-session[31259]: Connection closed by 192.168.122.30 port 47586
Nov 29 07:05:22 compute-0 sshd-session[31256]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:05:22 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 07:05:22 compute-0 systemd[1]: session-8.scope: Consumed 2min 18.506s CPU time.
Nov 29 07:05:22 compute-0 systemd-logind[807]: Session 8 logged out. Waiting for processes to exit.
Nov 29 07:05:22 compute-0 systemd-logind[807]: Removed session 8.
Nov 29 07:05:31 compute-0 sshd-session[44962]: Accepted publickey for zuul from 192.168.122.30 port 43538 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:05:31 compute-0 systemd-logind[807]: New session 9 of user zuul.
Nov 29 07:05:31 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 29 07:05:31 compute-0 sshd-session[44962]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:05:32 compute-0 python3.9[45115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:05:33 compute-0 sudo[45269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqcvjpnzzyiqkipgvkmcaobamxdiaphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399932.8043616-36-63117976581854/AnsiballZ_getent.py'
Nov 29 07:05:33 compute-0 sudo[45269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:33 compute-0 python3.9[45271]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 07:05:33 compute-0 sudo[45269]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:34 compute-0 sudo[45422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edzmprpfxwoqasonrdfbnizatypdcesr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399933.6934192-44-125141732975634/AnsiballZ_group.py'
Nov 29 07:05:34 compute-0 sudo[45422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:34 compute-0 python3.9[45424]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:05:34 compute-0 groupadd[45425]: group added to /etc/group: name=openvswitch, GID=42476
Nov 29 07:05:34 compute-0 groupadd[45425]: group added to /etc/gshadow: name=openvswitch
Nov 29 07:05:34 compute-0 groupadd[45425]: new group: name=openvswitch, GID=42476
Nov 29 07:05:34 compute-0 sudo[45422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:35 compute-0 sudo[45580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxjemfsdrihsappnxzhleocqfuwrkpzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399934.6994684-52-86863964568782/AnsiballZ_user.py'
Nov 29 07:05:35 compute-0 sudo[45580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:35 compute-0 python3.9[45582]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:05:35 compute-0 useradd[45584]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:05:35 compute-0 useradd[45584]: add 'openvswitch' to group 'hugetlbfs'
Nov 29 07:05:35 compute-0 useradd[45584]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 29 07:05:35 compute-0 sudo[45580]: pam_unix(sudo:session): session closed for user root
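
[editor's note] The two tasks above pin the openvswitch account to a fixed UID/GID (42476) and place it in the hugetlbfs supplementary group, which typically grants ovs-vswitchd access to hugepage-backed memory (e.g. for DPDK-style datapaths). Reconstructed from the logged parameters:

    - name: Create the openvswitch group with a fixed GID
      become: true
      ansible.builtin.group:
        name: openvswitch
        gid: 42476

    - name: Create the openvswitch service account
      become: true
      ansible.builtin.user:
        name: openvswitch
        comment: openvswitch user
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        shell: /sbin/nologin
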
Nov 29 07:05:36 compute-0 sudo[45740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbcsmwwugmmdblgkzvzuitkhpivevgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399935.9421067-62-251021286883439/AnsiballZ_setup.py'
Nov 29 07:05:36 compute-0 sudo[45740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:36 compute-0 python3.9[45742]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:05:36 compute-0 sudo[45740]: pam_unix(sudo:session): session closed for user root
Nov 29 07:05:37 compute-0 sudo[45824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taqiftrvovkucrnvhknyufeqwxypacas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399935.9421067-62-251021286883439/AnsiballZ_dnf.py'
Nov 29 07:05:37 compute-0 sudo[45824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:37 compute-0 python3.9[45826]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:05:40 compute-0 sudo[45824]: pam_unix(sudo:session): session closed for user root
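
[editor's note] The package is handled in two dnf passes: the invocation above runs with download_only=True so the RPMs land in the local cache, and the follow-up task (pid 45990, below) performs the actual transaction, presumably to separate the network-dependent download from the install itself. A reconstruction:

    - name: Pre-download openvswitch
      become: true
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true

    - name: Install openvswitch
      become: true
      ansible.builtin.dnf:
        name: openvswitch
        state: present
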
Nov 29 07:05:41 compute-0 sudo[45988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcbinnsjbiosbpnhefzrropkfhtaroxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399940.7337182-76-133647579044055/AnsiballZ_dnf.py'
Nov 29 07:05:41 compute-0 sudo[45988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:05:41 compute-0 python3.9[45990]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:05:58 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 07:05:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:05:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:05:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:05:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:05:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:05:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:05:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:05:59 compute-0 groupadd[46013]: group added to /etc/group: name=unbound, GID=993
Nov 29 07:05:59 compute-0 groupadd[46013]: group added to /etc/gshadow: name=unbound
Nov 29 07:05:59 compute-0 groupadd[46013]: new group: name=unbound, GID=993
Nov 29 07:05:59 compute-0 useradd[46020]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 29 07:05:59 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 07:05:59 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 07:06:02 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:06:02 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:06:02 compute-0 systemd[1]: Reloading.
Nov 29 07:06:02 compute-0 systemd-sysv-generator[46522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:06:02 compute-0 systemd-rc-local-generator[46518]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:06:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:06:03 compute-0 sudo[45988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:05 compute-0 sudo[47086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehnnuucjhzlrgbvyjajnqlwlipniddcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399964.5991225-84-8825768210484/AnsiballZ_systemd.py'
Nov 29 07:06:05 compute-0 sudo[47086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:05 compute-0 python3.9[47088]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:06:05 compute-0 systemd[1]: Reloading.
Nov 29 07:06:05 compute-0 systemd-sysv-generator[47119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:06:05 compute-0 systemd-rc-local-generator[47111]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:06:05 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 07:06:05 compute-0 chown[47130]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 07:06:06 compute-0 ovs-ctl[47135]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 07:06:06 compute-0 ovs-ctl[47135]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 07:06:06 compute-0 ovs-ctl[47135]: Starting ovsdb-server [  OK  ]
Nov 29 07:06:06 compute-0 ovs-vsctl[47184]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 07:06:06 compute-0 ovs-vsctl[47200]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"df234f2c-4343-4c91-861d-13d184c56aa0\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 07:06:06 compute-0 ovs-ctl[47135]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 07:06:06 compute-0 ovs-vsctl[47210]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 07:06:06 compute-0 ovs-ctl[47135]: Enabling remote OVSDB managers [  OK  ]
Nov 29 07:06:06 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 07:06:07 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 07:06:07 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 07:06:07 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 07:06:07 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 07:06:07 compute-0 ovs-ctl[47255]: Inserting openvswitch module [  OK  ]
Nov 29 07:06:07 compute-0 ovs-ctl[47224]: Starting ovs-vswitchd [  OK  ]
Nov 29 07:06:07 compute-0 ovs-vsctl[47272]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 07:06:07 compute-0 ovs-ctl[47224]: Enabling remote OVSDB managers [  OK  ]
Nov 29 07:06:07 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 07:06:07 compute-0 systemd[1]: Starting Open vSwitch...
Nov 29 07:06:07 compute-0 systemd[1]: Finished Open vSwitch.
Nov 29 07:06:07 compute-0 sudo[47086]: pam_unix(sudo:session): session closed for user root
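
[editor's note] Starting openvswitch.service for the first time drives the whole first-boot sequence logged above: ovs-ctl creates an empty /etc/openvswitch/conf.db, starts ovsdb-server, seeds the system-id and hostname external-ids, then inserts the openvswitch kernel module and starts ovs-vswitchd. The systemd task reconstructs to:

    - name: Enable and start Open vSwitch
      become: true
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        state: started
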
Nov 29 07:06:09 compute-0 python3.9[47424]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:06:09 compute-0 sudo[47574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxmhzlwvbxotgtfgiukysguujcfojeot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399969.468151-102-133124201560932/AnsiballZ_sefcontext.py'
Nov 29 07:06:09 compute-0 sudo[47574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:10 compute-0 python3.9[47576]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 07:06:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:06:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:06:10 compute-0 systemd[1]: run-r0d97b0a3ed4a4e7d874ec12f781c21a9.service: Deactivated successfully.
Nov 29 07:06:13 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 07:06:13 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:06:13 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:06:13 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:06:13 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:06:13 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:06:13 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:06:13 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:06:13 compute-0 sudo[47574]: pam_unix(sudo:session): session closed for user root
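
[editor's note] The sefcontext task above records a persistent SELinux file-context rule, the module-level equivalent of `semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'`; committing it is most likely what triggers the policy reload (the second "Converting ... SID table entries" burst) seen just above. Reconstructed:

    - name: Label edpm-config paths for container access
      become: true
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
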
Nov 29 07:06:14 compute-0 python3.9[47732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:06:15 compute-0 sudo[47888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plrvmodrrzypjslamsmlzwxaemnjkawa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399974.9627376-120-136758664333444/AnsiballZ_dnf.py'
Nov 29 07:06:15 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 07:06:15 compute-0 sudo[47888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:15 compute-0 python3.9[47890]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:06:17 compute-0 sudo[47888]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:17 compute-0 sudo[48041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jncsqcspmtzkpysfuoobmsxmxawyanfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399977.2365232-128-180415564560449/AnsiballZ_command.py'
Nov 29 07:06:17 compute-0 sudo[48041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:17 compute-0 python3.9[48043]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:06:18 compute-0 sudo[48041]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:19 compute-0 sudo[48328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexwcenvkojyfzlcvikxbhzylkwzcqva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399978.9077833-136-58984617565057/AnsiballZ_file.py'
Nov 29 07:06:19 compute-0 sudo[48328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:19 compute-0 python3.9[48330]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:06:19 compute-0 sudo[48328]: pam_unix(sudo:session): session closed for user root
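
[editor's note] Because the file-context rule was registered first, the /var/lib/edpm-config directory created above already carries the container_file_t label and needs no separate restorecon pass. Reconstruction of the logged file task:

    - name: Create /var/lib/edpm-config
      become: true
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        mode: '0750'
        setype: container_file_t
        selevel: s0
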
Nov 29 07:06:20 compute-0 python3.9[48480]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:06:20 compute-0 sudo[48632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvroqfakajapxzabvgedsngokzxjsuam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399980.6768754-152-44045180743173/AnsiballZ_dnf.py'
Nov 29 07:06:20 compute-0 sudo[48632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:21 compute-0 python3.9[48634]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:06:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:06:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:06:24 compute-0 systemd[1]: Reloading.
Nov 29 07:06:24 compute-0 systemd-rc-local-generator[48668]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:06:24 compute-0 systemd-sysv-generator[48671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:06:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:06:25 compute-0 sudo[48632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:06:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:06:25 compute-0 systemd[1]: run-rd6f834cac7ab46a7b0e78836a6a9258c.service: Deactivated successfully.
Nov 29 07:06:26 compute-0 sudo[48949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajwuxmyfvefhhhpxvvecbhplyhfwkup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399985.8833735-160-49800124536555/AnsiballZ_systemd.py'
Nov 29 07:06:26 compute-0 sudo[48949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:26 compute-0 python3.9[48951]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:06:26 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 07:06:26 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 07:06:26 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 07:06:26 compute-0 systemd[1]: Stopping Network Manager...
Nov 29 07:06:26 compute-0 NetworkManager[7189]: <info>  [1764399986.4945] caught SIGTERM, shutting down normally.
Nov 29 07:06:26 compute-0 NetworkManager[7189]: <info>  [1764399986.4958] dhcp4 (eth0): canceled DHCP transaction
Nov 29 07:06:26 compute-0 NetworkManager[7189]: <info>  [1764399986.4958] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:06:26 compute-0 NetworkManager[7189]: <info>  [1764399986.4958] dhcp4 (eth0): state changed no lease
Nov 29 07:06:26 compute-0 NetworkManager[7189]: <info>  [1764399986.4961] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 07:06:26 compute-0 NetworkManager[7189]: <info>  [1764399986.5068] exiting (success)
Nov 29 07:06:26 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 07:06:26 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 07:06:26 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 07:06:26 compute-0 systemd[1]: Stopped Network Manager.
Nov 29 07:06:26 compute-0 systemd[1]: NetworkManager.service: Consumed 14.536s CPU time, 4.1M memory peak, read 0B from disk, written 17.0K to disk.
Nov 29 07:06:26 compute-0 systemd[1]: Starting Network Manager...
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.5684] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:d7e69dea-8152-484d-8a65-eb3d0b2e01e5)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.5685] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.5739] manager[0x55e374cbb090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 07:06:26 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 07:06:26 compute-0 systemd[1]: Started Hostname Service.
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6689] hostname: hostname: using hostnamed
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6690] hostname: static hostname changed from (none) to "compute-0"
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6693] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6698] manager[0x55e374cbb090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6698] manager[0x55e374cbb090]: rfkill: WWAN hardware radio set enabled
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6716] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6723] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6724] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6724] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6725] manager: Networking is enabled by state file
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6727] settings: Loaded settings plugin: keyfile (internal)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6731] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6754] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6762] dhcp: init: Using DHCP client 'internal'
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6764] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6769] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6773] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6780] device (lo): Activation: starting connection 'lo' (e8399e84-1c3b-44af-bf17-12b484068834)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6786] device (eth0): carrier: link connected
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6790] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6794] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6794] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6798] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6804] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6809] device (eth1): carrier: link connected
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6813] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6816] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (cb70f425-ae10-57f4-84a2-262aa56d50f2) (indicated)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6817] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6821] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6825] device (eth1): Activation: starting connection 'ci-private-network' (cb70f425-ae10-57f4-84a2-262aa56d50f2)
Nov 29 07:06:26 compute-0 systemd[1]: Started Network Manager.
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6831] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6836] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6838] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6839] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6840] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6842] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6843] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6845] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6847] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6852] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6854] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6863] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6883] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6893] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6897] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6900] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6908] device (lo): Activation: successful, device activated.
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6922] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 07:06:26 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.6995] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7004] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7006] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7011] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7015] device (eth1): Activation: successful, device activated.
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7031] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7034] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7038] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7042] device (eth0): Activation: successful, device activated.
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7048] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 07:06:26 compute-0 NetworkManager[48962]: <info>  [1764399986.7052] manager: startup complete
Nov 29 07:06:26 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 29 07:06:26 compute-0 sudo[48949]: pam_unix(sudo:session): session closed for user root
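
[editor's note] The restart above is what loads the just-installed NetworkManager-ovs plugin: the new instance (pid 48962) logs "Loaded device plugin: NMOvsFactory", a prerequisite for the OVS bridge, port and interface activations later in the run. Reconstructed:

    - name: Restart NetworkManager to pick up the ovs plugin
      become: true
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted
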
Nov 29 07:06:27 compute-0 sudo[49175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuberxpewonprgymwhkuxfetcndsrnrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764399986.8968568-168-267464934798695/AnsiballZ_dnf.py'
Nov 29 07:06:27 compute-0 sudo[49175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:27 compute-0 python3.9[49177]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:06:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:06:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:06:34 compute-0 systemd[1]: Reloading.
Nov 29 07:06:34 compute-0 systemd-rc-local-generator[49222]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:06:34 compute-0 systemd-sysv-generator[49229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:06:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:06:36 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 07:06:41 compute-0 sudo[49175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:41 compute-0 sudo[49634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brqlchffduupgdlfezenjzogiuvbwnln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400001.4405863-180-267084632694210/AnsiballZ_stat.py'
Nov 29 07:06:41 compute-0 sudo[49634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:41 compute-0 python3.9[49636]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:06:41 compute-0 sudo[49634]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:42 compute-0 sudo[49786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmotetcsppmfnofhuhdlaobtevmmfans ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400002.126424-189-73717588907674/AnsiballZ_ini_file.py'
Nov 29 07:06:42 compute-0 sudo[49786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:42 compute-0 python3.9[49788]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:42 compute-0 sudo[49786]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:06:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:06:43 compute-0 systemd[1]: run-reb20b4be7ca54f8cb81595af04aa240c.service: Deactivated successfully.
Nov 29 07:06:43 compute-0 sudo[49941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzoexwbdtnkfxpwiianpzvlvxxgpldr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400003.1456883-199-222707362042156/AnsiballZ_ini_file.py'
Nov 29 07:06:43 compute-0 sudo[49941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:43 compute-0 python3.9[49943]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:43 compute-0 sudo[49941]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:44 compute-0 sudo[50093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlntnvetwxtbskjmlllpmwxeuqggksrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400003.784597-199-141911486318179/AnsiballZ_ini_file.py'
Nov 29 07:06:44 compute-0 sudo[50093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:44 compute-0 python3.9[50095]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:44 compute-0 sudo[50093]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:44 compute-0 sudo[50245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-limxtwxkttfiafnhkpujulucbikwlrca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400004.434461-214-241456924280714/AnsiballZ_ini_file.py'
Nov 29 07:06:44 compute-0 sudo[50245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:44 compute-0 python3.9[50247]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:44 compute-0 sudo[50245]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:45 compute-0 sudo[50397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sodilnfzwdiaiejsrijvtfvdkqdwevpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400005.0650454-214-200093619003274/AnsiballZ_ini_file.py'
Nov 29 07:06:45 compute-0 sudo[50397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:45 compute-0 python3.9[50399]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:45 compute-0 sudo[50397]: pam_unix(sudo:session): session closed for user root
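
[editor's note] The five ini_file invocations above prepare NetworkManager for os-net-config: setting no-auto-default=* stops NM from generating automatic "Wired connection" profiles for unconfigured NICs (the leftover "Wired connection 1" profile is deleted later in the run), while the dns=none and rc-manager=unmanaged overrides are removed from both NetworkManager.conf and cloud-init's drop-in so NM resumes managing resolv.conf. The first edit reconstructs to:

    - name: Stop NetworkManager from auto-creating default profiles
      become: true
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: '*'
        mode: '0644'
        no_extra_spaces: true
        backup: true
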
Nov 29 07:06:46 compute-0 sudo[50549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruhonzukhblzmuejeolkgohiaojseefg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400005.7292542-229-157348738550301/AnsiballZ_stat.py'
Nov 29 07:06:46 compute-0 sudo[50549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:46 compute-0 python3.9[50551]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:06:46 compute-0 sudo[50549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:46 compute-0 sudo[50672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmramjynwczgihdixesvlhvouonnkbis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400005.7292542-229-157348738550301/AnsiballZ_copy.py'
Nov 29 07:06:46 compute-0 sudo[50672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:46 compute-0 python3.9[50674]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400005.7292542-229-157348738550301/.source _original_basename=.fwwr28xq follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:46 compute-0 sudo[50672]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:47 compute-0 sudo[50824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxnyjjefrhuasixxbrnipqnpnehefvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400007.1604593-244-88105067420299/AnsiballZ_file.py'
Nov 29 07:06:47 compute-0 sudo[50824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:47 compute-0 python3.9[50826]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:47 compute-0 sudo[50824]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:48 compute-0 sudo[50976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzpgmeirwbzajfnymsnyfdzpkarwbnps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400007.8201036-252-91155128106304/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 29 07:06:48 compute-0 sudo[50976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:48 compute-0 python3.9[50978]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 07:06:48 compute-0 sudo[50976]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:48 compute-0 sudo[51128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpspdfpncaelshfpazvwijpxtaoncmvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400008.687402-261-36997703474529/AnsiballZ_file.py'
Nov 29 07:06:48 compute-0 sudo[51128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:49 compute-0 python3.9[51130]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:06:49 compute-0 sudo[51128]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:49 compute-0 sudo[51280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksbhykvajkdxqimfuyfndxgkybdrmuzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400009.5093193-271-241977198141554/AnsiballZ_stat.py'
Nov 29 07:06:49 compute-0 sudo[51280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:50 compute-0 sudo[51280]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:50 compute-0 sudo[51403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqamvavjfszpbxgxrhxcsktanwojujz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400009.5093193-271-241977198141554/AnsiballZ_copy.py'
Nov 29 07:06:50 compute-0 sudo[51403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:51 compute-0 sudo[51403]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:51 compute-0 sudo[51555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhigtcuqfldmiqsprgphtxpnqffpuzlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400011.1891944-286-96525581857877/AnsiballZ_slurp.py'
Nov 29 07:06:51 compute-0 sudo[51555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:51 compute-0 python3.9[51557]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 07:06:51 compute-0 sudo[51555]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:52 compute-0 sudo[51730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpfxpatznyjsahuhrcpynuoqrsvvepzr ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400012.038296-295-107807630634442/async_wrapper.py j141221950369 300 /home/zuul/.ansible/tmp/ansible-tmp-1764400012.038296-295-107807630634442/AnsiballZ_edpm_os_net_config.py _'
Nov 29 07:06:52 compute-0 sudo[51730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:52 compute-0 ansible-async_wrapper.py[51732]: Invoked with j141221950369 300 /home/zuul/.ansible/tmp/ansible-tmp-1764400012.038296-295-107807630634442/AnsiballZ_edpm_os_net_config.py _
Nov 29 07:06:52 compute-0 ansible-async_wrapper.py[51735]: Starting module and watcher
Nov 29 07:06:52 compute-0 ansible-async_wrapper.py[51735]: Start watching 51736 (300)
Nov 29 07:06:52 compute-0 ansible-async_wrapper.py[51736]: Start module (51736)
Nov 29 07:06:52 compute-0 ansible-async_wrapper.py[51732]: Return async_wrapper task started.
Nov 29 07:06:52 compute-0 sudo[51730]: pam_unix(sudo:session): session closed for user root
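
[editor's note] edpm_os_net_config is launched through Ansible's async wrapper with a 300-second timeout and detached immediately ("Return async_wrapper task started."), so the module survives even if the reconfiguration drops the controller's SSH session; the play then polls the job file under ~/.ansible_async. A sketch of that pattern, with the module parameters taken from the invocation logged below and the retry cadence assumed:

    - name: Apply network configuration asynchronously
      become: true
      edpm_os_net_config:
        config_file: /etc/os-net-config/config.yaml
        cleanup: true
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: true
      async: 300
      poll: 0
      register: net_config_job

    - name: Wait for os-net-config to finish
      become: true
      ansible.builtin.async_status:
        jid: "{{ net_config_job.ansible_job_id }}"
      register: net_config_result
      until: net_config_result.finished
      retries: 60    # assumed; 60 x 5s covers the 300s async timeout
      delay: 5
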
Nov 29 07:06:53 compute-0 python3.9[51737]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 07:06:53 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 07:06:53 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 07:06:53 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 07:06:53 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 07:06:53 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.0962] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.0977] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1573] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1575] audit: op="connection-add" uuid="d4b3d4a4-6e4d-46b8-9232-548c124ab345" name="br-ex-br" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1591] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1593] audit: op="connection-add" uuid="cc13aec0-1df3-4c23-945c-a6f4e55d05f8" name="br-ex-port" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1606] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1607] audit: op="connection-add" uuid="cb2b9093-abf1-4ef4-a44b-f865a1e0622b" name="eth1-port" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1619] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1621] audit: op="connection-add" uuid="ccc03938-720a-47ab-a6cf-482f748b8e72" name="vlan20-port" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1633] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1635] audit: op="connection-add" uuid="65cd7c73-7089-41e0-a221-d2b7460fa9cf" name="vlan21-port" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1647] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1648] audit: op="connection-add" uuid="41ede288-5c01-4862-943f-0a605290491f" name="vlan22-port" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1660] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1661] audit: op="connection-add" uuid="c62d2594-b165-4668-9c4c-3f22bf18c586" name="vlan23-port" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1683] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1705] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1707] audit: op="connection-add" uuid="a5af896e-7251-4821-82c8-d1dd1eb9ffe8" name="br-ex-if" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1874] audit: op="connection-update" uuid="cb70f425-ae10-57f4-84a2-262aa56d50f2" name="ci-private-network" args="ovs-interface.type,ipv6.dns,ipv6.addresses,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.routing-rules,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.routing-rules,connection.slave-type,connection.master,connection.port-type,connection.controller,connection.timestamp,ovs-external-ids.data" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1897] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1900] audit: op="connection-add" uuid="6f9e9d57-421d-4c4a-8c32-e284c8cdac1d" name="vlan20-if" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1922] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1924] audit: op="connection-add" uuid="9ca8f8e7-f53e-459a-a6f9-b93ffc8fb564" name="vlan21-if" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1945] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1947] audit: op="connection-add" uuid="bdec60ab-0787-48dd-9cdd-33a2d6608c74" name="vlan22-if" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1966] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1968] audit: op="connection-add" uuid="75f34c5e-c45c-49fa-b0a2-73c97840fa25" name="vlan23-if" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.1985] audit: op="connection-delete" uuid="98e8f3fe-9552-33be-9e1b-aa79ca008d3d" name="Wired connection 1" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2000] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2011] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2016] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d4b3d4a4-6e4d-46b8-9232-548c124ab345)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2017] audit: op="connection-activate" uuid="d4b3d4a4-6e4d-46b8-9232-548c124ab345" name="br-ex-br" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2019] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2027] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2031] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (cc13aec0-1df3-4c23-945c-a6f4e55d05f8)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2033] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2040] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2046] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (cb2b9093-abf1-4ef4-a44b-f865a1e0622b)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2048] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2055] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2059] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (ccc03938-720a-47ab-a6cf-482f748b8e72)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2061] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2069] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2073] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (65cd7c73-7089-41e0-a221-d2b7460fa9cf)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2075] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2082] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2086] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (41ede288-5c01-4862-943f-0a605290491f)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2088] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2096] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2100] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (c62d2594-b165-4668-9c4c-3f22bf18c586)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2101] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2103] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2105] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2112] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2118] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2123] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a5af896e-7251-4821-82c8-d1dd1eb9ffe8)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2124] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2128] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2131] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2132] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2134] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2145] device (eth1): disconnecting for new activation request.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2146] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2169] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2178] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2180] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2186] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2193] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2200] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (6f9e9d57-421d-4c4a-8c32-e284c8cdac1d)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2201] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2206] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2210] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2212] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2216] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2225] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2231] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (9ca8f8e7-f53e-459a-a6f9-b93ffc8fb564)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2232] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2235] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2238] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2241] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2245] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2252] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2259] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (bdec60ab-0787-48dd-9cdd-33a2d6608c74)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2261] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2266] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2269] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2272] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2278] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2285] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2292] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (75f34c5e-c45c-49fa-b0a2-73c97840fa25)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2293] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2297] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2301] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2302] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2305] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2325] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2329] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2335] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2338] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2348] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2354] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2360] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2367] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2371] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2380] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2389] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2395] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2399] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2409] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 kernel: Timeout policy base is empty
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2416] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2422] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2424] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 systemd-udevd[51744]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2431] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2437] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2441] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2443] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2450] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2456] dhcp4 (eth0): canceled DHCP transaction
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2456] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2456] dhcp4 (eth0): state changed no lease
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2459] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2487] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2491] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51738 uid=0 result="fail" reason="Device is not activated"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2500] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2508] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2514] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2639] device (eth1): Activation: starting connection 'ci-private-network' (cb70f425-ae10-57f4-84a2-262aa56d50f2)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2645] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2647] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2648] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2650] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2651] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2653] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2655] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2657] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2669] device (eth1): disconnecting for new activation request.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2670] audit: op="connection-activate" uuid="cb70f425-ae10-57f4-84a2-262aa56d50f2" name="ci-private-network" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2676] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2690] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2697] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2712] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2719] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2726] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2732] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2738] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2742] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2746] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2752] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2755] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2759] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2763] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2767] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2771] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.2776] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 kernel: br-ex: entered promiscuous mode
Nov 29 07:06:55 compute-0 kernel: vlan22: entered promiscuous mode
Nov 29 07:06:55 compute-0 systemd-udevd[51743]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:06:55 compute-0 kernel: vlan23: entered promiscuous mode
Nov 29 07:06:55 compute-0 kernel: vlan20: entered promiscuous mode
Nov 29 07:06:55 compute-0 systemd-udevd[51742]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:06:55 compute-0 kernel: vlan21: entered promiscuous mode
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3569] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3580] device (eth1): Activation: starting connection 'ci-private-network' (cb70f425-ae10-57f4-84a2-262aa56d50f2)
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3584] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51738 uid=0 result="success"
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3601] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3613] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3622] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3630] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3638] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3643] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3648] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3667] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3689] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 07:06:55 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3721] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3729] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3735] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3742] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3749] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3764] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3774] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3777] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3780] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3790] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3796] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3803] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3810] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3816] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3824] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3832] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3834] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3836] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3850] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3856] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3863] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3872] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3879] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 07:06:55 compute-0 NetworkManager[48962]: <info>  [1764400015.3887] device (eth1): Activation: successful, device activated.
Nov 29 07:06:56 compute-0 sudo[52099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rehzfutiqxyupufflfccqjxctnimqvce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400016.059188-295-92138831818455/AnsiballZ_async_status.py'
Nov 29 07:06:56 compute-0 sudo[52099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:06:56 compute-0 NetworkManager[48962]: <info>  [1764400016.6600] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51738 uid=0 result="success"
Nov 29 07:06:56 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 07:06:56 compute-0 NetworkManager[48962]: <info>  [1764400016.8366] checkpoint[0x55e374c91950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 07:06:56 compute-0 NetworkManager[48962]: <info>  [1764400016.8368] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51738 uid=0 result="success"
Nov 29 07:06:56 compute-0 python3.9[52101]: ansible-ansible.legacy.async_status Invoked with jid=j141221950369.51732 mode=status _async_dir=/root/.ansible_async
Nov 29 07:06:56 compute-0 sudo[52099]: pam_unix(sudo:session): session closed for user root
Nov 29 07:06:57 compute-0 NetworkManager[48962]: <info>  [1764400017.1230] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51738 uid=0 result="success"
Nov 29 07:06:57 compute-0 NetworkManager[48962]: <info>  [1764400017.1243] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51738 uid=0 result="success"
Nov 29 07:06:57 compute-0 NetworkManager[48962]: <info>  [1764400017.8048] audit: op="networking-control" arg="global-dns-configuration" pid=51738 uid=0 result="success"
Nov 29 07:06:57 compute-0 ansible-async_wrapper.py[51735]: 51736 still running (300)
Nov 29 07:06:58 compute-0 NetworkManager[48962]: <info>  [1764400018.1038] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 07:06:58 compute-0 NetworkManager[48962]: <info>  [1764400018.2341] audit: op="networking-control" arg="global-dns-configuration" pid=51738 uid=0 result="success"
Nov 29 07:06:58 compute-0 NetworkManager[48962]: <info>  [1764400018.2384] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51738 uid=0 result="success"
Nov 29 07:06:58 compute-0 NetworkManager[48962]: <info>  [1764400018.4297] checkpoint[0x55e374c91a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 07:06:58 compute-0 NetworkManager[48962]: <info>  [1764400018.4300] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51738 uid=0 result="success"
Nov 29 07:06:58 compute-0 ansible-async_wrapper.py[51736]: Module complete (51736)
Nov 29 07:07:00 compute-0 sudo[52208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlymrsdtihoytrafqjqhemodtweqzbpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400016.059188-295-92138831818455/AnsiballZ_async_status.py'
Nov 29 07:07:00 compute-0 sudo[52208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:00 compute-0 python3.9[52210]: ansible-ansible.legacy.async_status Invoked with jid=j141221950369.51732 mode=status _async_dir=/root/.ansible_async
Nov 29 07:07:00 compute-0 sudo[52208]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:00 compute-0 sudo[52308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsvvddckjoauhrhigflremeryldpsazn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400016.059188-295-92138831818455/AnsiballZ_async_status.py'
Nov 29 07:07:00 compute-0 sudo[52308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:00 compute-0 python3.9[52310]: ansible-ansible.legacy.async_status Invoked with jid=j141221950369.51732 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 07:07:00 compute-0 sudo[52308]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:01 compute-0 sudo[52460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqcjyqifivbbjigjlvqsdhfltkvbijcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400021.210207-322-12281375980773/AnsiballZ_stat.py'
Nov 29 07:07:01 compute-0 sudo[52460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:01 compute-0 python3.9[52462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:07:01 compute-0 sudo[52460]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:01 compute-0 sudo[52583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuqrwldpisauxyqyfadrgcbxauqcbczd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400021.210207-322-12281375980773/AnsiballZ_copy.py'
Nov 29 07:07:01 compute-0 sudo[52583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:02 compute-0 python3.9[52585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400021.210207-322-12281375980773/.source.returncode _original_basename=.va4dt__1 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:02 compute-0 sudo[52583]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:02 compute-0 sudo[52735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pftdaunkqiyvbluxpqtzgtbzxytsrezd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400022.4058695-338-9397345466026/AnsiballZ_stat.py'
Nov 29 07:07:02 compute-0 sudo[52735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:02 compute-0 ansible-async_wrapper.py[51735]: Done in kid B.
Nov 29 07:07:02 compute-0 python3.9[52737]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:07:02 compute-0 sudo[52735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:03 compute-0 sudo[52858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acbqocacwenwkgfsliuiopqqdzpwqxls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400022.4058695-338-9397345466026/AnsiballZ_copy.py'
Nov 29 07:07:03 compute-0 sudo[52858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:03 compute-0 python3.9[52860]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400022.4058695-338-9397345466026/.source.cfg _original_basename=.55da8rr0 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:03 compute-0 sudo[52858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:04 compute-0 sudo[53011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlckfybunzqkfaizltjpfeylkcanxtye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400023.7200842-353-49006500537832/AnsiballZ_systemd.py'
Nov 29 07:07:04 compute-0 sudo[53011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:04 compute-0 python3.9[53013]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:07:04 compute-0 systemd[1]: Reloading Network Manager...
Nov 29 07:07:04 compute-0 NetworkManager[48962]: <info>  [1764400024.3898] audit: op="reload" arg="0" pid=53017 uid=0 result="success"
Nov 29 07:07:04 compute-0 NetworkManager[48962]: <info>  [1764400024.3907] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 07:07:04 compute-0 systemd[1]: Reloaded Network Manager.
Nov 29 07:07:04 compute-0 sudo[53011]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:04 compute-0 sshd-session[44965]: Connection closed by 192.168.122.30 port 43538
Nov 29 07:07:04 compute-0 sshd-session[44962]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:07:04 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 07:07:04 compute-0 systemd[1]: session-9.scope: Consumed 53.061s CPU time.
Nov 29 07:07:04 compute-0 systemd-logind[807]: Session 9 logged out. Waiting for processes to exit.
Nov 29 07:07:04 compute-0 systemd-logind[807]: Removed session 9.
Nov 29 07:07:12 compute-0 sshd-session[53048]: Accepted publickey for zuul from 192.168.122.30 port 38726 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:07:12 compute-0 systemd-logind[807]: New session 10 of user zuul.
Nov 29 07:07:12 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 29 07:07:12 compute-0 sshd-session[53048]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:07:13 compute-0 python3.9[53201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:07:14 compute-0 python3.9[53356]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:07:14 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 07:07:15 compute-0 python3.9[53550]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:07:16 compute-0 sshd-session[53051]: Connection closed by 192.168.122.30 port 38726
Nov 29 07:07:16 compute-0 sshd-session[53048]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:07:16 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 07:07:16 compute-0 systemd[1]: session-10.scope: Consumed 2.459s CPU time.
Nov 29 07:07:16 compute-0 systemd-logind[807]: Session 10 logged out. Waiting for processes to exit.
Nov 29 07:07:16 compute-0 systemd-logind[807]: Removed session 10.
Nov 29 07:07:23 compute-0 sshd-session[53579]: Accepted publickey for zuul from 192.168.122.30 port 44866 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:07:23 compute-0 systemd-logind[807]: New session 11 of user zuul.
Nov 29 07:07:23 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 29 07:07:23 compute-0 sshd-session[53579]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:07:24 compute-0 python3.9[53732]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:07:25 compute-0 python3.9[53886]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:07:26 compute-0 sudo[54041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utnnbvcwfkyrkbzwwxhecvttsoqhupfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400046.1758509-40-165913694752499/AnsiballZ_setup.py'
Nov 29 07:07:26 compute-0 sudo[54041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:26 compute-0 python3.9[54043]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:07:27 compute-0 sudo[54041]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:27 compute-0 sudo[54125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdekgoyecfjnqekqhblbtgtwpvsowhfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400046.1758509-40-165913694752499/AnsiballZ_dnf.py'
Nov 29 07:07:27 compute-0 sudo[54125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:27 compute-0 python3.9[54127]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:07:29 compute-0 sudo[54125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:29 compute-0 sudo[54278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytfcytffbpebrhqbsirmjnhwwtzdyuwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400049.5240195-52-4529497809846/AnsiballZ_setup.py'
Nov 29 07:07:29 compute-0 sudo[54278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:30 compute-0 python3.9[54280]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:07:30 compute-0 sudo[54278]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:31 compute-0 sudo[54473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfnrrtqmcbmzoinbenwteloucbkmflbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400050.743741-63-168990041961566/AnsiballZ_file.py'
Nov 29 07:07:31 compute-0 sudo[54473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:31 compute-0 python3.9[54475]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:31 compute-0 sudo[54473]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:32 compute-0 sudo[54625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdfautviyxuosbmvhtevzoilisqxlzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400051.6194663-71-52936819358640/AnsiballZ_command.py'
Nov 29 07:07:32 compute-0 sudo[54625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:32 compute-0 python3.9[54627]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat630972792-merged.mount: Deactivated successfully.
Nov 29 07:07:32 compute-0 podman[54628]: 2025-11-29 07:07:32.427566652 +0000 UTC m=+0.115635377 system refresh
Nov 29 07:07:32 compute-0 sudo[54625]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:33 compute-0 sudo[54787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cinjtfuqyttxochngvjxkrphgijyhiqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400052.6777563-79-30775277598551/AnsiballZ_stat.py'
Nov 29 07:07:33 compute-0 sudo[54787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:07:33 compute-0 python3.9[54789]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:07:33 compute-0 sudo[54787]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:34 compute-0 sudo[54910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytsmhefzmjomnxzyazralunaiwlgybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400052.6777563-79-30775277598551/AnsiballZ_copy.py'
Nov 29 07:07:34 compute-0 sudo[54910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:34 compute-0 python3.9[54912]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400052.6777563-79-30775277598551/.source.json follow=False _original_basename=podman_network_config.j2 checksum=a6a275f49fa6aa5ed5bf870133a0afccfd86ae4c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:34 compute-0 sudo[54910]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:34 compute-0 sudo[55062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqsnmalkywynflaxevrdzqwvsruazpii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400054.3744981-94-65649498715927/AnsiballZ_stat.py'
Nov 29 07:07:34 compute-0 sudo[55062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:35 compute-0 python3.9[55064]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:07:35 compute-0 sudo[55062]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:35 compute-0 sudo[55185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifirpedpcticwcxmznpprkyqrrpomhzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400054.3744981-94-65649498715927/AnsiballZ_copy.py'
Nov 29 07:07:35 compute-0 sudo[55185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:35 compute-0 python3.9[55187]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400054.3744981-94-65649498715927/.source.conf follow=False _original_basename=registries.conf.j2 checksum=75cbff578cac25096c07a1fc71278e69a134eb3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:07:35 compute-0 sudo[55185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:36 compute-0 sudo[55337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlrcnwbplkupuomskmfcsqglktppolpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400055.9132748-110-133167694993600/AnsiballZ_ini_file.py'
Nov 29 07:07:36 compute-0 sudo[55337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:36 compute-0 python3.9[55339]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:07:36 compute-0 sudo[55337]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:37 compute-0 sudo[55489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxnsbccuifbtkiizopoouvdutpovwprh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400056.743449-110-237231907954580/AnsiballZ_ini_file.py'
Nov 29 07:07:37 compute-0 sudo[55489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:37 compute-0 python3.9[55491]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:07:37 compute-0 sudo[55489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:37 compute-0 sudo[55641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfrdwwpxtmjfzsceoqeenhpvzjsknpdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400057.4656959-110-1705983688422/AnsiballZ_ini_file.py'
Nov 29 07:07:37 compute-0 sudo[55641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:38 compute-0 python3.9[55643]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:07:38 compute-0 sudo[55641]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:38 compute-0 sudo[55793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzutijtdoxuobdhsnbxfbtqwgtrlbvvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400058.1944637-110-256086142253591/AnsiballZ_ini_file.py'
Nov 29 07:07:38 compute-0 sudo[55793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:38 compute-0 python3.9[55795]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:07:38 compute-0 sudo[55793]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:39 compute-0 sudo[55945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqdxhkkksyrbuzsaockliheteqmtqsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400058.9865575-141-204021418719440/AnsiballZ_dnf.py'
Nov 29 07:07:39 compute-0 sudo[55945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:39 compute-0 python3.9[55947]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:07:42 compute-0 sudo[55945]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:42 compute-0 sudo[56098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzrzzjbefvfrhjuklgazfilsyxvrgyjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400062.47659-152-171110269987388/AnsiballZ_setup.py'
Nov 29 07:07:42 compute-0 sudo[56098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:43 compute-0 python3.9[56100]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:07:43 compute-0 sudo[56098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:43 compute-0 sudo[56252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uypubddtcshtjwdzbbkxupxpslddbqrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400063.2930505-160-36632760815316/AnsiballZ_stat.py'
Nov 29 07:07:43 compute-0 sudo[56252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:43 compute-0 python3.9[56254]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:07:43 compute-0 sudo[56252]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:44 compute-0 sudo[56404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndikllgdhocxlxxbqjnqrvmiwbbgiefu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400064.0987122-169-228511356897779/AnsiballZ_stat.py'
Nov 29 07:07:44 compute-0 sudo[56404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:44 compute-0 python3.9[56406]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:07:44 compute-0 sudo[56404]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:45 compute-0 sudo[56556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oawtpftoxjnkanzcpmldpdmvelruxefy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400064.8741498-179-260289308933580/AnsiballZ_command.py'
Nov 29 07:07:45 compute-0 sudo[56556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:45 compute-0 python3.9[56558]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:07:45 compute-0 sudo[56556]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:46 compute-0 sudo[56709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbmhyusytugzqqipzvkdjspealggzcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400065.6336744-189-266212762775919/AnsiballZ_service_facts.py'
Nov 29 07:07:46 compute-0 sudo[56709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:46 compute-0 python3.9[56711]: ansible-service_facts Invoked
Nov 29 07:07:46 compute-0 network[56728]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:07:46 compute-0 network[56729]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:07:46 compute-0 network[56730]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:07:48 compute-0 sudo[56709]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:50 compute-0 sudo[57013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzsfiladosfopxuuinjfvwrefwibiktm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764400069.73109-204-221299225719671/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764400069.73109-204-221299225719671/args'
Nov 29 07:07:50 compute-0 sudo[57013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:50 compute-0 sudo[57013]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:50 compute-0 sudo[57180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjvtbutqyorqyfplbzimhspanjgjthsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400070.4431171-215-239680349442834/AnsiballZ_dnf.py'
Nov 29 07:07:50 compute-0 sudo[57180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:51 compute-0 python3.9[57182]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:07:52 compute-0 sudo[57180]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:53 compute-0 sudo[57333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bveznsmhagffgafxcibbxmldnvinbrjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400072.9105701-228-263030432262787/AnsiballZ_package_facts.py'
Nov 29 07:07:53 compute-0 sudo[57333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:54 compute-0 python3.9[57335]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 07:07:54 compute-0 sudo[57333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:55 compute-0 sudo[57485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rweehhblhmyxpzzxhfjilruhnofqfoue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400074.7441714-238-107279020783960/AnsiballZ_stat.py'
Nov 29 07:07:55 compute-0 sudo[57485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:55 compute-0 python3.9[57487]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:07:55 compute-0 sudo[57485]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:55 compute-0 sudo[57610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfudbkcbxwpaydukypajfldpyxqlgjxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400074.7441714-238-107279020783960/AnsiballZ_copy.py'
Nov 29 07:07:55 compute-0 sudo[57610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:55 compute-0 python3.9[57612]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400074.7441714-238-107279020783960/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:55 compute-0 sudo[57610]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:56 compute-0 sudo[57764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrywlgsktpjxjgngnfsqfksinjqfeesh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400076.1424913-253-211630138494316/AnsiballZ_stat.py'
Nov 29 07:07:56 compute-0 sudo[57764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:56 compute-0 python3.9[57766]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:07:56 compute-0 sudo[57764]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:57 compute-0 sudo[57889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgzyzhiusgpnvhcauexaxwfdsbujqnns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400076.1424913-253-211630138494316/AnsiballZ_copy.py'
Nov 29 07:07:57 compute-0 sudo[57889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:57 compute-0 python3.9[57891]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400076.1424913-253-211630138494316/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:57 compute-0 sudo[57889]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:58 compute-0 sudo[58043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isogilhgltsnmyfiiasdpgrygxqywect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400077.771773-274-63012365669356/AnsiballZ_lineinfile.py'
Nov 29 07:07:58 compute-0 sudo[58043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:58 compute-0 python3.9[58045]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:07:58 compute-0 sudo[58043]: pam_unix(sudo:session): session closed for user root
Nov 29 07:07:59 compute-0 sudo[58197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qudakvpfzfefezkubhdehwqzotuyaywy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400079.1536818-289-120286213755911/AnsiballZ_setup.py'
Nov 29 07:07:59 compute-0 sudo[58197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:07:59 compute-0 python3.9[58199]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:07:59 compute-0 sudo[58197]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:00 compute-0 sudo[58281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nojwoxdktllnovacrulerluibtvnnjqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400079.1536818-289-120286213755911/AnsiballZ_systemd.py'
Nov 29 07:08:00 compute-0 sudo[58281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:00 compute-0 python3.9[58283]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:08:01 compute-0 sudo[58281]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:01 compute-0 sudo[58435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydpjtslilaurylzjjpteyazjdybnuyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400081.5989568-305-99182815096788/AnsiballZ_setup.py'
Nov 29 07:08:01 compute-0 sudo[58435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:02 compute-0 python3.9[58437]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:08:02 compute-0 sudo[58435]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:02 compute-0 sudo[58519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivfavfvclodkjyyatzlnckskanfeqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400081.5989568-305-99182815096788/AnsiballZ_systemd.py'
Nov 29 07:08:02 compute-0 sudo[58519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:03 compute-0 python3.9[58521]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:08:03 compute-0 chronyd[795]: chronyd exiting
Nov 29 07:08:03 compute-0 systemd[1]: Stopping NTP client/server...
Nov 29 07:08:03 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 07:08:03 compute-0 systemd[1]: Stopped NTP client/server.
Nov 29 07:08:03 compute-0 systemd[1]: Starting NTP client/server...
Nov 29 07:08:03 compute-0 chronyd[58530]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 07:08:03 compute-0 chronyd[58530]: Frequency -26.814 +/- 0.204 ppm read from /var/lib/chrony/drift
Nov 29 07:08:03 compute-0 chronyd[58530]: Loaded seccomp filter (level 2)
Nov 29 07:08:03 compute-0 systemd[1]: Started NTP client/server.
Nov 29 07:08:03 compute-0 sudo[58519]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:03 compute-0 sshd-session[53582]: Connection closed by 192.168.122.30 port 44866
Nov 29 07:08:03 compute-0 sshd-session[53579]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:08:03 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 07:08:03 compute-0 systemd[1]: session-11.scope: Consumed 27.390s CPU time.
Nov 29 07:08:03 compute-0 systemd-logind[807]: Session 11 logged out. Waiting for processes to exit.
Nov 29 07:08:03 compute-0 systemd-logind[807]: Removed session 11.
Nov 29 07:08:18 compute-0 sshd-session[58556]: Accepted publickey for zuul from 192.168.122.30 port 39850 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:08:18 compute-0 systemd-logind[807]: New session 12 of user zuul.
Nov 29 07:08:18 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 29 07:08:18 compute-0 sshd-session[58556]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:08:19 compute-0 sudo[58709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyikyjgjidbnojdaajbhrhcbmkfftity ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400098.8994663-22-261939328414570/AnsiballZ_file.py'
Nov 29 07:08:19 compute-0 sudo[58709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:19 compute-0 python3.9[58711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:19 compute-0 sudo[58709]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:20 compute-0 sudo[58861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihqkrdiemidbhauundxalcbebwybmecs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400099.798833-34-101932569043272/AnsiballZ_stat.py'
Nov 29 07:08:20 compute-0 sudo[58861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:20 compute-0 python3.9[58863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:20 compute-0 sudo[58861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:21 compute-0 sudo[58984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opkuwzpollnddonoombwvybcjnpczyqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400099.798833-34-101932569043272/AnsiballZ_copy.py'
Nov 29 07:08:21 compute-0 sudo[58984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:21 compute-0 python3.9[58986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400099.798833-34-101932569043272/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:21 compute-0 sudo[58984]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:21 compute-0 sshd-session[58559]: Connection closed by 192.168.122.30 port 39850
Nov 29 07:08:21 compute-0 sshd-session[58556]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:08:21 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 07:08:21 compute-0 systemd[1]: session-12.scope: Consumed 1.680s CPU time.
Nov 29 07:08:21 compute-0 systemd-logind[807]: Session 12 logged out. Waiting for processes to exit.
Nov 29 07:08:21 compute-0 systemd-logind[807]: Removed session 12.
Nov 29 07:08:29 compute-0 sshd-session[59011]: Accepted publickey for zuul from 192.168.122.30 port 57178 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:08:29 compute-0 systemd-logind[807]: New session 13 of user zuul.
Nov 29 07:08:29 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 29 07:08:29 compute-0 sshd-session[59011]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:08:30 compute-0 python3.9[59164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:08:31 compute-0 sudo[59318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylpxtyckwyagephuqgwvwpheomaonqly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400111.3332515-33-201800242824628/AnsiballZ_file.py'
Nov 29 07:08:31 compute-0 sudo[59318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:32 compute-0 python3.9[59320]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:32 compute-0 sudo[59318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:33 compute-0 sudo[59493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiekdbeivctlhzjtpccnxsmmdfeujjlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400112.6559367-41-47963063395362/AnsiballZ_stat.py'
Nov 29 07:08:33 compute-0 sudo[59493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:33 compute-0 python3.9[59495]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:33 compute-0 sudo[59493]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:34 compute-0 sudo[59616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocebhfngfzpfottnsrmtnnnjzvsncolx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400112.6559367-41-47963063395362/AnsiballZ_copy.py'
Nov 29 07:08:34 compute-0 sudo[59616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:34 compute-0 python3.9[59618]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764400112.6559367-41-47963063395362/.source.json _original_basename=.0z62gll8 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:34 compute-0 sudo[59616]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:35 compute-0 sudo[59768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqfhgaajgfhkmlwfovrkksjstnyigltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400114.9132628-64-14008001194712/AnsiballZ_stat.py'
Nov 29 07:08:35 compute-0 sudo[59768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:35 compute-0 python3.9[59770]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:35 compute-0 sudo[59768]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:35 compute-0 sudo[59891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacwrfsupzryyvpqhxvnuntiosnfghmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400114.9132628-64-14008001194712/AnsiballZ_copy.py'
Nov 29 07:08:35 compute-0 sudo[59891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:35 compute-0 python3.9[59893]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400114.9132628-64-14008001194712/.source _original_basename=.jm1hpkrj follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:36 compute-0 sudo[59891]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:36 compute-0 sudo[60043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuowqxbyiowlzgbblmlrsqopxcaroxrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400116.2304878-80-35366416095888/AnsiballZ_file.py'
Nov 29 07:08:36 compute-0 sudo[60043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:36 compute-0 python3.9[60045]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:08:36 compute-0 sudo[60043]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:37 compute-0 sudo[60195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tasixzsmwhcqzytzmhpefgtydivmxacb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400116.9876935-88-98748442715416/AnsiballZ_stat.py'
Nov 29 07:08:37 compute-0 sudo[60195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:37 compute-0 python3.9[60197]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:37 compute-0 sudo[60195]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:37 compute-0 sudo[60318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czwcmwxfmobiffnffqsylifqedhvudqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400116.9876935-88-98748442715416/AnsiballZ_copy.py'
Nov 29 07:08:37 compute-0 sudo[60318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:38 compute-0 python3.9[60320]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400116.9876935-88-98748442715416/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:08:38 compute-0 sudo[60318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:38 compute-0 sudo[60470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngvzueydnngibjaryroyvvwsnvcyqibi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400118.2060368-88-222637474717722/AnsiballZ_stat.py'
Nov 29 07:08:38 compute-0 sudo[60470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:38 compute-0 python3.9[60472]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:38 compute-0 sudo[60470]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:39 compute-0 sudo[60593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdsgzotahonodrvmfbqbelbdawftmfss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400118.2060368-88-222637474717722/AnsiballZ_copy.py'
Nov 29 07:08:39 compute-0 sudo[60593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:39 compute-0 python3.9[60595]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400118.2060368-88-222637474717722/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:08:39 compute-0 sudo[60593]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:39 compute-0 sudo[60745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfnljtbgiwhijoamdpcrnkmsntbmhxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400119.4876661-117-121751409699225/AnsiballZ_file.py'
Nov 29 07:08:39 compute-0 sudo[60745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:39 compute-0 python3.9[60747]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:40 compute-0 sudo[60745]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:40 compute-0 sudo[60897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmfhcswulhdzdhwxmzjhxkwsegxqtfbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400120.2079675-125-65947884387730/AnsiballZ_stat.py'
Nov 29 07:08:40 compute-0 sudo[60897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:40 compute-0 python3.9[60899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:40 compute-0 sudo[60897]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:41 compute-0 sudo[61021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acmpbempftdveduwrsntyuajuhwqrdus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400120.2079675-125-65947884387730/AnsiballZ_copy.py'
Nov 29 07:08:41 compute-0 sudo[61021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:41 compute-0 python3.9[61023]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400120.2079675-125-65947884387730/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:41 compute-0 sudo[61021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:41 compute-0 sudo[61173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heymyruwisrmknyhciewgplezwatcknc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400121.4434414-140-186362748717255/AnsiballZ_stat.py'
Nov 29 07:08:41 compute-0 sudo[61173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:41 compute-0 python3.9[61175]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:41 compute-0 sudo[61173]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:42 compute-0 sudo[61296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfdjarfhwhernuhiiztginyfmqjnqzvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400121.4434414-140-186362748717255/AnsiballZ_copy.py'
Nov 29 07:08:42 compute-0 sudo[61296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:42 compute-0 python3.9[61298]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400121.4434414-140-186362748717255/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:42 compute-0 sudo[61296]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:43 compute-0 sudo[61448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzeuyydsibcvvskrswfxvxqdbtikhdoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400122.6649904-155-7592690502654/AnsiballZ_systemd.py'
Nov 29 07:08:43 compute-0 sudo[61448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:43 compute-0 python3.9[61450]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:08:43 compute-0 systemd[1]: Reloading.
Nov 29 07:08:43 compute-0 systemd-sysv-generator[61481]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:08:43 compute-0 systemd-rc-local-generator[61474]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:08:44 compute-0 systemd[1]: Reloading.
Nov 29 07:08:44 compute-0 systemd-rc-local-generator[61511]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:08:44 compute-0 systemd-sysv-generator[61514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:08:44 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 07:08:44 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 07:08:44 compute-0 sudo[61448]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:44 compute-0 sudo[61676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzvoceioxalltzujrdevfxvkrahhntkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400124.6434631-163-135891947861945/AnsiballZ_stat.py'
Nov 29 07:08:44 compute-0 sudo[61676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:45 compute-0 python3.9[61678]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:45 compute-0 sudo[61676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:45 compute-0 sudo[61799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfdfchnchofqcfuweovhnyoeradmsgtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400124.6434631-163-135891947861945/AnsiballZ_copy.py'
Nov 29 07:08:45 compute-0 sudo[61799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:45 compute-0 python3.9[61801]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400124.6434631-163-135891947861945/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:45 compute-0 sudo[61799]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:46 compute-0 sudo[61951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvqwdzxfktsyrlvmnlcfvurslyqctqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400125.8587925-178-158218275343383/AnsiballZ_stat.py'
Nov 29 07:08:46 compute-0 sudo[61951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:46 compute-0 python3.9[61953]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:46 compute-0 sudo[61951]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:46 compute-0 sudo[62074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcduudyziwtcdfenusqbikaylhqrtmdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400125.8587925-178-158218275343383/AnsiballZ_copy.py'
Nov 29 07:08:46 compute-0 sudo[62074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:46 compute-0 python3.9[62076]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400125.8587925-178-158218275343383/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:46 compute-0 sudo[62074]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:47 compute-0 sudo[62226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwsdjwvrxgadqqqwmffabcybktvkrmfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400127.090587-193-233234231486430/AnsiballZ_systemd.py'
Nov 29 07:08:47 compute-0 sudo[62226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:47 compute-0 systemd-rc-local-generator[62251]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:08:47 compute-0 systemd-sysv-generator[62256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:08:47 compute-0 python3.9[62228]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:08:47 compute-0 systemd[1]: Reloading.
Nov 29 07:08:48 compute-0 systemd[1]: Reloading.
Nov 29 07:08:48 compute-0 systemd-sysv-generator[62296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:08:48 compute-0 systemd-rc-local-generator[62292]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:08:48 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:08:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:08:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:08:48 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:08:48 compute-0 sudo[62226]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:49 compute-0 python3.9[62455]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:08:49 compute-0 network[62472]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:08:49 compute-0 network[62473]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:08:49 compute-0 network[62474]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:08:52 compute-0 sudo[62734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfattrbuufyzsvmyhqgorbujarxkhfou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400132.244587-209-219815810603382/AnsiballZ_systemd.py'
Nov 29 07:08:52 compute-0 sudo[62734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:53 compute-0 python3.9[62736]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:08:53 compute-0 systemd[1]: Reloading.
Nov 29 07:08:53 compute-0 systemd-rc-local-generator[62766]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:08:53 compute-0 systemd-sysv-generator[62770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:08:53 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 07:08:53 compute-0 iptables.init[62778]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 07:08:53 compute-0 iptables.init[62778]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 07:08:53 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 07:08:53 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 07:08:53 compute-0 sudo[62734]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:54 compute-0 sudo[62972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxwpwerxitrwgcmknhhmsprekmzfgnnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400133.877919-209-278889818458955/AnsiballZ_systemd.py'
Nov 29 07:08:54 compute-0 sudo[62972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:54 compute-0 python3.9[62974]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:08:54 compute-0 sudo[62972]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:55 compute-0 sudo[63126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imytedxnxvnqugawktdxzpcqqgwvxxjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400134.7843158-225-177870830845351/AnsiballZ_systemd.py'
Nov 29 07:08:55 compute-0 sudo[63126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:55 compute-0 python3.9[63128]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:08:56 compute-0 systemd[1]: Reloading.
Nov 29 07:08:56 compute-0 systemd-sysv-generator[63159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:08:56 compute-0 systemd-rc-local-generator[63155]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:08:57 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 29 07:08:57 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 29 07:08:57 compute-0 sudo[63126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:57 compute-0 sudo[63319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olwbzbqktfruxbybhjguplgctwqekrpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400137.314749-233-235345102299200/AnsiballZ_command.py'
Nov 29 07:08:57 compute-0 sudo[63319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:58 compute-0 python3.9[63321]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:08:58 compute-0 sudo[63319]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:58 compute-0 sudo[63472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxwdbhbhteiwcelazauywxpzubkodjbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400138.4526088-247-232576273169168/AnsiballZ_stat.py'
Nov 29 07:08:58 compute-0 sudo[63472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:58 compute-0 python3.9[63474]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:08:58 compute-0 sudo[63472]: pam_unix(sudo:session): session closed for user root
Nov 29 07:08:59 compute-0 sudo[63597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edmydzjjmeixtzydjdbxrraqsojfvwcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400138.4526088-247-232576273169168/AnsiballZ_copy.py'
Nov 29 07:08:59 compute-0 sudo[63597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:08:59 compute-0 python3.9[63599]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400138.4526088-247-232576273169168/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:08:59 compute-0 sudo[63597]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:00 compute-0 sudo[63750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnlkvhbadybqtuxygujvvderyljykbyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400139.7804453-262-279064948227626/AnsiballZ_systemd.py'
Nov 29 07:09:00 compute-0 sudo[63750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:00 compute-0 python3.9[63752]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:09:00 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 07:09:00 compute-0 sshd[1008]: Received SIGHUP; restarting.
Nov 29 07:09:00 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 07:09:00 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Nov 29 07:09:00 compute-0 sshd[1008]: Server listening on :: port 22.
Nov 29 07:09:00 compute-0 sudo[63750]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:00 compute-0 sudo[63906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smmwdivsrufuvfeapddwrfoxmewmqxef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400140.6434474-270-247536997350435/AnsiballZ_file.py'
Nov 29 07:09:00 compute-0 sudo[63906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:01 compute-0 python3.9[63908]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:01 compute-0 sudo[63906]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:01 compute-0 sudo[64058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beysnysjvkgxwxtkqgdwenybmjmrczbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400141.314806-278-231764889238953/AnsiballZ_stat.py'
Nov 29 07:09:01 compute-0 sudo[64058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:01 compute-0 python3.9[64060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:01 compute-0 sudo[64058]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:02 compute-0 sudo[64181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltcutjdozfckixuenkzzikzkjbdimzwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400141.314806-278-231764889238953/AnsiballZ_copy.py'
Nov 29 07:09:02 compute-0 sudo[64181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:02 compute-0 python3.9[64183]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400141.314806-278-231764889238953/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:02 compute-0 sudo[64181]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:03 compute-0 sudo[64333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quvhxnbdbciewaiyqnpuzzfcrvrdnfpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400142.6689885-296-251544336962878/AnsiballZ_timezone.py'
Nov 29 07:09:03 compute-0 sudo[64333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:03 compute-0 python3.9[64335]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 07:09:03 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 07:09:03 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 07:09:03 compute-0 sudo[64333]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:04 compute-0 sudo[64489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsvtxzvxnrqakxuksmxaxkidieoigfcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400143.825036-305-262215472959650/AnsiballZ_file.py'
Nov 29 07:09:04 compute-0 sudo[64489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:04 compute-0 python3.9[64491]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:04 compute-0 sudo[64489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:04 compute-0 sudo[64641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibbmtvfxowonivcqqwmzleoyxemftiyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400144.4568658-313-129409527369652/AnsiballZ_stat.py'
Nov 29 07:09:04 compute-0 sudo[64641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:04 compute-0 python3.9[64643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:04 compute-0 sudo[64641]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:05 compute-0 sudo[64764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqvqzanrtnocqlgumydbhfvjfgyjtgnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400144.4568658-313-129409527369652/AnsiballZ_copy.py'
Nov 29 07:09:05 compute-0 sudo[64764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:05 compute-0 python3.9[64766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400144.4568658-313-129409527369652/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:05 compute-0 sudo[64764]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:06 compute-0 sudo[64916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofbetsmkijzdrftiulykvnftoohyifti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400145.8526344-328-34279281440595/AnsiballZ_stat.py'
Nov 29 07:09:06 compute-0 sudo[64916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:06 compute-0 python3.9[64918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:06 compute-0 sudo[64916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:06 compute-0 sudo[65039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mreglbrfequgemshsrlknouyqvtgdfmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400145.8526344-328-34279281440595/AnsiballZ_copy.py'
Nov 29 07:09:06 compute-0 sudo[65039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:06 compute-0 python3.9[65041]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400145.8526344-328-34279281440595/.source.yaml _original_basename=.njy86sut follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:06 compute-0 sudo[65039]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:07 compute-0 sudo[65191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaxgifiainagncafoiizwbejgwjsbhyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400147.0441465-343-101901954351855/AnsiballZ_stat.py'
Nov 29 07:09:07 compute-0 sudo[65191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:07 compute-0 python3.9[65193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:07 compute-0 sudo[65191]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:07 compute-0 sudo[65314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahywitsyjeeukfsqvexoztnbvkgpjvsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400147.0441465-343-101901954351855/AnsiballZ_copy.py'
Nov 29 07:09:07 compute-0 sudo[65314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:08 compute-0 python3.9[65316]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400147.0441465-343-101901954351855/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:08 compute-0 sudo[65314]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:08 compute-0 sudo[65466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxwtwnkwputbtvjogaqexirdwrvjnbkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400148.2435968-358-226009741331167/AnsiballZ_command.py'
Nov 29 07:09:08 compute-0 sudo[65466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:08 compute-0 python3.9[65468]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:09:08 compute-0 sudo[65466]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:09 compute-0 sudo[65619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrflkpkysbkpmsfwfpswaxgxhweptrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400148.8786352-366-3068643897383/AnsiballZ_command.py'
Nov 29 07:09:09 compute-0 sudo[65619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:09 compute-0 python3.9[65621]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:09:09 compute-0 sudo[65619]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:10 compute-0 sudo[65772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrjosjvmgqwtmifxcjqnyqpukmchxbz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400149.5210364-374-206203187447422/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:09:10 compute-0 sudo[65772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:10 compute-0 python3[65774]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:09:10 compute-0 sudo[65772]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:11 compute-0 sudo[65924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buroujexjqcwhhamfmmfiviczimzblry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400150.803349-382-263756062760003/AnsiballZ_stat.py'
Nov 29 07:09:11 compute-0 sudo[65924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:11 compute-0 python3.9[65926]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:11 compute-0 sudo[65924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:11 compute-0 sudo[66047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdfeqousdkqvokccbujploysxwsjmudb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400150.803349-382-263756062760003/AnsiballZ_copy.py'
Nov 29 07:09:11 compute-0 sudo[66047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:11 compute-0 python3.9[66049]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400150.803349-382-263756062760003/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:11 compute-0 sudo[66047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:12 compute-0 sudo[66199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vodfhvglgdzhnkuelwqqfslqnbygclwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400152.0639498-397-238495652510695/AnsiballZ_stat.py'
Nov 29 07:09:12 compute-0 sudo[66199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:12 compute-0 python3.9[66201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:12 compute-0 sudo[66199]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:13 compute-0 sudo[66322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wydwxpxoopbvpbksoafpchotbmezjwcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400152.0639498-397-238495652510695/AnsiballZ_copy.py'
Nov 29 07:09:13 compute-0 sudo[66322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:13 compute-0 python3.9[66324]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400152.0639498-397-238495652510695/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:13 compute-0 sudo[66322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:13 compute-0 sudo[66474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsqvapvhlihalvchwaihgilgtkkozhee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400153.4667315-412-170470586136593/AnsiballZ_stat.py'
Nov 29 07:09:13 compute-0 sudo[66474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:14 compute-0 python3.9[66476]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:14 compute-0 sudo[66474]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:14 compute-0 sudo[66597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvzksorwnbdkflbjdgswbyidejqannx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400153.4667315-412-170470586136593/AnsiballZ_copy.py'
Nov 29 07:09:14 compute-0 sudo[66597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:14 compute-0 python3.9[66599]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400153.4667315-412-170470586136593/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:14 compute-0 sudo[66597]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:15 compute-0 sudo[66749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubcxywriacbjwpybvtzcebjmlaxwemqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400154.7611387-427-95833928769191/AnsiballZ_stat.py'
Nov 29 07:09:15 compute-0 sudo[66749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:15 compute-0 python3.9[66751]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:15 compute-0 sudo[66749]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:15 compute-0 sudo[66872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcsihzwjuibrbfmuuizyhxrpeiimzho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400154.7611387-427-95833928769191/AnsiballZ_copy.py'
Nov 29 07:09:15 compute-0 sudo[66872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:15 compute-0 python3.9[66874]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400154.7611387-427-95833928769191/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:15 compute-0 sudo[66872]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:16 compute-0 sudo[67024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmckfjcwvywwpbrnfflobyfoqdwxkxxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400156.0459263-442-23503061631107/AnsiballZ_stat.py'
Nov 29 07:09:16 compute-0 sudo[67024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:16 compute-0 python3.9[67026]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:09:16 compute-0 sudo[67024]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:16 compute-0 sudo[67147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tesvxqvbxlmwrylboveczkgegnuzfptn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400156.0459263-442-23503061631107/AnsiballZ_copy.py'
Nov 29 07:09:16 compute-0 sudo[67147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:17 compute-0 python3.9[67149]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400156.0459263-442-23503061631107/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:17 compute-0 sudo[67147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:17 compute-0 sudo[67299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kloifzzyvqenlfonmhqdzgerplxqfnvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400157.3764007-457-158313869930616/AnsiballZ_file.py'
Nov 29 07:09:17 compute-0 sudo[67299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:17 compute-0 python3.9[67301]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:17 compute-0 sudo[67299]: pam_unix(sudo:session): session closed for user root
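
[annotation] The touched file carries no content; it is a change flag. It is created here after the rule fragments are rewritten, then stat'ed by the apply play (at 07:09:47 below) and deleted once the new rules have been loaded. The idiom, condensed:

    # Render step: flag that edpm-rules.nft changed.
    touch /etc/nftables/edpm-rules.nft.changed
    # Apply step: reload only when the flag exists, then clear it.
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        : # stream the flush/rules/jump fragments into nft (see 07:09:48)
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi
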
Nov 29 07:09:18 compute-0 sudo[67451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdtuzcbnddalixeopnnazobegwrkruxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400158.065912-465-179289939680049/AnsiballZ_command.py'
Nov 29 07:09:18 compute-0 sudo[67451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:18 compute-0 python3.9[67453]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:09:18 compute-0 sudo[67451]: pam_unix(sudo:session): session closed for user root
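
[annotation] This is a dry run: the five fragments are concatenated in their intended load order and parsed with nft's check-only flag, so a syntax or reference error aborts the play before the kernel ruleset is touched. The command from the log, reformatted:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft \
      | nft -c -f -    # -c: check/parse only, do not commit
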
Nov 29 07:09:19 compute-0 sudo[67610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbpoaufoqyfomamjqooaehzkjdyeejtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400158.779938-473-270470636004792/AnsiballZ_blockinfile.py'
Nov 29 07:09:19 compute-0 sudo[67610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:19 compute-0 python3.9[67612]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:19 compute-0 sudo[67610]: pam_unix(sudo:session): session closed for user root
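
[annotation] Given the block, marker, marker_begin, and marker_end parameters logged above, the fragment that blockinfile maintains in /etc/sysconfig/nftables.conf should look as follows (reconstructed; the validate='nft -c -f %s' option makes nft parse the whole candidate file before it replaces the original):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
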
Nov 29 07:09:20 compute-0 sudo[67763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eszxwitlmqmgsbdejhgaafhwnohmbfsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400159.7794883-482-276461662126125/AnsiballZ_file.py'
Nov 29 07:09:20 compute-0 sudo[67763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:20 compute-0 python3.9[67765]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:20 compute-0 sudo[67763]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:20 compute-0 sudo[67915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiwnkymxpetjagrbiivwmttfzfoixiwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400160.5148869-482-82423782960205/AnsiballZ_file.py'
Nov 29 07:09:20 compute-0 sudo[67915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:21 compute-0 python3.9[67917]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:21 compute-0 sudo[67915]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:21 compute-0 sudo[68067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wikhqgtttchqtsfuyxziaufbgllnfjdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400161.3384125-497-116114176804965/AnsiballZ_mount.py'
Nov 29 07:09:21 compute-0 sudo[68067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:22 compute-0 python3.9[68069]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:09:22 compute-0 sudo[68067]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:22 compute-0 sudo[68220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytexbjhdhzagyyzfwmdasqhzisywqyph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400162.2468212-497-135323975718156/AnsiballZ_mount.py'
Nov 29 07:09:22 compute-0 sudo[68220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:22 compute-0 python3.9[68222]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:09:22 compute-0 sudo[68220]: pam_unix(sudo:session): session closed for user root
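
[annotation] Two hugetlbfs instances are mounted with explicit page sizes so 1 GiB and 2 MiB huge pages can be consumed separately (typically by QEMU or OVS-DPDK on a compute node). state=mounted together with boot=True both mounts the filesystem now and persists it; the manual equivalent is roughly:

    mkdir -p /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # Persisted by the module as fstab entries along the lines of:
    #   none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0 0
    #   none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0 0
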
Nov 29 07:09:23 compute-0 sshd-session[59014]: Connection closed by 192.168.122.30 port 57178
Nov 29 07:09:23 compute-0 sshd-session[59011]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:09:23 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 07:09:23 compute-0 systemd[1]: session-13.scope: Consumed 36.744s CPU time.
Nov 29 07:09:23 compute-0 systemd-logind[807]: Session 13 logged out. Waiting for processes to exit.
Nov 29 07:09:23 compute-0 systemd-logind[807]: Removed session 13.
Nov 29 07:09:30 compute-0 sshd-session[68248]: Accepted publickey for zuul from 192.168.122.30 port 54766 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:09:30 compute-0 systemd-logind[807]: New session 14 of user zuul.
Nov 29 07:09:30 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 29 07:09:30 compute-0 sshd-session[68248]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:09:30 compute-0 sudo[68401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnffgrvcavqnevusfbokyujoodusywi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400170.2016466-16-216915721689259/AnsiballZ_tempfile.py'
Nov 29 07:09:30 compute-0 sudo[68401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:30 compute-0 python3.9[68403]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 07:09:30 compute-0 sudo[68401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:31 compute-0 sudo[68553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfeklqxdkhvleselrxbtuzqvemqojjja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400171.171577-28-108962001680888/AnsiballZ_stat.py'
Nov 29 07:09:31 compute-0 sudo[68553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:31 compute-0 python3.9[68555]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:09:31 compute-0 sudo[68553]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:32 compute-0 sudo[68705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oztwtzgkwhcsdzmyegvqpksrptzotszg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400172.0897813-38-230575849187400/AnsiballZ_setup.py'
Nov 29 07:09:32 compute-0 sudo[68705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:33 compute-0 python3.9[68707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:09:33 compute-0 sudo[68705]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:33 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 07:09:33 compute-0 sudo[68859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzziznfymxhuuewtxlvfefyrmyceduir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400173.3215864-47-129629890220132/AnsiballZ_blockinfile.py'
Nov 29 07:09:33 compute-0 sudo[68859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:34 compute-0 python3.9[68861]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCza5ScvnoM/dqQpaH+pvxwwKnNah93wNZa7JkYjYwcf0yzvTcgB7IrdaPpAf5eKVndtUyXmuruiZQSyBMhatW+OmlsmvNubCZeHO9GtMqkyN6eHYYmdkMmu+vGtio3ULiYYvbsjqJATEfYAvDYeme2YoH1RXQ1e1EY+kTGZoeI6Y9V85ZNO2094ciXmznQ14DqBuxwwYqByOmXgdicstYSeSDC8EXEB68Ext+sts+Gw0ac6A/wBdccTwvepraCPwR5AfJgg4oep7I5WiAld6KhDkFGkd4vknjxrvfMFbBvNRE90+ta7JcTzkloX8FHnQxlePa9UiN6/wH7Lmk7E3EzrvWkkQmx3t4kwZ5w5cxBXMKrRjQ3QnrM7G4Z5IC5ZzFGbr1tDqPmw3UoE2+0P97Ak9c02uhCosskOFkSnL7WBSvxMqjT0bJsL8YX6DMJEpty+w7cMhBlxWiJt4xdb+fIOOfor9NfiqCqgzcq6VDHZ3fFxG+qSWqTrLqBgtmZ528=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBUl1dE8VCsfZqAat9Qop5dua4RQ4wkN+XwdjeNkxaB
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAqjt56KYMdHywRK1fsT+jfYkSKLb885ExLCF7SqvFQibCZB692C/0zfgTGmaA0M2XuwDg5/jNkNgmlrs4vcqr4=
                                             create=True mode=0644 path=/tmp/ansible.891h6aln state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:34 compute-0 sudo[68859]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:34 compute-0 sudo[69011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdaklrfrxodmajopyrfdpzcdfmaqbrjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400174.187199-55-251920828247878/AnsiballZ_command.py'
Nov 29 07:09:34 compute-0 sudo[69011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:34 compute-0 python3.9[69013]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.891h6aln' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:09:34 compute-0 sudo[69011]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:35 compute-0 sudo[69165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfuilcjiydwencohtgmvoidzqukcsxat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400175.0397847-63-166713650393122/AnsiballZ_file.py'
Nov 29 07:09:35 compute-0 sudo[69165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:35 compute-0 python3.9[69167]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.891h6aln state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:35 compute-0 sudo[69165]: pam_unix(sudo:session): session closed for user root
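
[annotation] The sequence from 07:09:30 to here rebuilds the system-wide SSH known_hosts from gathered host-key facts without leaving it half-written: stage into a root-owned temp file, render the key block there, copy the whole file over /etc/ssh/ssh_known_hosts in one shell redirection, then remove the staging file. Condensed sketch (key material abbreviated):

    TMP=$(mktemp /tmp/ansible.XXXXXXXX)
    {
      echo '# BEGIN ANSIBLE MANAGED BLOCK'
      echo 'compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...'
      echo '# END ANSIBLE MANAGED BLOCK'
    } > "$TMP"
    cat "$TMP" > /etc/ssh/ssh_known_hosts   # single overwrite, not an append
    rm -f "$TMP"
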
Nov 29 07:09:36 compute-0 sshd-session[68251]: Connection closed by 192.168.122.30 port 54766
Nov 29 07:09:36 compute-0 sshd-session[68248]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:09:36 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 07:09:36 compute-0 systemd[1]: session-14.scope: Consumed 3.709s CPU time.
Nov 29 07:09:36 compute-0 systemd-logind[807]: Session 14 logged out. Waiting for processes to exit.
Nov 29 07:09:36 compute-0 systemd-logind[807]: Removed session 14.
Nov 29 07:09:42 compute-0 sshd-session[69192]: Accepted publickey for zuul from 192.168.122.30 port 34974 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:09:42 compute-0 systemd-logind[807]: New session 15 of user zuul.
Nov 29 07:09:42 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 29 07:09:42 compute-0 sshd-session[69192]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:09:43 compute-0 python3.9[69345]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:09:44 compute-0 sudo[69499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypkqqwaydhhvmmkcgvtbsrrffoupvoeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400184.269522-32-173127394332127/AnsiballZ_systemd.py'
Nov 29 07:09:44 compute-0 sudo[69499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:45 compute-0 python3.9[69501]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:09:45 compute-0 sudo[69499]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:45 compute-0 sudo[69653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwayxkuzmzokubkrkzzdjaclwpjcehq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400185.485655-40-152072519854987/AnsiballZ_systemd.py'
Nov 29 07:09:45 compute-0 sudo[69653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:46 compute-0 python3.9[69655]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:09:46 compute-0 sudo[69653]: pam_unix(sudo:session): session closed for user root
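
[annotation] These two systemd tasks are the idempotent equivalent of:

    systemctl enable sshd   # enabled=True; leaves running state alone
    systemctl start sshd    # state=started; leaves enablement alone
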
Nov 29 07:09:46 compute-0 sudo[69806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdndsqdsxksqamcgkfelnobmwxaakyre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400186.3072875-49-78557051920357/AnsiballZ_command.py'
Nov 29 07:09:46 compute-0 sudo[69806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:46 compute-0 python3.9[69808]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:09:47 compute-0 sudo[69806]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:47 compute-0 sudo[69959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ribloczhsxzndojxnrthnideiugdzsoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400187.1607103-57-66158009115331/AnsiballZ_stat.py'
Nov 29 07:09:47 compute-0 sudo[69959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:47 compute-0 python3.9[69961]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:09:47 compute-0 sudo[69959]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:48 compute-0 sudo[70113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxgxjalhyczeeyrvntrmdognitqfptfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400188.0852902-65-217530300189881/AnsiballZ_command.py'
Nov 29 07:09:48 compute-0 sudo[70113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:48 compute-0 python3.9[70115]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:09:48 compute-0 sudo[70113]: pam_unix(sudo:session): session closed for user root
Nov 29 07:09:49 compute-0 sudo[70268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yalfkojevfqenvgfbxhpzcztunwzoldx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400188.7527568-73-104926065687973/AnsiballZ_file.py'
Nov 29 07:09:49 compute-0 sudo[70268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:09:49 compute-0 python3.9[70270]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:09:49 compute-0 sudo[70268]: pam_unix(sudo:session): session closed for user root
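
[annotation] The apply sequence mirrors the earlier dry run: the chain definitions are loaded first so every jump target exists, and the flush fragment travels in the same nft transaction as the new rules, so chains are emptied and repopulated atomically rather than accumulating duplicates. In shell terms, as executed above:

    nft -f /etc/nftables/edpm-chains.nft          # (re)define tables and chains
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
      | nft -f -                                  # flush + reload in one transaction
    rm -f /etc/nftables/edpm-rules.nft.changed    # clear the change flag
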
Nov 29 07:09:49 compute-0 sshd-session[69195]: Connection closed by 192.168.122.30 port 34974
Nov 29 07:09:49 compute-0 sshd-session[69192]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:09:50 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 07:09:50 compute-0 systemd[1]: session-15.scope: Consumed 4.260s CPU time.
Nov 29 07:09:50 compute-0 systemd-logind[807]: Session 15 logged out. Waiting for processes to exit.
Nov 29 07:09:50 compute-0 systemd-logind[807]: Removed session 15.
Nov 29 07:09:57 compute-0 sshd-session[70295]: Accepted publickey for zuul from 192.168.122.30 port 59306 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:09:57 compute-0 systemd-logind[807]: New session 16 of user zuul.
Nov 29 07:09:57 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 29 07:09:57 compute-0 sshd-session[70295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:09:58 compute-0 python3.9[70448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:09:59 compute-0 sudo[70602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqtlcqjoasycvreitvvbkqfhdejrhsqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400199.3941793-34-280668269177217/AnsiballZ_setup.py'
Nov 29 07:09:59 compute-0 sudo[70602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:00 compute-0 python3.9[70604]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:10:00 compute-0 sudo[70602]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:00 compute-0 sudo[70686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dclrfkjuomedmxdocovivzkovxgavyws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400199.3941793-34-280668269177217/AnsiballZ_dnf.py'
Nov 29 07:10:00 compute-0 sudo[70686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:01 compute-0 python3.9[70688]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:10:02 compute-0 sudo[70686]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:03 compute-0 python3.9[70839]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:04 compute-0 python3.9[70990]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
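
[annotation] yum-utils is pulled in for its needs-restarting helper; with -r it checks only whether core packages (kernel, glibc, systemd, ...) were updated since boot and reports the verdict through its exit status. The directory scan is a second, file-based reboot flag used by the deployment itself. Roughly:

    if ! needs-restarting -r; then    # exit status 1 == reboot required
        echo "reboot required by package updates"
    fi
    # Any file here marks a reboot requested by an earlier deployment step:
    find /var/lib/openstack/reboot_required/ -type f
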
Nov 29 07:10:05 compute-0 python3.9[71140]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:10:05 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:10:06 compute-0 python3.9[71291]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:10:06 compute-0 sshd-session[70298]: Connection closed by 192.168.122.30 port 59306
Nov 29 07:10:06 compute-0 sshd-session[70295]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:10:06 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 07:10:06 compute-0 systemd[1]: session-16.scope: Consumed 6.349s CPU time.
Nov 29 07:10:06 compute-0 systemd-logind[807]: Session 16 logged out. Waiting for processes to exit.
Nov 29 07:10:06 compute-0 systemd-logind[807]: Removed session 16.
Nov 29 07:10:13 compute-0 chronyd[58530]: Selected source 23.133.168.246 (pool.ntp.org)
Nov 29 07:10:15 compute-0 sshd-session[71316]: Accepted publickey for zuul from 38.102.83.150 port 36968 ssh2: RSA SHA256:tfSy+7i0vpEWoIgjuhzAozE3pD3UuGTXW/vm6y9qu2w
Nov 29 07:10:15 compute-0 systemd-logind[807]: New session 17 of user zuul.
Nov 29 07:10:15 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 29 07:10:15 compute-0 sshd-session[71316]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:10:15 compute-0 sudo[71392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyojnnvqbhhwxjhmcfaqhzyjmqapsvbd ; /usr/bin/python3'
Nov 29 07:10:15 compute-0 sudo[71392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:17 compute-0 useradd[71396]: new group: name=ceph-admin, GID=42478
Nov 29 07:10:17 compute-0 useradd[71396]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 29 07:10:19 compute-0 sudo[71392]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:19 compute-0 sudo[71478]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gesewldwjwkigqnphdltioinwawngxxe ; /usr/bin/python3'
Nov 29 07:10:19 compute-0 sudo[71478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:19 compute-0 sudo[71478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:20 compute-0 sudo[71551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yafwuhvaviptyiavjwrxmpwkheluuxls ; /usr/bin/python3'
Nov 29 07:10:20 compute-0 sudo[71551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:20 compute-0 sudo[71551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:20 compute-0 sudo[71601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wilxfkimdakchiaahthxtovmlnfiowty ; /usr/bin/python3'
Nov 29 07:10:20 compute-0 sudo[71601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:20 compute-0 sudo[71601]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:20 compute-0 sudo[71627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pszzzohbwlpcszxxzrhfewjexnbzaixb ; /usr/bin/python3'
Nov 29 07:10:20 compute-0 sudo[71627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:21 compute-0 sudo[71627]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:21 compute-0 sudo[71653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aanoonamhqdjzvoqldikdrtifkhdtfep ; /usr/bin/python3'
Nov 29 07:10:21 compute-0 sudo[71653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:21 compute-0 sudo[71653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:21 compute-0 sudo[71679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwixzuncrehzvkpsgspjgypsfvurlraq ; /usr/bin/python3'
Nov 29 07:10:21 compute-0 sudo[71679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:21 compute-0 sudo[71679]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:22 compute-0 sudo[71757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqmaptvzqyrngyasmrxdbvmvybulbkh ; /usr/bin/python3'
Nov 29 07:10:22 compute-0 sudo[71757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:22 compute-0 sudo[71757]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:22 compute-0 sudo[71830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yokqwijtrjtnxvwmxsouxvxoxzxxymiz ; /usr/bin/python3'
Nov 29 07:10:22 compute-0 sudo[71830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:22 compute-0 sudo[71830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:23 compute-0 sudo[71932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwnsvbgcamcbvizdsgpplwygdufrnmwr ; /usr/bin/python3'
Nov 29 07:10:23 compute-0 sudo[71932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:23 compute-0 sudo[71932]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:23 compute-0 sudo[72005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpupgyaalnufbcnipxjltgxqdmfeluwl ; /usr/bin/python3'
Nov 29 07:10:23 compute-0 sudo[72005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:23 compute-0 sudo[72005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:24 compute-0 sudo[72055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlkdxwrpiprbhdqgxkzcicodycogctlq ; /usr/bin/python3'
Nov 29 07:10:24 compute-0 sudo[72055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:24 compute-0 python3[72057]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:10:25 compute-0 sudo[72055]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:25 compute-0 sudo[72150]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxyhkdkyzabsjlxwsyheqbdhcxjdnsex ; /usr/bin/python3'
Nov 29 07:10:25 compute-0 sudo[72150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:25 compute-0 python3[72152]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:10:27 compute-0 sudo[72150]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:27 compute-0 sudo[72177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qanaybakzrofhwiislpnefmjtvdkdamu ; /usr/bin/python3'
Nov 29 07:10:27 compute-0 sudo[72177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:27 compute-0 python3[72179]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:10:27 compute-0 sudo[72177]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:27 compute-0 sudo[72203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhdfqebjsbsmpqjcbrtgzskhkzbnxfk ; /usr/bin/python3'
Nov 29 07:10:27 compute-0 sudo[72203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:27 compute-0 python3[72205]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:28 compute-0 kernel: loop: module loaded
Nov 29 07:10:28 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 29 07:10:28 compute-0 sudo[72203]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:28 compute-0 sudo[72238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcbxxxdmezmrdwzyqylcljkyfiwhtkjd ; /usr/bin/python3'
Nov 29 07:10:28 compute-0 sudo[72238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:28 compute-0 python3[72240]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:28 compute-0 lvm[72243]: PV /dev/loop3 not used.
Nov 29 07:10:28 compute-0 lvm[72252]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:10:28 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 07:10:28 compute-0 sudo[72238]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:28 compute-0 lvm[72254]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 07:10:28 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 07:10:28 compute-0 sudo[72330]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpqhznglmgtgcpawddcyseyziovrguoh ; /usr/bin/python3'
Nov 29 07:10:28 compute-0 sudo[72330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:29 compute-0 python3[72332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:10:29 compute-0 sudo[72330]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:29 compute-0 sudo[72403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuqrtpvtldfufpxxlnuapjpvdxqvbnky ; /usr/bin/python3'
Nov 29 07:10:29 compute-0 sudo[72403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:29 compute-0 python3[72405]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400228.7880483-36224-203257001581191/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:10:29 compute-0 sudo[72403]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:29 compute-0 sudo[72453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvvmaplnugfmydoihhsmisyvbcvclysv ; /usr/bin/python3'
Nov 29 07:10:29 compute-0 sudo[72453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:30 compute-0 python3[72455]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:10:30 compute-0 systemd[1]: Reloading.
Nov 29 07:10:30 compute-0 systemd-rc-local-generator[72486]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:10:30 compute-0 systemd-sysv-generator[72489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:10:30 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 07:10:30 compute-0 bash[72495]: /dev/loop3: [64513]:4327939 (/var/lib/ceph-osd-0.img)
Nov 29 07:10:30 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 07:10:30 compute-0 lvm[72496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:10:30 compute-0 lvm[72496]: VG ceph_vg0 finished
Nov 29 07:10:30 compute-0 sudo[72453]: pam_unix(sudo:session): session closed for user root
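
[annotation] This block (07:10:27 to here) provisions one loop-backed OSD device, and the same cycle repeats below for /dev/loop4 and /dev/loop5: create a sparse 20 GiB image, attach it to a loop device, build a single-LV volume group on it, and install a oneshot systemd unit so the loop attachment survives reboots (loop mappings are not persistent). The shell steps verbatim from the log, plus a guess at the unit the template renders:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G  # sparse 20G image
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0

    # Plausible shape of ceph-osd-losetup-0.service (hypothetical; only the
    # unit's description and its losetup output appear in the log):
    #   [Service]
    #   Type=oneshot
    #   RemainAfterExit=yes
    #   ExecStart=/bin/bash -c 'losetup /dev/loop3 /var/lib/ceph-osd-0.img || losetup /dev/loop3'
    #   [Install]
    #   WantedBy=multi-user.target
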
Nov 29 07:10:30 compute-0 sudo[72520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihjgoutxvweonlrchexewadlpqzpnqf ; /usr/bin/python3'
Nov 29 07:10:30 compute-0 sudo[72520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:30 compute-0 python3[72522]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:10:32 compute-0 sudo[72520]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:32 compute-0 sudo[72547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncaxdlgebrnjprwxqsgwhsjheengyvg ; /usr/bin/python3'
Nov 29 07:10:32 compute-0 sudo[72547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:32 compute-0 python3[72549]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:10:32 compute-0 sudo[72547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:32 compute-0 sudo[72573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opzvkzwfpchewpcjkboevsbabdmfdqda ; /usr/bin/python3'
Nov 29 07:10:32 compute-0 sudo[72573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:32 compute-0 python3[72575]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:32 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 29 07:10:32 compute-0 sudo[72573]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:33 compute-0 sudo[72605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xplctcinlougneyyfxutqjettpbhjpvk ; /usr/bin/python3'
Nov 29 07:10:33 compute-0 sudo[72605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:33 compute-0 python3[72607]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:33 compute-0 lvm[72610]: PV /dev/loop4 not used.
Nov 29 07:10:33 compute-0 lvm[72612]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:10:33 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 29 07:10:33 compute-0 lvm[72623]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:10:33 compute-0 lvm[72623]: VG ceph_vg1 finished
Nov 29 07:10:33 compute-0 lvm[72621]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 29 07:10:33 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 29 07:10:33 compute-0 sudo[72605]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:33 compute-0 sudo[72699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzxiltfffunfpqcatkkuhbskgnbelnhy ; /usr/bin/python3'
Nov 29 07:10:33 compute-0 sudo[72699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:33 compute-0 python3[72701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:10:33 compute-0 sudo[72699]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:34 compute-0 sudo[72772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unacyjcljgewtdpcoetduuicbpjezyir ; /usr/bin/python3'
Nov 29 07:10:34 compute-0 sudo[72772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:34 compute-0 python3[72774]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400233.6556294-36251-48521319496024/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:10:34 compute-0 sudo[72772]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:34 compute-0 sudo[72822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaqdtvwjmxhtrzqiizagipjfnhuewzfx ; /usr/bin/python3'
Nov 29 07:10:34 compute-0 sudo[72822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:34 compute-0 python3[72824]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:10:34 compute-0 systemd[1]: Reloading.
Nov 29 07:10:34 compute-0 systemd-rc-local-generator[72854]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:10:34 compute-0 systemd-sysv-generator[72857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:10:35 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 07:10:35 compute-0 bash[72864]: /dev/loop4: [64513]:4327981 (/var/lib/ceph-osd-1.img)
Nov 29 07:10:35 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 07:10:35 compute-0 lvm[72865]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:10:35 compute-0 lvm[72865]: VG ceph_vg1 finished
Nov 29 07:10:35 compute-0 sudo[72822]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:35 compute-0 sudo[72889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zosiabwjzaovmqbvhslrgvtknvnihvrx ; /usr/bin/python3'
Nov 29 07:10:35 compute-0 sudo[72889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:35 compute-0 python3[72891]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:10:36 compute-0 sudo[72889]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:37 compute-0 sudo[72916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgbqmptjqzjyhqdzeeqjdrtsliqytwas ; /usr/bin/python3'
Nov 29 07:10:37 compute-0 sudo[72916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:37 compute-0 python3[72918]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:10:37 compute-0 sudo[72916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:37 compute-0 sudo[72942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kphywujhaacpukvwgmdelhyyddyxhjhe ; /usr/bin/python3'
Nov 29 07:10:37 compute-0 sudo[72942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:37 compute-0 python3[72944]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:37 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 29 07:10:37 compute-0 sudo[72942]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:37 compute-0 sudo[72975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqasrneuyoasmfprqlnwgbhgwtooykbu ; /usr/bin/python3'
Nov 29 07:10:37 compute-0 sudo[72975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:37 compute-0 python3[72977]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:37 compute-0 lvm[72980]: PV /dev/loop5 not used.
Nov 29 07:10:38 compute-0 lvm[72982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:10:38 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 29 07:10:38 compute-0 lvm[72991]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 29 07:10:38 compute-0 lvm[72993]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:10:38 compute-0 lvm[72993]: VG ceph_vg2 finished
Nov 29 07:10:38 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 29 07:10:38 compute-0 sudo[72975]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:38 compute-0 sudo[73069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjyfexoutyapjywtzbxmbippvxhxurb ; /usr/bin/python3'
Nov 29 07:10:38 compute-0 sudo[73069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:38 compute-0 python3[73071]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:10:38 compute-0 sudo[73069]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:39 compute-0 sudo[73142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kocrdwerbtyuchyvunryhtsriyefxnto ; /usr/bin/python3'
Nov 29 07:10:39 compute-0 sudo[73142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:39 compute-0 python3[73144]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400238.622773-36278-74308716042255/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:10:39 compute-0 sudo[73142]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:39 compute-0 sudo[73192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnuhddvqtryvvztpoxcdudpqpvdcyfhn ; /usr/bin/python3'
Nov 29 07:10:39 compute-0 sudo[73192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:39 compute-0 python3[73194]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:10:39 compute-0 systemd[1]: Reloading.
Nov 29 07:10:39 compute-0 systemd-rc-local-generator[73226]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:10:39 compute-0 systemd-sysv-generator[73230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:10:40 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 07:10:40 compute-0 bash[73235]: /dev/loop5: [64513]:4328581 (/var/lib/ceph-osd-2.img)
Nov 29 07:10:40 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 07:10:40 compute-0 lvm[73236]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:10:40 compute-0 lvm[73236]: VG ceph_vg2 finished
Nov 29 07:10:40 compute-0 sudo[73192]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:42 compute-0 python3[73260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:10:44 compute-0 sudo[73351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqposfiflffqziurosrscqlrbiqpcvrr ; /usr/bin/python3'
Nov 29 07:10:44 compute-0 sudo[73351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:44 compute-0 python3[73353]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 07:10:46 compute-0 groupadd[73359]: group added to /etc/group: name=cephadm, GID=992
Nov 29 07:10:46 compute-0 groupadd[73359]: group added to /etc/gshadow: name=cephadm
Nov 29 07:10:46 compute-0 groupadd[73359]: new group: name=cephadm, GID=992
Nov 29 07:10:46 compute-0 useradd[73366]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 29 07:10:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:10:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:10:46 compute-0 sudo[73351]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:10:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:10:46 compute-0 systemd[1]: run-r6c8a8f4405534c0faa31ee07e4c79bb7.service: Deactivated successfully.
Nov 29 07:10:46 compute-0 sudo[73462]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqlwyiodxjvdlevnglssoipzbmiqkhe ; /usr/bin/python3'
Nov 29 07:10:46 compute-0 sudo[73462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:46 compute-0 python3[73464]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:10:47 compute-0 sudo[73462]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:47 compute-0 sudo[73490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkqmcplhanzajvdlqmgqlzixvriyisec ; /usr/bin/python3'
Nov 29 07:10:47 compute-0 sudo[73490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:47 compute-0 python3[73492]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:47 compute-0 sudo[73490]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-0 sudo[73552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxbtqnfdpwhexlyrprgqckzcrkckgig ; /usr/bin/python3'
Nov 29 07:10:48 compute-0 sudo[73552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:48 compute-0 python3[73554]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:10:48 compute-0 sudo[73552]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-0 sudo[73578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwyzuwyoixfbnxktmwpekwaapvemtafd ; /usr/bin/python3'
Nov 29 07:10:48 compute-0 sudo[73578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:48 compute-0 python3[73580]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:10:48 compute-0 sudo[73578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:49 compute-0 sudo[73656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfzmlbioqpwrneoikelhwkmjqmfwxres ; /usr/bin/python3'
Nov 29 07:10:49 compute-0 sudo[73656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:49 compute-0 python3[73658]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:10:49 compute-0 sudo[73656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:49 compute-0 sudo[73729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phkorrgpvmtexasiqglgqjaziuznbuhk ; /usr/bin/python3'
Nov 29 07:10:49 compute-0 sudo[73729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:49 compute-0 python3[73731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400248.9081845-36425-248470831491802/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:10:49 compute-0 sudo[73729]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:50 compute-0 sudo[73831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eukjnirlwvftxqpibodwzdvapkbbdbsn ; /usr/bin/python3'
Nov 29 07:10:50 compute-0 sudo[73831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:50 compute-0 python3[73833]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:10:50 compute-0 sudo[73831]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:50 compute-0 sudo[73904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxclmeflbyrydptsllllcgeeloebodtu ; /usr/bin/python3'
Nov 29 07:10:50 compute-0 sudo[73904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:50 compute-0 python3[73906]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400250.148355-36443-89198309984478/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:10:50 compute-0 sudo[73904]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:51 compute-0 sudo[73954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxpiwcqvohohaddfagrrnmrvwsqnposp ; /usr/bin/python3'
Nov 29 07:10:51 compute-0 sudo[73954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:51 compute-0 python3[73956]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:10:51 compute-0 sudo[73954]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:51 compute-0 sudo[73982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxthgupcbrxdfahhudyrdvhydxvhvuwp ; /usr/bin/python3'
Nov 29 07:10:51 compute-0 sudo[73982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:51 compute-0 python3[73984]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:10:51 compute-0 sudo[73982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:52 compute-0 sudo[74010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdrlrgnaznjhyurefigjdtqobhfyrtat ; /usr/bin/python3'
Nov 29 07:10:52 compute-0 sudo[74010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:52 compute-0 python3[74012]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:10:52 compute-0 sudo[74010]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:52 compute-0 sudo[74038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavvhugcoembxaobatwaeonhqbnpmvvr ; /usr/bin/python3'
Nov 29 07:10:52 compute-0 sudo[74038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:10:52 compute-0 python3[74040]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:10:52 compute-0 sshd-session[74056]: Accepted publickey for ceph-admin from 192.168.122.100 port 60858 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:10:52 compute-0 systemd-logind[807]: New session 18 of user ceph-admin.
Nov 29 07:10:52 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 07:10:52 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 07:10:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 07:10:52 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 07:10:52 compute-0 systemd[74060]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:53 compute-0 systemd[74060]: Queued start job for default target Main User Target.
Nov 29 07:10:53 compute-0 systemd[74060]: Created slice User Application Slice.
Nov 29 07:10:53 compute-0 systemd[74060]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:10:53 compute-0 systemd[74060]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:10:53 compute-0 systemd[74060]: Reached target Paths.
Nov 29 07:10:53 compute-0 systemd[74060]: Reached target Timers.
Nov 29 07:10:53 compute-0 systemd[74060]: Starting D-Bus User Message Bus Socket...
Nov 29 07:10:53 compute-0 systemd[74060]: Starting Create User's Volatile Files and Directories...
Nov 29 07:10:53 compute-0 systemd[74060]: Finished Create User's Volatile Files and Directories.
Nov 29 07:10:53 compute-0 systemd[74060]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:10:53 compute-0 systemd[74060]: Reached target Sockets.
Nov 29 07:10:53 compute-0 systemd[74060]: Reached target Basic System.
Nov 29 07:10:53 compute-0 systemd[74060]: Reached target Main User Target.
Nov 29 07:10:53 compute-0 systemd[74060]: Startup finished in 134ms.
Nov 29 07:10:53 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 07:10:53 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Nov 29 07:10:53 compute-0 sshd-session[74056]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:10:53 compute-0 sudo[74077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 29 07:10:53 compute-0 sudo[74077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:10:53 compute-0 sudo[74077]: pam_unix(sudo:session): session closed for user root
Nov 29 07:10:53 compute-0 sshd-session[74076]: Received disconnect from 192.168.122.100 port 60858:11: disconnected by user
Nov 29 07:10:53 compute-0 sshd-session[74076]: Disconnected from user ceph-admin 192.168.122.100 port 60858
Nov 29 07:10:53 compute-0 sshd-session[74056]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 29 07:10:53 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 07:10:53 compute-0 systemd-logind[807]: Session 18 logged out. Waiting for processes to exit.
Nov 29 07:10:53 compute-0 systemd-logind[807]: Removed session 18.
Nov 29 07:10:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1033492672-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 07:11:03 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 07:11:03 compute-0 systemd[74060]: Activating special unit Exit the Session...
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped target Main User Target.
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped target Basic System.
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped target Paths.
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped target Sockets.
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped target Timers.
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 07:11:03 compute-0 systemd[74060]: Closed D-Bus User Message Bus Socket.
Nov 29 07:11:03 compute-0 systemd[74060]: Stopped Create User's Volatile Files and Directories.
Nov 29 07:11:03 compute-0 systemd[74060]: Removed slice User Application Slice.
Nov 29 07:11:03 compute-0 systemd[74060]: Reached target Shutdown.
Nov 29 07:11:03 compute-0 systemd[74060]: Finished Exit the Session.
Nov 29 07:11:03 compute-0 systemd[74060]: Reached target Exit the Session.
Nov 29 07:11:03 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 07:11:03 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 07:11:03 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 07:11:03 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 07:11:03 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 07:11:03 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 07:11:03 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 07:11:10 compute-0 podman[74114]: 2025-11-29 07:11:10.037396778 +0000 UTC m=+16.753038004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:10 compute-0 podman[74176]: 2025-11-29 07:11:10.117484995 +0000 UTC m=+0.050653517 container create d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5 (image=quay.io/ceph/ceph:v18, name=jovial_bardeen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3325488484-merged.mount: Deactivated successfully.
Nov 29 07:11:10 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 07:11:10 compute-0 systemd[1]: Started libpod-conmon-d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5.scope.
Nov 29 07:11:10 compute-0 podman[74176]: 2025-11-29 07:11:10.094464089 +0000 UTC m=+0.027632661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:10 compute-0 podman[74176]: 2025-11-29 07:11:10.240461057 +0000 UTC m=+0.173629609 container init d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5 (image=quay.io/ceph/ceph:v18, name=jovial_bardeen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:10 compute-0 podman[74176]: 2025-11-29 07:11:10.254524659 +0000 UTC m=+0.187693191 container start d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5 (image=quay.io/ceph/ceph:v18, name=jovial_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:10 compute-0 podman[74176]: 2025-11-29 07:11:10.258803415 +0000 UTC m=+0.191971967 container attach d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5 (image=quay.io/ceph/ceph:v18, name=jovial_bardeen, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:11:10 compute-0 jovial_bardeen[74193]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 07:11:10 compute-0 systemd[1]: libpod-d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5.scope: Deactivated successfully.
Nov 29 07:11:10 compute-0 podman[74176]: 2025-11-29 07:11:10.568003935 +0000 UTC m=+0.501172497 container died d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5 (image=quay.io/ceph/ceph:v18, name=jovial_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:10 compute-0 podman[74176]: 2025-11-29 07:11:10.61932712 +0000 UTC m=+0.552495642 container remove d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5 (image=quay.io/ceph/ceph:v18, name=jovial_bardeen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:10 compute-0 systemd[1]: libpod-conmon-d625caa45b6b38ba8246aa034eddb35dc066816e770fcfb5a2559176e92e4af5.scope: Deactivated successfully.
Nov 29 07:11:10 compute-0 podman[74210]: 2025-11-29 07:11:10.692897019 +0000 UTC m=+0.043635947 container create 1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc (image=quay.io/ceph/ceph:v18, name=musing_euler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:11:10 compute-0 systemd[1]: Started libpod-conmon-1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc.scope.
Nov 29 07:11:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:10 compute-0 podman[74210]: 2025-11-29 07:11:10.768072602 +0000 UTC m=+0.118811550 container init 1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc (image=quay.io/ceph/ceph:v18, name=musing_euler, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:10 compute-0 podman[74210]: 2025-11-29 07:11:10.675034654 +0000 UTC m=+0.025773582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:10 compute-0 podman[74210]: 2025-11-29 07:11:10.778013111 +0000 UTC m=+0.128752079 container start 1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc (image=quay.io/ceph/ceph:v18, name=musing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:10 compute-0 musing_euler[74226]: 167 167
Nov 29 07:11:10 compute-0 podman[74210]: 2025-11-29 07:11:10.78161005 +0000 UTC m=+0.132348978 container attach 1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc (image=quay.io/ceph/ceph:v18, name=musing_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:10 compute-0 systemd[1]: libpod-1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc.scope: Deactivated successfully.
Nov 29 07:11:10 compute-0 podman[74210]: 2025-11-29 07:11:10.783834489 +0000 UTC m=+0.134573437 container died 1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc (image=quay.io/ceph/ceph:v18, name=musing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:11:10 compute-0 podman[74210]: 2025-11-29 07:11:10.825308116 +0000 UTC m=+0.176047084 container remove 1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc (image=quay.io/ceph/ceph:v18, name=musing_euler, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:10 compute-0 systemd[1]: libpod-conmon-1a7ce2e1547bb53143c09492f66a503444dc1b6b187d54bed20ce37b157972fc.scope: Deactivated successfully.
Nov 29 07:11:10 compute-0 podman[74244]: 2025-11-29 07:11:10.915570619 +0000 UTC m=+0.053807683 container create 730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c (image=quay.io/ceph/ceph:v18, name=relaxed_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:11:10 compute-0 systemd[1]: Started libpod-conmon-730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c.scope.
Nov 29 07:11:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:10 compute-0 podman[74244]: 2025-11-29 07:11:10.979340951 +0000 UTC m=+0.117578035 container init 730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c (image=quay.io/ceph/ceph:v18, name=relaxed_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:11:10 compute-0 podman[74244]: 2025-11-29 07:11:10.988491941 +0000 UTC m=+0.126729025 container start 730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c (image=quay.io/ceph/ceph:v18, name=relaxed_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:10 compute-0 podman[74244]: 2025-11-29 07:11:10.992363746 +0000 UTC m=+0.130600800 container attach 730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c (image=quay.io/ceph/ceph:v18, name=relaxed_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:11:10 compute-0 podman[74244]: 2025-11-29 07:11:10.898639618 +0000 UTC m=+0.036876702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:11 compute-0 relaxed_heyrovsky[74260]: AQCPnCppEQjcABAAcQ9xUcHU9gTDDZTIfb1a9A==
Nov 29 07:11:11 compute-0 systemd[1]: libpod-730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74244]: 2025-11-29 07:11:11.019234816 +0000 UTC m=+0.157471880 container died 730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c (image=quay.io/ceph/ceph:v18, name=relaxed_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-045d4fa01f090b56e85f671fdb9c2a054230c06b1da6ccb71430cb84ba7d3dc2-merged.mount: Deactivated successfully.
Nov 29 07:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5bfec359101eafb430171d92b84d1225236351cef1868c6fc8267276b435e96-merged.mount: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74244]: 2025-11-29 07:11:11.054871583 +0000 UTC m=+0.193108647 container remove 730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c (image=quay.io/ceph/ceph:v18, name=relaxed_heyrovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:11 compute-0 systemd[1]: libpod-conmon-730d570ac7490226afd5b9b9e787902f36cc8bb1669d7cb5a9660550cefca67c.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74280]: 2025-11-29 07:11:11.125175984 +0000 UTC m=+0.047893863 container create 6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282 (image=quay.io/ceph/ceph:v18, name=compassionate_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:11:11 compute-0 systemd[1]: Started libpod-conmon-6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282.scope.
Nov 29 07:11:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:11 compute-0 podman[74280]: 2025-11-29 07:11:11.105564831 +0000 UTC m=+0.028282730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:11 compute-0 podman[74280]: 2025-11-29 07:11:11.200178192 +0000 UTC m=+0.122896081 container init 6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282 (image=quay.io/ceph/ceph:v18, name=compassionate_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:11 compute-0 podman[74280]: 2025-11-29 07:11:11.207653815 +0000 UTC m=+0.130371704 container start 6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282 (image=quay.io/ceph/ceph:v18, name=compassionate_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:11 compute-0 podman[74280]: 2025-11-29 07:11:11.211925411 +0000 UTC m=+0.134643270 container attach 6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282 (image=quay.io/ceph/ceph:v18, name=compassionate_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:11:11 compute-0 compassionate_shaw[74297]: AQCPnCppn4CzDRAA6iCyLiYzoIYN5UDGQRXFLg==
Nov 29 07:11:11 compute-0 systemd[1]: libpod-6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74280]: 2025-11-29 07:11:11.233820606 +0000 UTC m=+0.156538485 container died 6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282 (image=quay.io/ceph/ceph:v18, name=compassionate_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:11 compute-0 podman[74280]: 2025-11-29 07:11:11.281389258 +0000 UTC m=+0.204107147 container remove 6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282 (image=quay.io/ceph/ceph:v18, name=compassionate_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:11 compute-0 systemd[1]: libpod-conmon-6d639f3834497ceaebc7f94cabb7331e8d0307f1ab74ca4d8da6d959af8da282.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74316]: 2025-11-29 07:11:11.36356528 +0000 UTC m=+0.054115911 container create ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e (image=quay.io/ceph/ceph:v18, name=sweet_lewin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:11:11 compute-0 systemd[1]: Started libpod-conmon-ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e.scope.
Nov 29 07:11:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:11 compute-0 podman[74316]: 2025-11-29 07:11:11.343247959 +0000 UTC m=+0.033798620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:11 compute-0 podman[74316]: 2025-11-29 07:11:11.447048219 +0000 UTC m=+0.137598860 container init ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e (image=quay.io/ceph/ceph:v18, name=sweet_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:11:11 compute-0 podman[74316]: 2025-11-29 07:11:11.456134395 +0000 UTC m=+0.146685016 container start ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e (image=quay.io/ceph/ceph:v18, name=sweet_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:11:11 compute-0 podman[74316]: 2025-11-29 07:11:11.459443366 +0000 UTC m=+0.149994007 container attach ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e (image=quay.io/ceph/ceph:v18, name=sweet_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:11 compute-0 sweet_lewin[74332]: AQCPnCppDIwqHBAA1ZuFR6iUoDOGVFqf0zb+9w==
Nov 29 07:11:11 compute-0 systemd[1]: libpod-ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74316]: 2025-11-29 07:11:11.476480199 +0000 UTC m=+0.167030820 container died ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e (image=quay.io/ceph/ceph:v18, name=sweet_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:11 compute-0 podman[74316]: 2025-11-29 07:11:11.511809398 +0000 UTC m=+0.202360019 container remove ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e (image=quay.io/ceph/ceph:v18, name=sweet_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:11:11 compute-0 systemd[1]: libpod-conmon-ffaa8577af373110471a47b67e4364a40258831624259b61dd55cae3eca9436e.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74351]: 2025-11-29 07:11:11.586878708 +0000 UTC m=+0.050570715 container create 68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b (image=quay.io/ceph/ceph:v18, name=zen_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:11 compute-0 systemd[1]: Started libpod-conmon-68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b.scope.
Nov 29 07:11:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d931a310569d9cec58a9824e045ca6555c0d2b8c0bb3f2f608519a1e33074a1a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:11 compute-0 podman[74351]: 2025-11-29 07:11:11.563956576 +0000 UTC m=+0.027648653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:11 compute-0 podman[74351]: 2025-11-29 07:11:11.668496365 +0000 UTC m=+0.132188462 container init 68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b (image=quay.io/ceph/ceph:v18, name=zen_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:11 compute-0 podman[74351]: 2025-11-29 07:11:11.680017028 +0000 UTC m=+0.143709045 container start 68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b (image=quay.io/ceph/ceph:v18, name=zen_davinci, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:11 compute-0 podman[74351]: 2025-11-29 07:11:11.6852086 +0000 UTC m=+0.148900687 container attach 68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b (image=quay.io/ceph/ceph:v18, name=zen_davinci, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:11:11 compute-0 zen_davinci[74366]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 07:11:11 compute-0 zen_davinci[74366]: setting min_mon_release = pacific
Nov 29 07:11:11 compute-0 zen_davinci[74366]: /usr/bin/monmaptool: set fsid to 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:11 compute-0 zen_davinci[74366]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 29 07:11:11 compute-0 systemd[1]: libpod-68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74351]: 2025-11-29 07:11:11.72462421 +0000 UTC m=+0.188316227 container died 68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b (image=quay.io/ceph/ceph:v18, name=zen_davinci, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:11:11 compute-0 podman[74351]: 2025-11-29 07:11:11.762909131 +0000 UTC m=+0.226601118 container remove 68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b (image=quay.io/ceph/ceph:v18, name=zen_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:11:11 compute-0 systemd[1]: libpod-conmon-68e06e0f41fa962bdb798846bbdfb766a0b8a96f536d2d4f6ae1e6490cbc088b.scope: Deactivated successfully.
Nov 29 07:11:11 compute-0 podman[74388]: 2025-11-29 07:11:11.847933381 +0000 UTC m=+0.061390929 container create dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa (image=quay.io/ceph/ceph:v18, name=vigilant_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:11 compute-0 systemd[1]: Started libpod-conmon-dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa.scope.
Nov 29 07:11:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:11 compute-0 podman[74388]: 2025-11-29 07:11:11.815905781 +0000 UTC m=+0.029363409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdbb64c1833869e3f92a01e0a8b24a41cdcbfe8e62bfdebaa4fb92ac3a9e09de/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdbb64c1833869e3f92a01e0a8b24a41cdcbfe8e62bfdebaa4fb92ac3a9e09de/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdbb64c1833869e3f92a01e0a8b24a41cdcbfe8e62bfdebaa4fb92ac3a9e09de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdbb64c1833869e3f92a01e0a8b24a41cdcbfe8e62bfdebaa4fb92ac3a9e09de/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:11 compute-0 podman[74388]: 2025-11-29 07:11:11.921448899 +0000 UTC m=+0.134906477 container init dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa (image=quay.io/ceph/ceph:v18, name=vigilant_elion, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:11 compute-0 podman[74388]: 2025-11-29 07:11:11.927138543 +0000 UTC m=+0.140596091 container start dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa (image=quay.io/ceph/ceph:v18, name=vigilant_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:11 compute-0 podman[74388]: 2025-11-29 07:11:11.931480131 +0000 UTC m=+0.144937679 container attach dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa (image=quay.io/ceph/ceph:v18, name=vigilant_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:11:12 compute-0 systemd[1]: libpod-dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa.scope: Deactivated successfully.
Nov 29 07:11:12 compute-0 podman[74388]: 2025-11-29 07:11:12.021487596 +0000 UTC m=+0.234945144 container died dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa (image=quay.io/ceph/ceph:v18, name=vigilant_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdbb64c1833869e3f92a01e0a8b24a41cdcbfe8e62bfdebaa4fb92ac3a9e09de-merged.mount: Deactivated successfully.
Nov 29 07:11:12 compute-0 podman[74388]: 2025-11-29 07:11:12.078116605 +0000 UTC m=+0.291574163 container remove dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa (image=quay.io/ceph/ceph:v18, name=vigilant_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:12 compute-0 systemd[1]: libpod-conmon-dd955ecc717f485d3fca2a229323e100419241b10ff669a178bca4803e4e57fa.scope: Deactivated successfully.
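
The create → init → start → attach → died → remove sequence above, bracketed by the libpod-conmon-*.scope units, is one short-lived helper container run to completion and then deleted, consistent with a cephadm bootstrap. The same lifecycle can be replayed from podman's own event log; a minimal sketch, with an illustrative 2-minute window:

    $ podman events --stream=false --since 2m \
        --filter event=create --filter event=died --filter event=remove
    # one line per lifecycle event, matching the journal entries above

    $ podman ps -a --filter label=ceph=True
    # usually empty between steps: the helpers are removed as soon as they exit
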
Nov 29 07:11:12 compute-0 systemd[1]: Reloading.
Nov 29 07:11:12 compute-0 systemd-sysv-generator[74474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:12 compute-0 systemd-rc-local-generator[74471]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:12 compute-0 systemd[1]: Reloading.
Nov 29 07:11:12 compute-0 systemd-rc-local-generator[74504]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:12 compute-0 systemd-sysv-generator[74508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:12 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 07:11:12 compute-0 systemd[1]: Reloading.
Nov 29 07:11:12 compute-0 systemd-sysv-generator[74542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:12 compute-0 systemd-rc-local-generator[74539]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:12 compute-0 systemd[1]: Reached target Ceph cluster 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:11:12 compute-0 systemd[1]: Reloading.
Nov 29 07:11:13 compute-0 systemd-rc-local-generator[74579]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:13 compute-0 systemd-sysv-generator[74582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:13 compute-0 systemd[1]: Reloading.
Nov 29 07:11:13 compute-0 systemd-rc-local-generator[74621]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:13 compute-0 systemd-sysv-generator[74624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
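
The repeated "Reloading." entries are daemon-reloads issued while unit files for the new Ceph services are installed; every reload reruns the generators, which is why the sysv-generator and rc-local-generator messages recur verbatim. The wrapper unit that systemd-sysv-generator synthesizes for the legacy network script can be inspected directly:

    $ systemctl cat network.service
    # systemctl prints the unit's path first; for a generated unit it lives under
    # /run/systemd/generator.late/, not in /usr/lib/systemd/system/

The warning itself is harmless and disappears only once the package ships a native unit file.
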
Nov 29 07:11:13 compute-0 systemd[1]: Created slice Slice /system/ceph-14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:11:13 compute-0 systemd[1]: Reached target System Time Set.
Nov 29 07:11:13 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 29 07:11:13 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:13 compute-0 podman[74679]: 2025-11-29 07:11:13.89944667 +0000 UTC m=+0.059616291 container create 3d1c12500a38ac2909268bbe654674e449195b4840d6a42b9bb8decd0de475ed (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de80a9d9eff679814f7cb459bcce27f6a10f52c717ca0900322bdc045211450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de80a9d9eff679814f7cb459bcce27f6a10f52c717ca0900322bdc045211450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de80a9d9eff679814f7cb459bcce27f6a10f52c717ca0900322bdc045211450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de80a9d9eff679814f7cb459bcce27f6a10f52c717ca0900322bdc045211450/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:13 compute-0 podman[74679]: 2025-11-29 07:11:13.875323225 +0000 UTC m=+0.035492886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:13 compute-0 podman[74679]: 2025-11-29 07:11:13.992873289 +0000 UTC m=+0.153042930 container init 3d1c12500a38ac2909268bbe654674e449195b4840d6a42b9bb8decd0de475ed (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:13 compute-0 podman[74679]: 2025-11-29 07:11:13.999461957 +0000 UTC m=+0.159631568 container start 3d1c12500a38ac2909268bbe654674e449195b4840d6a42b9bb8decd0de475ed (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:14 compute-0 bash[74679]: 3d1c12500a38ac2909268bbe654674e449195b4840d6a42b9bb8decd0de475ed
Nov 29 07:11:14 compute-0 systemd[1]: Started Ceph mon.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1.
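
From this point the monitor is a managed service rather than a one-shot helper. cephadm names such units ceph-<fsid>@<daemon-type>.<daemon-id>.service, so with the fsid and hostname from this log the daemon can be queried the usual way; a minimal sketch:

    $ systemctl status ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mon.compute-0.service
    $ journalctl -u ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mon.compute-0.service -f
    # -f follows the same ceph-mon[74699] stream shown below
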
Nov 29 07:11:14 compute-0 ceph-mon[74699]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: pidfile_write: ignore empty --pid-file
Nov 29 07:11:14 compute-0 ceph-mon[74699]: load: jerasure load: lrc 
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Git sha 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: DB SUMMARY
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: DB Session ID:  GS6LYMCHINJ0VD89X9TY
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                                     Options.env: 0x55af49af2c40
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                                Options.info_log: 0x55af4b838e80
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                                 Options.wal_dir: 
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                    Options.write_buffer_manager: 0x55af4b848b40
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                               Options.row_cache: None
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                              Options.wal_filter: None
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.wal_compression: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.max_background_jobs: 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Compression algorithms supported:
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kZSTD supported: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:           Options.merge_operator: 
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:        Options.compaction_filter: None
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55af4b838a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55af4b8311f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:          Options.compression: NoCompression
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.num_levels: 7
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 992a36cf-2c53-4c0e-8733-ad21b6ee24da
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400274058020, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400274060627, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "GS6LYMCHINJ0VD89X9TY", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400274060752, "job": 1, "event": "recovery_finished"}
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55af4b85ae00
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: DB pointer 0x55af4b8e4000
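
Everything from "DB SUMMARY" down to the "DB pointer" line is RocksDB echoing its effective options as it opens the monitor store: Ceph's built-in tuning plus any override in the mon_rocksdb_options setting (the 32 MiB write_buffer_size and 512 MiB BinnedLRUCache capacity in the dump above come from that tuning). A minimal sketch for checking the override once the cluster answers queries; an empty result means compiled-in defaults:

    $ ceph config get mon mon_rocksdb_options
    $ ceph daemon mon.compute-0 config show | grep rocksdb
    # the second form needs the admin socket, e.g. from inside 'cephadm shell' on this host
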
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:11:14 compute-0 ceph-mon[74699]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55af4b8311f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
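
The "------- DUMPING STATS -------" block repeats every stats_dump_period_sec (600 s per the options above). At first boot nearly everything is zero; the single 1.90 KB L0 file in the compaction table is the flush produced by the WAL recovery logged just before it. If the monitor store ever grows large in a real cluster, it can be compacted on demand; a minimal sketch:

    $ ceph tell mon.compute-0 compact
    # triggers a RocksDB compaction of this monitor's store.db
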
Nov 29 07:11:14 compute-0 ceph-mon[74699]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@-1(???) e0 preinit fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 07:11:14 compute-0 ceph-mon[74699]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:14 compute-0 ceph-mon[74699]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
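
The backfillfull/full/nearfull ratios 0.90/0.95/0.85 written into this first pending osdmap are the stock defaults. They remain runtime-tunable once the cluster is up; a minimal sketch (the values shown are the same defaults):

    $ ceph osd set-nearfull-ratio 0.85
    $ ceph osd set-backfillfull-ratio 0.90
    $ ceph osd set-full-ratio 0.95
    $ ceph osd dump | grep ratio    # confirms the active values
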
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 07:11:14 compute-0 ceph-mon[74699]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:14 compute-0 ceph-mon[74699]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:11:11.967732Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864328,os=Linux}
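
The mgrc update_daemon_metadata line is the monitor registering its own hardware and OS inventory; note device_ids= is empty, matching the "no unique device id for vda" warnings above (a virtio disk with no model or serial). The same record can be read back later with:

    $ ceph mon metadata compute-0
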
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).mds e1 new map
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
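
print_map shows an empty epoch-1 FSMap: multiple-filesystem support is flagged on (enable_multiple: 1,1) but no CephFS filesystem exists yet. The client-side view agrees; a minimal sketch:

    $ ceph fs ls
    No filesystems enabled
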
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 07:11:14 compute-0 ceph-mon[74699]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 07:11:14 compute-0 podman[74700]: 2025-11-29 07:11:14.134647461 +0000 UTC m=+0.075759159 container create 0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274 (image=quay.io/ceph/ceph:v18, name=relaxed_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mkfs 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 07:11:14 compute-0 ceph-mon[74699]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:11:14 compute-0 ceph-mon[74699]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
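
With a single monitor there is no peer to probe, so each election is won standalone and quorum is immediately the one rank {0}, as the leader messages above show. Quorum state can be confirmed from any client; a minimal sketch:

    $ ceph mon stat
    $ ceph quorum_status --format json-pretty
    # reports the leader name, quorum ranks, and monmap epoch seen in this log
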
Nov 29 07:11:14 compute-0 systemd[1]: Started libpod-conmon-0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274.scope.
Nov 29 07:11:14 compute-0 podman[74700]: 2025-11-29 07:11:14.112690174 +0000 UTC m=+0.053801852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bef37af07c0c9749cc541420a9390dc4babe19d65163f6f7f270367e43b4ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bef37af07c0c9749cc541420a9390dc4babe19d65163f6f7f270367e43b4ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9bef37af07c0c9749cc541420a9390dc4babe19d65163f6f7f270367e43b4ec/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:14 compute-0 podman[74700]: 2025-11-29 07:11:14.245397749 +0000 UTC m=+0.186509467 container init 0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274 (image=quay.io/ceph/ceph:v18, name=relaxed_dijkstra, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:11:14 compute-0 podman[74700]: 2025-11-29 07:11:14.255786302 +0000 UTC m=+0.196897950 container start 0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274 (image=quay.io/ceph/ceph:v18, name=relaxed_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:11:14 compute-0 podman[74700]: 2025-11-29 07:11:14.260239352 +0000 UTC m=+0.201351030 container attach 0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274 (image=quay.io/ceph/ceph:v18, name=relaxed_dijkstra, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:11:14 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 07:11:14 compute-0 ceph-mon[74699]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656237158' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:   cluster:
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     id:     14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     health: HEALTH_OK
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:  
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:   services:
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     mon: 1 daemons, quorum compute-0 (age 0.569714s)
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     mgr: no daemons active
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     osd: 0 osds: 0 up, 0 in
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:  
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:   data:
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     pools:   0 pools, 0 pgs
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     objects: 0 objects, 0 B
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     usage:   0 B used, 0 B / 0 B avail
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:     pgs:     
Nov 29 07:11:14 compute-0 relaxed_dijkstra[74755]:  
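
The relaxed_dijkstra lines are the output of a "ceph status" run in yet another throwaway container; podman assigns random adjective_surname names when no --name is given, and the died/remove events just below indicate it was launched to run once and be discarded. The equivalent interactive check on a cephadm host is a minimal one-liner:

    $ cephadm shell -- ceph -s
    # runs 'ceph status' inside the same quay.io/ceph/ceph:v18 image with the host's
    # /etc/ceph config and keyring mounted in
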
Nov 29 07:11:14 compute-0 systemd[1]: libpod-0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274.scope: Deactivated successfully.
Nov 29 07:11:14 compute-0 podman[74700]: 2025-11-29 07:11:14.714551167 +0000 UTC m=+0.655662875 container died 0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274 (image=quay.io/ceph/ceph:v18, name=relaxed_dijkstra, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9bef37af07c0c9749cc541420a9390dc4babe19d65163f6f7f270367e43b4ec-merged.mount: Deactivated successfully.
Nov 29 07:11:14 compute-0 podman[74700]: 2025-11-29 07:11:14.773339504 +0000 UTC m=+0.714451162 container remove 0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274 (image=quay.io/ceph/ceph:v18, name=relaxed_dijkstra, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:14 compute-0 systemd[1]: libpod-conmon-0b47a356f22b1fe6ac1965dc49d1d44d002db5c528d84cbbcac53478f33da274.scope: Deactivated successfully.
Nov 29 07:11:14 compute-0 podman[74792]: 2025-11-29 07:11:14.881491462 +0000 UTC m=+0.072787539 container create 3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231 (image=quay.io/ceph/ceph:v18, name=jolly_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:14 compute-0 systemd[1]: Started libpod-conmon-3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231.scope.
Nov 29 07:11:14 compute-0 podman[74792]: 2025-11-29 07:11:14.852530375 +0000 UTC m=+0.043826522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5203b8d5cbf759b68b5a125d95ffc4a196ee01158e9dc5c1a7b0c7cf401880ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5203b8d5cbf759b68b5a125d95ffc4a196ee01158e9dc5c1a7b0c7cf401880ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5203b8d5cbf759b68b5a125d95ffc4a196ee01158e9dc5c1a7b0c7cf401880ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5203b8d5cbf759b68b5a125d95ffc4a196ee01158e9dc5c1a7b0c7cf401880ad/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:14 compute-0 podman[74792]: 2025-11-29 07:11:14.978399785 +0000 UTC m=+0.169695892 container init 3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231 (image=quay.io/ceph/ceph:v18, name=jolly_bhabha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:14 compute-0 podman[74792]: 2025-11-29 07:11:14.989328952 +0000 UTC m=+0.180625019 container start 3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231 (image=quay.io/ceph/ceph:v18, name=jolly_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:11:14 compute-0 podman[74792]: 2025-11-29 07:11:14.993287489 +0000 UTC m=+0.184583556 container attach 3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231 (image=quay.io/ceph/ceph:v18, name=jolly_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:15 compute-0 ceph-mon[74699]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:11:15 compute-0 ceph-mon[74699]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:11:15 compute-0 ceph-mon[74699]: fsmap 
Nov 29 07:11:15 compute-0 ceph-mon[74699]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:11:15 compute-0 ceph-mon[74699]: mgrmap e1: no daemons active
Nov 29 07:11:15 compute-0 ceph-mon[74699]: from='client.? 192.168.122.100:0/656237158' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 07:11:15 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 07:11:15 compute-0 ceph-mon[74699]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1130034308' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:11:15 compute-0 ceph-mon[74699]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1130034308' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:11:15 compute-0 jolly_bhabha[74809]: 
Nov 29 07:11:15 compute-0 jolly_bhabha[74809]: [global]
Nov 29 07:11:15 compute-0 jolly_bhabha[74809]:         fsid = 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:15 compute-0 jolly_bhabha[74809]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 29 07:11:15 compute-0 jolly_bhabha[74809]:         osd_crush_chooseleaf_type = 0
Nov 29 07:11:15 compute-0 systemd[1]: libpod-3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231.scope: Deactivated successfully.
Nov 29 07:11:15 compute-0 podman[74792]: 2025-11-29 07:11:15.399738842 +0000 UTC m=+0.591034949 container died 3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231 (image=quay.io/ceph/ceph:v18, name=jolly_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5203b8d5cbf759b68b5a125d95ffc4a196ee01158e9dc5c1a7b0c7cf401880ad-merged.mount: Deactivated successfully.
Nov 29 07:11:15 compute-0 podman[74792]: 2025-11-29 07:11:15.451981292 +0000 UTC m=+0.643277379 container remove 3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231 (image=quay.io/ceph/ceph:v18, name=jolly_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:15 compute-0 systemd[1]: libpod-conmon-3f142ba8bf627ecbefac0e9d8efb957b93949ffedf091aa59fec362db308f231.scope: Deactivated successfully.
Nov 29 07:11:15 compute-0 podman[74846]: 2025-11-29 07:11:15.518266604 +0000 UTC m=+0.038959300 container create 0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da (image=quay.io/ceph/ceph:v18, name=blissful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:15 compute-0 systemd[1]: Started libpod-conmon-0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da.scope.
Nov 29 07:11:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/157bd6c59dbc75bdbb8c2797f72d4206b8ed129a4968150d25d920b547bdab9b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/157bd6c59dbc75bdbb8c2797f72d4206b8ed129a4968150d25d920b547bdab9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/157bd6c59dbc75bdbb8c2797f72d4206b8ed129a4968150d25d920b547bdab9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/157bd6c59dbc75bdbb8c2797f72d4206b8ed129a4968150d25d920b547bdab9b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:15 compute-0 podman[74846]: 2025-11-29 07:11:15.595891212 +0000 UTC m=+0.116583998 container init 0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da (image=quay.io/ceph/ceph:v18, name=blissful_jackson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:11:15 compute-0 podman[74846]: 2025-11-29 07:11:15.500518911 +0000 UTC m=+0.021211627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:15 compute-0 podman[74846]: 2025-11-29 07:11:15.608711691 +0000 UTC m=+0.129404387 container start 0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da (image=quay.io/ceph/ceph:v18, name=blissful_jackson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:15 compute-0 podman[74846]: 2025-11-29 07:11:15.612194845 +0000 UTC m=+0.132887621 container attach 0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da (image=quay.io/ceph/ceph:v18, name=blissful_jackson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:16 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:11:16 compute-0 ceph-mon[74699]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486844061' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:16 compute-0 systemd[1]: libpod-0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da.scope: Deactivated successfully.
Nov 29 07:11:16 compute-0 podman[74846]: 2025-11-29 07:11:16.023700596 +0000 UTC m=+0.544393312 container died 0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da (image=quay.io/ceph/ceph:v18, name=blissful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-157bd6c59dbc75bdbb8c2797f72d4206b8ed129a4968150d25d920b547bdab9b-merged.mount: Deactivated successfully.
Nov 29 07:11:16 compute-0 podman[74846]: 2025-11-29 07:11:16.069794318 +0000 UTC m=+0.590487004 container remove 0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da (image=quay.io/ceph/ceph:v18, name=blissful_jackson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:11:16 compute-0 systemd[1]: libpod-conmon-0e55b8b85e4afba30c32d438f5d5fc432f38be9fbb709381f3064046b4e836da.scope: Deactivated successfully.
Nov 29 07:11:16 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:11:16 compute-0 ceph-mon[74699]: from='client.? 192.168.122.100:0/1130034308' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:11:16 compute-0 ceph-mon[74699]: from='client.? 192.168.122.100:0/1130034308' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:11:16 compute-0 ceph-mon[74699]: from='client.? 192.168.122.100:0/3486844061' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:11:16 compute-0 ceph-mon[74699]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 07:11:16 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0[74695]: 2025-11-29T07:11:16.305+0000 7f69c2405640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 07:11:16 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 07:11:16 compute-0 ceph-mon[74699]: mon.compute-0@0(leader) e1 shutdown
Nov 29 07:11:16 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0[74695]: 2025-11-29T07:11:16.305+0000 7f69c2405640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 07:11:16 compute-0 ceph-mon[74699]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:11:16 compute-0 ceph-mon[74699]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:11:16 compute-0 podman[74928]: 2025-11-29 07:11:16.39671931 +0000 UTC m=+0.124010110 container died 3d1c12500a38ac2909268bbe654674e449195b4840d6a42b9bb8decd0de475ed (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2de80a9d9eff679814f7cb459bcce27f6a10f52c717ca0900322bdc045211450-merged.mount: Deactivated successfully.
Nov 29 07:11:16 compute-0 podman[74928]: 2025-11-29 07:11:16.441282981 +0000 UTC m=+0.168573781 container remove 3d1c12500a38ac2909268bbe654674e449195b4840d6a42b9bb8decd0de475ed (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:11:16 compute-0 bash[74928]: ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0
Nov 29 07:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 07:11:16 compute-0 systemd[1]: ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mon.compute-0.service: Deactivated successfully.
Nov 29 07:11:16 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:11:16 compute-0 systemd[1]: ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mon.compute-0.service: Consumed 1.168s CPU time.
Nov 29 07:11:16 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:11:16 compute-0 podman[75031]: 2025-11-29 07:11:16.911790495 +0000 UTC m=+0.046628538 container create 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26060044e6252263c619e6ad11e9a200bdd3c1256baca568245a9205489f9dc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26060044e6252263c619e6ad11e9a200bdd3c1256baca568245a9205489f9dc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26060044e6252263c619e6ad11e9a200bdd3c1256baca568245a9205489f9dc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26060044e6252263c619e6ad11e9a200bdd3c1256baca568245a9205489f9dc2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:16 compute-0 podman[75031]: 2025-11-29 07:11:16.984143401 +0000 UTC m=+0.118981474 container init 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:16 compute-0 podman[75031]: 2025-11-29 07:11:16.89172969 +0000 UTC m=+0.026567733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:16 compute-0 podman[75031]: 2025-11-29 07:11:16.994284266 +0000 UTC m=+0.129122309 container start 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:16 compute-0 bash[75031]: 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90
Nov 29 07:11:17 compute-0 systemd[1]: Started Ceph mon.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:11:17 compute-0 ceph-mon[75050]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: pidfile_write: ignore empty --pid-file
Nov 29 07:11:17 compute-0 ceph-mon[75050]: load: jerasure load: lrc 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Git sha 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: DB SUMMARY
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: DB Session ID:  TD18LOQH4ZZ65OBF52SL
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55676 ; 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                                     Options.env: 0x55bdb3d7dc40
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                                Options.info_log: 0x55bdb5ed3040
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                                 Options.wal_dir: 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                    Options.write_buffer_manager: 0x55bdb5ee2b40
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                               Options.row_cache: None
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                              Options.wal_filter: None
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.wal_compression: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.max_background_jobs: 2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Compression algorithms supported:
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kZSTD supported: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:           Options.merge_operator: 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:        Options.compaction_filter: None
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdb5ed2c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdb5ecb1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:          Options.compression: NoCompression
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.num_levels: 7
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 992a36cf-2c53-4c0e-8733-ad21b6ee24da
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400277035279, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400277038014, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53797, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51386, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400277, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400277038122, "job": 1, "event": "recovery_finished"}
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bdb5ef4e00
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: DB pointer 0x55bdb6006000
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.99 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.99 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdb5ecb1f0#2 capacity: 512.00 MB usage: 1.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
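[annotation] The block above is RocksDB's startup output from the monitor's key-value store in /var/lib/ceph/mon/ceph-compute-0/store.db: WAL recovery finishes, manifest 15 is created, the recovered 000009.log is deleted, and an initial "DUMPING STATS" report is printed while the database is still nearly empty (two L0 files, ~56 KB). The EVENT_LOG_v1 records carry a machine-readable JSON payload; a minimal extraction sketch, assuming only the marker string visible in the lines above:

    import json
    from typing import Optional

    def parse_event_log(line: str) -> Optional[dict]:
        """Pull the JSON payload out of a 'rocksdb: EVENT_LOG_v1 {...}' journal line."""
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        if idx == -1:
            return None  # not an EVENT_LOG_v1 record
        return json.loads(line[idx + len(marker):])

    # For the table_file_creation record above, the result is a dict with
    # evt["event"] == "table_file_creation" and evt["file_size"] == 55257.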
Nov 29 07:11:17 compute-0 ceph-mon[75050]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???) e1 preinit fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???).mds e1 new map
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 07:11:17 compute-0 ceph-mon[75050]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 07:11:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:11:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 07:11:17 compute-0 podman[75051]: 2025-11-29 07:11:17.11000583 +0000 UTC m=+0.068434100 container create 1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add (image=quay.io/ceph/ceph:v18, name=wonderful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 07:11:17 compute-0 ceph-mon[75050]: fsmap 
Nov 29 07:11:17 compute-0 ceph-mon[75050]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mgrmap e1: no daemons active
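[annotation] mon.compute-0 has just won a standalone election: a single monitor at rank 0 forms the whole quorum, and every map (monmap, osdmap, fsmap, mgrmap) is at epoch 1, with no OSDs, filesystems, or mgr daemons yet. The monmap entry pairs a msgr2 endpoint (v2, port 3300) with a legacy msgr1 endpoint (v1, port 6789); a small sketch for splitting that addrvec string, its format inferred only from the monmap line above (the trailing /0 is the nonce):

    import re
    from typing import List, Tuple

    def parse_addrvec(addrvec: str) -> List[Tuple[str, str, int]]:
        """Split '[v2:host:port/nonce,v1:host:port/nonce]' into (proto, host, port)."""
        return [(m.group(1), m.group(2), int(m.group(3)))
                for m in re.finditer(r"(v[12]):([\d.]+):(\d+)/\d+", addrvec)]

    # parse_addrvec("[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]")
    # -> [('v2', '192.168.122.100', 3300), ('v1', '192.168.122.100', 6789)]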
Nov 29 07:11:17 compute-0 systemd[1]: Started libpod-conmon-1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add.scope.
Nov 29 07:11:17 compute-0 podman[75051]: 2025-11-29 07:11:17.084684503 +0000 UTC m=+0.043112863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8391f94424cac63a241d62dba6d7e49bdb21f070802386721f30d728ffd7c56/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8391f94424cac63a241d62dba6d7e49bdb21f070802386721f30d728ffd7c56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8391f94424cac63a241d62dba6d7e49bdb21f070802386721f30d728ffd7c56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:17 compute-0 podman[75051]: 2025-11-29 07:11:17.213357239 +0000 UTC m=+0.171785529 container init 1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add (image=quay.io/ceph/ceph:v18, name=wonderful_hofstadter, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:17 compute-0 podman[75051]: 2025-11-29 07:11:17.220940934 +0000 UTC m=+0.179369244 container start 1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add (image=quay.io/ceph/ceph:v18, name=wonderful_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:11:17 compute-0 podman[75051]: 2025-11-29 07:11:17.225739825 +0000 UTC m=+0.184168115 container attach 1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add (image=quay.io/ceph/ceph:v18, name=wonderful_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 07:11:17 compute-0 systemd[1]: libpod-1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add.scope: Deactivated successfully.
Nov 29 07:11:17 compute-0 podman[75131]: 2025-11-29 07:11:17.727446576 +0000 UTC m=+0.045549019 container died 1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add (image=quay.io/ceph/ceph:v18, name=wonderful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8391f94424cac63a241d62dba6d7e49bdb21f070802386721f30d728ffd7c56-merged.mount: Deactivated successfully.
Nov 29 07:11:17 compute-0 podman[75131]: 2025-11-29 07:11:17.781875655 +0000 UTC m=+0.099978088 container remove 1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add (image=quay.io/ceph/ceph:v18, name=wonderful_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:17 compute-0 systemd[1]: libpod-conmon-1bf3aa6dae977f381717b2a06e5a88067a771ece7375a1d5ac5e52e104978add.scope: Deactivated successfully.
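[annotation] The create/init/start/attach/died/remove sequence for 1bf3aa6d... is one short-lived helper container: judging from the mon's handle_command line, this run carried a 'config set public_network' call into a throwaway quay.io/ceph/ceph:v18 container, roughly what a hand-typed 'podman run --rm' would do. A sketch that groups these lifecycle events by container ID when scanning the journal; the regex is an assumption about the exact 'container <event> <id>' wording shown above:

    import re
    from collections import defaultdict
    from typing import Dict, Iterable, List

    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    def container_lifecycles(lines: Iterable[str]) -> Dict[str, List[str]]:
        """Map container ID -> ordered list of podman lifecycle events."""
        events: Dict[str, List[str]] = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group(2)].append(m.group(1))
        return dict(events)

    # A completed helper run yields the full sequence seen above:
    # ['create', 'init', 'start', 'attach', 'died', 'remove']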
Nov 29 07:11:17 compute-0 podman[75146]: 2025-11-29 07:11:17.859250497 +0000 UTC m=+0.044390677 container create a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057 (image=quay.io/ceph/ceph:v18, name=jolly_franklin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:17 compute-0 systemd[1]: Started libpod-conmon-a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057.scope.
Nov 29 07:11:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e7fb4e2bada6340e290c15900f12687c4003084786da6489d9bb10c66d3852/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e7fb4e2bada6340e290c15900f12687c4003084786da6489d9bb10c66d3852/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e7fb4e2bada6340e290c15900f12687c4003084786da6489d9bb10c66d3852/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:17 compute-0 podman[75146]: 2025-11-29 07:11:17.930779871 +0000 UTC m=+0.115920071 container init a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057 (image=quay.io/ceph/ceph:v18, name=jolly_franklin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:11:17 compute-0 podman[75146]: 2025-11-29 07:11:17.8402399 +0000 UTC m=+0.025380100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:17 compute-0 podman[75146]: 2025-11-29 07:11:17.935505038 +0000 UTC m=+0.120645238 container start a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057 (image=quay.io/ceph/ceph:v18, name=jolly_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:17 compute-0 podman[75146]: 2025-11-29 07:11:17.939801306 +0000 UTC m=+0.124941486 container attach a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057 (image=quay.io/ceph/ceph:v18, name=jolly_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:11:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 29 07:11:18 compute-0 systemd[1]: libpod-a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057.scope: Deactivated successfully.
Nov 29 07:11:18 compute-0 podman[75146]: 2025-11-29 07:11:18.35992959 +0000 UTC m=+0.545069800 container died a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057 (image=quay.io/ceph/ceph:v18, name=jolly_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e7fb4e2bada6340e290c15900f12687c4003084786da6489d9bb10c66d3852-merged.mount: Deactivated successfully.
Nov 29 07:11:18 compute-0 podman[75146]: 2025-11-29 07:11:18.423518708 +0000 UTC m=+0.608658888 container remove a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057 (image=quay.io/ceph/ceph:v18, name=jolly_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:11:18 compute-0 systemd[1]: libpod-conmon-a528542c687e4dff7475085e8a5170c6fb584dcc9913498e0290a2f4aeaad057.scope: Deactivated successfully.
Nov 29 07:11:18 compute-0 systemd[1]: Reloading.
Nov 29 07:11:18 compute-0 systemd-sysv-generator[75229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:18 compute-0 systemd-rc-local-generator[75224]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:18 compute-0 systemd[1]: Reloading.
Nov 29 07:11:18 compute-0 systemd-sysv-generator[75271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:11:18 compute-0 systemd-rc-local-generator[75268]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:11:19 compute-0 systemd[1]: Starting Ceph mgr.compute-0.kzdpag for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:11:19 compute-0 podman[75326]: 2025-11-29 07:11:19.298592008 +0000 UTC m=+0.050754111 container create cf5b754473e023a6c38808405cba62d535fd2d4dd4ad3580b8df218d673d96ea (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0e7bf78f942e6aa3195dd20f290c9411868ac62665f0d752dda2d23b56c9a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0e7bf78f942e6aa3195dd20f290c9411868ac62665f0d752dda2d23b56c9a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0e7bf78f942e6aa3195dd20f290c9411868ac62665f0d752dda2d23b56c9a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0e7bf78f942e6aa3195dd20f290c9411868ac62665f0d752dda2d23b56c9a6/merged/var/lib/ceph/mgr/ceph-compute-0.kzdpag supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:19 compute-0 podman[75326]: 2025-11-29 07:11:19.358357454 +0000 UTC m=+0.110519607 container init cf5b754473e023a6c38808405cba62d535fd2d4dd4ad3580b8df218d673d96ea (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:11:19 compute-0 podman[75326]: 2025-11-29 07:11:19.363690655 +0000 UTC m=+0.115852798 container start cf5b754473e023a6c38808405cba62d535fd2d4dd4ad3580b8df218d673d96ea (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:19 compute-0 bash[75326]: cf5b754473e023a6c38808405cba62d535fd2d4dd4ad3580b8df218d673d96ea
Nov 29 07:11:19 compute-0 podman[75326]: 2025-11-29 07:11:19.276420989 +0000 UTC m=+0.028583132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:19 compute-0 systemd[1]: Started Ceph mgr.compute-0.kzdpag for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:11:19 compute-0 ceph-mgr[75345]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:11:19 compute-0 ceph-mgr[75345]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 07:11:19 compute-0 ceph-mgr[75345]: pidfile_write: ignore empty --pid-file
Nov 29 07:11:19 compute-0 podman[75346]: 2025-11-29 07:11:19.459071742 +0000 UTC m=+0.055750053 container create 16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591 (image=quay.io/ceph/ceph:v18, name=stoic_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:19 compute-0 systemd[1]: Started libpod-conmon-16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591.scope.
Nov 29 07:11:19 compute-0 podman[75346]: 2025-11-29 07:11:19.435284327 +0000 UTC m=+0.031962638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed6b8ff4c4a7c1c1b7853f1d2aad942655a8bfb0e560ab88e893c27ed7a4c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed6b8ff4c4a7c1c1b7853f1d2aad942655a8bfb0e560ab88e893c27ed7a4c2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed6b8ff4c4a7c1c1b7853f1d2aad942655a8bfb0e560ab88e893c27ed7a4c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:19 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'alerts'
Nov 29 07:11:19 compute-0 podman[75346]: 2025-11-29 07:11:19.558297938 +0000 UTC m=+0.154976309 container init 16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591 (image=quay.io/ceph/ceph:v18, name=stoic_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:19 compute-0 podman[75346]: 2025-11-29 07:11:19.567428747 +0000 UTC m=+0.164107028 container start 16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591 (image=quay.io/ceph/ceph:v18, name=stoic_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:11:19 compute-0 podman[75346]: 2025-11-29 07:11:19.571196174 +0000 UTC m=+0.167874485 container attach 16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591 (image=quay.io/ceph/ceph:v18, name=stoic_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:19 compute-0 ceph-mgr[75345]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:11:19 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'balancer'
Nov 29 07:11:19 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:19.878+0000 7f6a422fd140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:11:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460366094' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]: 
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]: {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "health": {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "status": "HEALTH_OK",
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "checks": {},
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "mutes": []
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     },
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "election_epoch": 5,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "quorum": [
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         0
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     ],
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "quorum_names": [
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "compute-0"
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     ],
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "quorum_age": 2,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "monmap": {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "epoch": 1,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "min_mon_release_name": "reef",
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_mons": 1
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     },
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "osdmap": {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "epoch": 1,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_osds": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_up_osds": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "osd_up_since": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_in_osds": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "osd_in_since": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_remapped_pgs": 0
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     },
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "pgmap": {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "pgs_by_state": [],
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_pgs": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_pools": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_objects": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "data_bytes": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "bytes_used": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "bytes_avail": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "bytes_total": 0
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     },
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "fsmap": {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "epoch": 1,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "by_rank": [],
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "up:standby": 0
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     },
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "mgrmap": {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "available": false,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "num_standbys": 0,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "modules": [
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:             "iostat",
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:             "nfs",
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:             "restful"
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         ],
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "services": {}
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     },
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "servicemap": {
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "epoch": 1,
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:         "services": {}
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     },
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]:     "progress_events": {}
Nov 29 07:11:19 compute-0 stoic_ishizaka[75385]: }
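[annotation] This dump is the reply to the audited mon_command above ({"prefix": "status", "format": "json-pretty"}): HEALTH_OK, a one-monitor quorum 2 seconds old, zeroed osdmap/pgmap counters because no OSDs or pools exist yet, and mgrmap "available": false since mgr.compute-0.kzdpag is still loading its modules. A minimal consumer of this structure, using only field names present in the dump itself (reassembling the JSON by stripping the 'stoic_ishizaka[75385]: ' journal prefixes is assumed, not shown):

    from typing import Dict

    def summarize_status(status: Dict) -> str:
        """One-line summary of a 'ceph status --format json-pretty' document."""
        health = status["health"]["status"]
        mons = status["monmap"]["num_mons"]
        osds = status["osdmap"]["num_osds"]
        pgs = status["pgmap"]["num_pgs"]
        mgr = "up" if status["mgrmap"]["available"] else "none"
        return f"{health}: {mons} mon(s), {osds} osd(s), {pgs} pg(s), mgr {mgr}"

    # Applied to the dump above: 'HEALTH_OK: 1 mon(s), 0 osd(s), 0 pg(s), mgr none'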
Nov 29 07:11:20 compute-0 systemd[1]: libpod-16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591.scope: Deactivated successfully.
Nov 29 07:11:20 compute-0 podman[75346]: 2025-11-29 07:11:20.016544401 +0000 UTC m=+0.613222682 container died 16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591 (image=quay.io/ceph/ceph:v18, name=stoic_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ed6b8ff4c4a7c1c1b7853f1d2aad942655a8bfb0e560ab88e893c27ed7a4c2a-merged.mount: Deactivated successfully.
Nov 29 07:11:20 compute-0 ceph-mgr[75345]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:11:20 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'cephadm'
Nov 29 07:11:20 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:20.132+0000 7f6a422fd140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:11:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1460366094' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:20 compute-0 podman[75346]: 2025-11-29 07:11:20.513810732 +0000 UTC m=+1.110489013 container remove 16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591 (image=quay.io/ceph/ceph:v18, name=stoic_ishizaka, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:20 compute-0 systemd[1]: libpod-conmon-16eab9394a3ef313fd17581a47624554cf4332ed95986250636686bbb79a5591.scope: Deactivated successfully.
Nov 29 07:11:22 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'crash'
Nov 29 07:11:22 compute-0 ceph-mgr[75345]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:11:22 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'dashboard'
Nov 29 07:11:22 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:22.493+0000 7f6a422fd140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:11:22 compute-0 podman[75434]: 2025-11-29 07:11:22.601184104 +0000 UTC m=+0.057696698 container create 6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5 (image=quay.io/ceph/ceph:v18, name=keen_napier, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:22 compute-0 systemd[1]: Started libpod-conmon-6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5.scope.
Nov 29 07:11:22 compute-0 podman[75434]: 2025-11-29 07:11:22.571063749 +0000 UTC m=+0.027576373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89463f5f17b41a26847fd78ff3a3c76383abd2f3477b0fbfb3ce19665c56cd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89463f5f17b41a26847fd78ff3a3c76383abd2f3477b0fbfb3ce19665c56cd8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89463f5f17b41a26847fd78ff3a3c76383abd2f3477b0fbfb3ce19665c56cd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:22 compute-0 podman[75434]: 2025-11-29 07:11:22.712625307 +0000 UTC m=+0.169137981 container init 6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5 (image=quay.io/ceph/ceph:v18, name=keen_napier, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:22 compute-0 podman[75434]: 2025-11-29 07:11:22.719916143 +0000 UTC m=+0.176428727 container start 6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5 (image=quay.io/ceph/ceph:v18, name=keen_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:11:22 compute-0 podman[75434]: 2025-11-29 07:11:22.723780124 +0000 UTC m=+0.180292748 container attach 6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5 (image=quay.io/ceph/ceph:v18, name=keen_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/352993084' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:23 compute-0 keen_napier[75450]: 
Nov 29 07:11:23 compute-0 keen_napier[75450]: {
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "health": {
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "status": "HEALTH_OK",
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "checks": {},
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "mutes": []
Nov 29 07:11:23 compute-0 keen_napier[75450]:     },
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "election_epoch": 5,
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "quorum": [
Nov 29 07:11:23 compute-0 keen_napier[75450]:         0
Nov 29 07:11:23 compute-0 keen_napier[75450]:     ],
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "quorum_names": [
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "compute-0"
Nov 29 07:11:23 compute-0 keen_napier[75450]:     ],
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "quorum_age": 6,
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "monmap": {
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "epoch": 1,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "min_mon_release_name": "reef",
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_mons": 1
Nov 29 07:11:23 compute-0 keen_napier[75450]:     },
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "osdmap": {
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "epoch": 1,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_osds": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_up_osds": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "osd_up_since": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_in_osds": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "osd_in_since": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_remapped_pgs": 0
Nov 29 07:11:23 compute-0 keen_napier[75450]:     },
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "pgmap": {
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "pgs_by_state": [],
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_pgs": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_pools": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_objects": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "data_bytes": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "bytes_used": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "bytes_avail": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "bytes_total": 0
Nov 29 07:11:23 compute-0 keen_napier[75450]:     },
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "fsmap": {
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "epoch": 1,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "by_rank": [],
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "up:standby": 0
Nov 29 07:11:23 compute-0 keen_napier[75450]:     },
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "mgrmap": {
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "available": false,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "num_standbys": 0,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "modules": [
Nov 29 07:11:23 compute-0 keen_napier[75450]:             "iostat",
Nov 29 07:11:23 compute-0 keen_napier[75450]:             "nfs",
Nov 29 07:11:23 compute-0 keen_napier[75450]:             "restful"
Nov 29 07:11:23 compute-0 keen_napier[75450]:         ],
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "services": {}
Nov 29 07:11:23 compute-0 keen_napier[75450]:     },
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "servicemap": {
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "epoch": 1,
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:23 compute-0 keen_napier[75450]:         "services": {}
Nov 29 07:11:23 compute-0 keen_napier[75450]:     },
Nov 29 07:11:23 compute-0 keen_napier[75450]:     "progress_events": {}
Nov 29 07:11:23 compute-0 keen_napier[75450]: }
Nov 29 07:11:23 compute-0 systemd[1]: libpod-6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5.scope: Deactivated successfully.
Nov 29 07:11:23 compute-0 podman[75434]: 2025-11-29 07:11:23.163789949 +0000 UTC m=+0.620302543 container died 6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5 (image=quay.io/ceph/ceph:v18, name=keen_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f89463f5f17b41a26847fd78ff3a3c76383abd2f3477b0fbfb3ce19665c56cd8-merged.mount: Deactivated successfully.
Nov 29 07:11:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/352993084' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:23 compute-0 podman[75434]: 2025-11-29 07:11:23.211089192 +0000 UTC m=+0.667601786 container remove 6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5 (image=quay.io/ceph/ceph:v18, name=keen_napier, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:23 compute-0 systemd[1]: libpod-conmon-6d7d19766d5d5f5095f8566c517eb352cbbc76d35dfa016170574582d7fd11c5.scope: Deactivated successfully.
Nov 29 07:11:23 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'devicehealth'
Nov 29 07:11:24 compute-0 ceph-mgr[75345]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:11:24 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 07:11:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:24.197+0000 7f6a422fd140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:11:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 07:11:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 07:11:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]:   from numpy import show_config as show_numpy_config
Nov 29 07:11:24 compute-0 ceph-mgr[75345]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:11:24 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'influx'
Nov 29 07:11:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:24.749+0000 7f6a422fd140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:11:24 compute-0 ceph-mgr[75345]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:11:24 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'insights'
Nov 29 07:11:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:24.979+0000 7f6a422fd140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
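[annotation] Each "Module <name> has missing NOTIFY_TYPES member" line is ceph-mgr noting that a bundled Python module does not declare the notification types it consumes; in this boot the messages are non-fatal and loading simply proceeds to the next module. A throwaway sketch for collecting the flagged module names from journal lines, with the message wording copied from the entries above:

    import re
    from typing import Iterable, Set

    MISSING_RE = re.compile(r"Module (\S+) has missing NOTIFY_TYPES member")

    def modules_missing_notify_types(lines: Iterable[str]) -> Set[str]:
        """Return the set of mgr module names flagged in NOTIFY_TYPES warnings."""
        found: Set[str] = set()
        for line in lines:
            m = MISSING_RE.search(line)
            if m:
                found.add(m.group(1))
        return found

    # From the lines above: {'alerts', 'balancer', 'crash', 'devicehealth',
    # 'diskprediction_local', 'influx'}, with more modules still loading below.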
Nov 29 07:11:25 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'iostat'
Nov 29 07:11:25 compute-0 podman[75488]: 2025-11-29 07:11:25.315566069 +0000 UTC m=+0.071435758 container create a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092 (image=quay.io/ceph/ceph:v18, name=recursing_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:11:25 compute-0 systemd[1]: Started libpod-conmon-a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092.scope.
Nov 29 07:11:25 compute-0 podman[75488]: 2025-11-29 07:11:25.283088697 +0000 UTC m=+0.038958476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a97795c1488f113bd024cd73faf88f158aac8055375578e59b82c816ca38369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a97795c1488f113bd024cd73faf88f158aac8055375578e59b82c816ca38369/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a97795c1488f113bd024cd73faf88f158aac8055375578e59b82c816ca38369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:25 compute-0 podman[75488]: 2025-11-29 07:11:25.415893516 +0000 UTC m=+0.171763225 container init a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092 (image=quay.io/ceph/ceph:v18, name=recursing_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:25 compute-0 podman[75488]: 2025-11-29 07:11:25.427483405 +0000 UTC m=+0.183353094 container start a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092 (image=quay.io/ceph/ceph:v18, name=recursing_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:11:25 compute-0 podman[75488]: 2025-11-29 07:11:25.431871909 +0000 UTC m=+0.187741618 container attach a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092 (image=quay.io/ceph/ceph:v18, name=recursing_villani, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:25 compute-0 ceph-mgr[75345]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:11:25 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'k8sevents'
Nov 29 07:11:25 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:25.461+0000 7f6a422fd140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:11:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/957246232' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:25 compute-0 recursing_villani[75504]: 
Nov 29 07:11:25 compute-0 recursing_villani[75504]: {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "health": {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "status": "HEALTH_OK",
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "checks": {},
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "mutes": []
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     },
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "election_epoch": 5,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "quorum": [
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         0
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     ],
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "quorum_names": [
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "compute-0"
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     ],
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "quorum_age": 8,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "monmap": {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "epoch": 1,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "min_mon_release_name": "reef",
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_mons": 1
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     },
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "osdmap": {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "epoch": 1,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_osds": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_up_osds": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "osd_up_since": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_in_osds": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "osd_in_since": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_remapped_pgs": 0
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     },
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "pgmap": {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "pgs_by_state": [],
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_pgs": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_pools": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_objects": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "data_bytes": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "bytes_used": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "bytes_avail": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "bytes_total": 0
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     },
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "fsmap": {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "epoch": 1,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "by_rank": [],
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "up:standby": 0
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     },
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "mgrmap": {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "available": false,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "num_standbys": 0,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "modules": [
Nov 29 07:11:25 compute-0 recursing_villani[75504]:             "iostat",
Nov 29 07:11:25 compute-0 recursing_villani[75504]:             "nfs",
Nov 29 07:11:25 compute-0 recursing_villani[75504]:             "restful"
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         ],
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "services": {}
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     },
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "servicemap": {
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "epoch": 1,
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:25 compute-0 recursing_villani[75504]:         "services": {}
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     },
Nov 29 07:11:25 compute-0 recursing_villani[75504]:     "progress_events": {}
Nov 29 07:11:25 compute-0 recursing_villani[75504]: }
Nov 29 07:11:25 compute-0 systemd[1]: libpod-a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092.scope: Deactivated successfully.
Nov 29 07:11:25 compute-0 podman[75488]: 2025-11-29 07:11:25.885650666 +0000 UTC m=+0.641520365 container died a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092 (image=quay.io/ceph/ceph:v18, name=recursing_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a97795c1488f113bd024cd73faf88f158aac8055375578e59b82c816ca38369-merged.mount: Deactivated successfully.
Nov 29 07:11:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/957246232' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:25 compute-0 podman[75488]: 2025-11-29 07:11:25.933909346 +0000 UTC m=+0.689779035 container remove a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092 (image=quay.io/ceph/ceph:v18, name=recursing_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:11:25 compute-0 systemd[1]: libpod-conmon-a2fbf08f4e02e669b6b2d298a854ec5ea570660e8eaa80b59080ce76c6fff092.scope: Deactivated successfully.
Nov 29 07:11:27 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'localpool'
Nov 29 07:11:27 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 07:11:28 compute-0 podman[75543]: 2025-11-29 07:11:28.026775454 +0000 UTC m=+0.063912555 container create db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa (image=quay.io/ceph/ceph:v18, name=suspicious_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:11:28 compute-0 systemd[1]: Started libpod-conmon-db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa.scope.
Nov 29 07:11:28 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'mirroring'
Nov 29 07:11:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3104fa6a3e609eee065e1621335f6cf799297372a3acc542df31e8896828a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3104fa6a3e609eee065e1621335f6cf799297372a3acc542df31e8896828a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3104fa6a3e609eee065e1621335f6cf799297372a3acc542df31e8896828a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:28 compute-0 podman[75543]: 2025-11-29 07:11:27.998460861 +0000 UTC m=+0.035598052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:28 compute-0 podman[75543]: 2025-11-29 07:11:28.106536407 +0000 UTC m=+0.143673518 container init db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa (image=quay.io/ceph/ceph:v18, name=suspicious_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:28 compute-0 podman[75543]: 2025-11-29 07:11:28.112526897 +0000 UTC m=+0.149664008 container start db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa (image=quay.io/ceph/ceph:v18, name=suspicious_hellman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:28 compute-0 podman[75543]: 2025-11-29 07:11:28.117183359 +0000 UTC m=+0.154320470 container attach db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa (image=quay.io/ceph/ceph:v18, name=suspicious_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:11:28 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'nfs'
Nov 29 07:11:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450503078' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]: 
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]: {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "health": {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "status": "HEALTH_OK",
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "checks": {},
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "mutes": []
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     },
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "election_epoch": 5,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "quorum": [
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         0
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     ],
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "quorum_names": [
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "compute-0"
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     ],
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "quorum_age": 11,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "monmap": {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "epoch": 1,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "min_mon_release_name": "reef",
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_mons": 1
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     },
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "osdmap": {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "epoch": 1,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_osds": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_up_osds": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "osd_up_since": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_in_osds": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "osd_in_since": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_remapped_pgs": 0
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     },
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "pgmap": {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "pgs_by_state": [],
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_pgs": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_pools": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_objects": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "data_bytes": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "bytes_used": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "bytes_avail": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "bytes_total": 0
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     },
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "fsmap": {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "epoch": 1,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "by_rank": [],
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "up:standby": 0
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     },
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "mgrmap": {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "available": false,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "num_standbys": 0,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "modules": [
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:             "iostat",
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:             "nfs",
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:             "restful"
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         ],
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "services": {}
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     },
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "servicemap": {
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "epoch": 1,
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:         "services": {}
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     },
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]:     "progress_events": {}
Nov 29 07:11:28 compute-0 suspicious_hellman[75559]: }
Nov 29 07:11:28 compute-0 systemd[1]: libpod-db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa.scope: Deactivated successfully.
Nov 29 07:11:28 compute-0 podman[75543]: 2025-11-29 07:11:28.51369422 +0000 UTC m=+0.550831351 container died db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa (image=quay.io/ceph/ceph:v18, name=suspicious_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c3104fa6a3e609eee065e1621335f6cf799297372a3acc542df31e8896828a8-merged.mount: Deactivated successfully.
Nov 29 07:11:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/450503078' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:28 compute-0 podman[75543]: 2025-11-29 07:11:28.55561231 +0000 UTC m=+0.592749421 container remove db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa (image=quay.io/ceph/ceph:v18, name=suspicious_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:11:28 compute-0 systemd[1]: libpod-conmon-db0890c7e942ab98453889899e597d42414b786ece85b9af8c06054352b0a2aa.scope: Deactivated successfully.
Nov 29 07:11:29 compute-0 ceph-mgr[75345]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:11:29 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:29.027+0000 7f6a422fd140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:11:29 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'orchestrator'
Nov 29 07:11:29 compute-0 ceph-mgr[75345]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:29 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 07:11:29 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:29.660+0000 7f6a422fd140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:29 compute-0 ceph-mgr[75345]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:11:29 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'osd_support'
Nov 29 07:11:29 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:29.917+0000 7f6a422fd140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:11:30 compute-0 ceph-mgr[75345]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:11:30 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:30.146+0000 7f6a422fd140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:11:30 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 07:11:30 compute-0 ceph-mgr[75345]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:11:30 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:30.418+0000 7f6a422fd140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:11:30 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'progress'
Nov 29 07:11:30 compute-0 podman[75598]: 2025-11-29 07:11:30.631862297 +0000 UTC m=+0.051198954 container create 7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975 (image=quay.io/ceph/ceph:v18, name=nice_lalande, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:11:30 compute-0 ceph-mgr[75345]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:11:30 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'prometheus'
Nov 29 07:11:30 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:30.647+0000 7f6a422fd140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:11:30 compute-0 systemd[1]: Started libpod-conmon-7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975.scope.
Nov 29 07:11:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2111f150787e7a99685a84e29a23182a88938bb48caad474cd41bd35b1286dd1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2111f150787e7a99685a84e29a23182a88938bb48caad474cd41bd35b1286dd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2111f150787e7a99685a84e29a23182a88938bb48caad474cd41bd35b1286dd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:30 compute-0 podman[75598]: 2025-11-29 07:11:30.611796047 +0000 UTC m=+0.031132814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:30 compute-0 podman[75598]: 2025-11-29 07:11:30.713750191 +0000 UTC m=+0.133086948 container init 7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975 (image=quay.io/ceph/ceph:v18, name=nice_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:11:30 compute-0 podman[75598]: 2025-11-29 07:11:30.720709488 +0000 UTC m=+0.140046145 container start 7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975 (image=quay.io/ceph/ceph:v18, name=nice_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:11:30 compute-0 podman[75598]: 2025-11-29 07:11:30.730162467 +0000 UTC m=+0.149499154 container attach 7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975 (image=quay.io/ceph/ceph:v18, name=nice_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:11:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257533678' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:31 compute-0 nice_lalande[75614]: 
Nov 29 07:11:31 compute-0 nice_lalande[75614]: {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "health": {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "status": "HEALTH_OK",
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "checks": {},
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "mutes": []
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     },
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "election_epoch": 5,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "quorum": [
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         0
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     ],
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "quorum_names": [
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "compute-0"
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     ],
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "quorum_age": 14,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "monmap": {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "epoch": 1,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "min_mon_release_name": "reef",
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_mons": 1
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     },
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "osdmap": {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "epoch": 1,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_osds": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_up_osds": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "osd_up_since": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_in_osds": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "osd_in_since": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_remapped_pgs": 0
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     },
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "pgmap": {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "pgs_by_state": [],
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_pgs": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_pools": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_objects": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "data_bytes": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "bytes_used": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "bytes_avail": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "bytes_total": 0
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     },
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "fsmap": {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "epoch": 1,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "by_rank": [],
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "up:standby": 0
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     },
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "mgrmap": {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "available": false,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "num_standbys": 0,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "modules": [
Nov 29 07:11:31 compute-0 nice_lalande[75614]:             "iostat",
Nov 29 07:11:31 compute-0 nice_lalande[75614]:             "nfs",
Nov 29 07:11:31 compute-0 nice_lalande[75614]:             "restful"
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         ],
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "services": {}
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     },
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "servicemap": {
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "epoch": 1,
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:31 compute-0 nice_lalande[75614]:         "services": {}
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     },
Nov 29 07:11:31 compute-0 nice_lalande[75614]:     "progress_events": {}
Nov 29 07:11:31 compute-0 nice_lalande[75614]: }
Nov 29 07:11:31 compute-0 systemd[1]: libpod-7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975.scope: Deactivated successfully.
Nov 29 07:11:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4257533678' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:31 compute-0 podman[75640]: 2025-11-29 07:11:31.187902775 +0000 UTC m=+0.038296838 container died 7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975 (image=quay.io/ceph/ceph:v18, name=nice_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2111f150787e7a99685a84e29a23182a88938bb48caad474cd41bd35b1286dd1-merged.mount: Deactivated successfully.
Nov 29 07:11:31 compute-0 podman[75640]: 2025-11-29 07:11:31.240707274 +0000 UTC m=+0.091101337 container remove 7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975 (image=quay.io/ceph/ceph:v18, name=nice_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:31 compute-0 systemd[1]: libpod-conmon-7f4a46931844b156fdc1da6090685bcd3ccc9b4e6f018de77bb9f232af258975.scope: Deactivated successfully.
Nov 29 07:11:31 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:31.638+0000 7f6a422fd140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:11:31 compute-0 ceph-mgr[75345]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:11:31 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'rbd_support'
Nov 29 07:11:31 compute-0 ceph-mgr[75345]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:11:31 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:31.948+0000 7f6a422fd140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:11:31 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'restful'
Nov 29 07:11:32 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'rgw'
Nov 29 07:11:33 compute-0 ceph-mgr[75345]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:11:33 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:33.360+0000 7f6a422fd140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:11:33 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'rook'
Nov 29 07:11:33 compute-0 podman[75655]: 2025-11-29 07:11:33.315743856 +0000 UTC m=+0.039233504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:33 compute-0 podman[75655]: 2025-11-29 07:11:33.532744504 +0000 UTC m=+0.256234062 container create 116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338 (image=quay.io/ceph/ceph:v18, name=boring_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:33 compute-0 systemd[1]: Started libpod-conmon-116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338.scope.
Nov 29 07:11:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046386dd6d52cfd655670af29eb117ce54c72dcd1178d51c295f5660c9292731/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046386dd6d52cfd655670af29eb117ce54c72dcd1178d51c295f5660c9292731/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046386dd6d52cfd655670af29eb117ce54c72dcd1178d51c295f5660c9292731/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:33 compute-0 podman[75655]: 2025-11-29 07:11:33.630179499 +0000 UTC m=+0.353669137 container init 116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338 (image=quay.io/ceph/ceph:v18, name=boring_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:33 compute-0 podman[75655]: 2025-11-29 07:11:33.635900871 +0000 UTC m=+0.359390429 container start 116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338 (image=quay.io/ceph/ceph:v18, name=boring_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:33 compute-0 podman[75655]: 2025-11-29 07:11:33.642623642 +0000 UTC m=+0.366113230 container attach 116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338 (image=quay.io/ceph/ceph:v18, name=boring_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:34 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2848183080' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:34 compute-0 boring_rubin[75672]: 
Nov 29 07:11:34 compute-0 boring_rubin[75672]: {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "health": {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "status": "HEALTH_OK",
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "checks": {},
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "mutes": []
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     },
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "election_epoch": 5,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "quorum": [
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         0
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     ],
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "quorum_names": [
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "compute-0"
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     ],
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "quorum_age": 16,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "monmap": {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "epoch": 1,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "min_mon_release_name": "reef",
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_mons": 1
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     },
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "osdmap": {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "epoch": 1,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_osds": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_up_osds": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "osd_up_since": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_in_osds": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "osd_in_since": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_remapped_pgs": 0
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     },
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "pgmap": {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "pgs_by_state": [],
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_pgs": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_pools": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_objects": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "data_bytes": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "bytes_used": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "bytes_avail": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "bytes_total": 0
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     },
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "fsmap": {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "epoch": 1,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "by_rank": [],
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "up:standby": 0
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     },
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "mgrmap": {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "available": false,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "num_standbys": 0,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "modules": [
Nov 29 07:11:34 compute-0 boring_rubin[75672]:             "iostat",
Nov 29 07:11:34 compute-0 boring_rubin[75672]:             "nfs",
Nov 29 07:11:34 compute-0 boring_rubin[75672]:             "restful"
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         ],
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "services": {}
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     },
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "servicemap": {
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "epoch": 1,
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:34 compute-0 boring_rubin[75672]:         "services": {}
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     },
Nov 29 07:11:34 compute-0 boring_rubin[75672]:     "progress_events": {}
Nov 29 07:11:34 compute-0 boring_rubin[75672]: }
Nov 29 07:11:34 compute-0 systemd[1]: libpod-116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338.scope: Deactivated successfully.
Nov 29 07:11:34 compute-0 conmon[75672]: conmon 116ee294a1fb489e8112 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338.scope/container/memory.events
Nov 29 07:11:34 compute-0 podman[75655]: 2025-11-29 07:11:34.080553439 +0000 UTC m=+0.804042997 container died 116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338 (image=quay.io/ceph/ceph:v18, name=boring_rubin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-046386dd6d52cfd655670af29eb117ce54c72dcd1178d51c295f5660c9292731-merged.mount: Deactivated successfully.
Nov 29 07:11:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2848183080' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:34 compute-0 podman[75655]: 2025-11-29 07:11:34.129383744 +0000 UTC m=+0.852873322 container remove 116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338 (image=quay.io/ceph/ceph:v18, name=boring_rubin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:34 compute-0 systemd[1]: libpod-conmon-116ee294a1fb489e81128b9884d40871e8d1f565fa1bb98f729f24e2f0863338.scope: Deactivated successfully.
Nov 29 07:11:35 compute-0 ceph-mgr[75345]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:11:35 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:35.421+0000 7f6a422fd140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:11:35 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'selftest'
Nov 29 07:11:35 compute-0 ceph-mgr[75345]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:11:35 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:35.661+0000 7f6a422fd140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:11:35 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'snap_schedule'
Nov 29 07:11:35 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:35.920+0000 7f6a422fd140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:11:35 compute-0 ceph-mgr[75345]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:11:35 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'stats'
Nov 29 07:11:36 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'status'
Nov 29 07:11:36 compute-0 podman[75713]: 2025-11-29 07:11:36.202766119 +0000 UTC m=+0.042783524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:36 compute-0 podman[75713]: 2025-11-29 07:11:36.358424327 +0000 UTC m=+0.198441692 container create 13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7 (image=quay.io/ceph/ceph:v18, name=hopeful_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:36 compute-0 systemd[1]: Started libpod-conmon-13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7.scope.
Nov 29 07:11:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de6267fa1271990b5fede80cd1224b350ec958b94d7ddf763a534071e88a6f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de6267fa1271990b5fede80cd1224b350ec958b94d7ddf763a534071e88a6f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de6267fa1271990b5fede80cd1224b350ec958b94d7ddf763a534071e88a6f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:36 compute-0 ceph-mgr[75345]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:11:36 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'telegraf'
Nov 29 07:11:36 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:36.435+0000 7f6a422fd140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:11:36 compute-0 podman[75713]: 2025-11-29 07:11:36.455649955 +0000 UTC m=+0.295667340 container init 13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7 (image=quay.io/ceph/ceph:v18, name=hopeful_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:11:36 compute-0 podman[75713]: 2025-11-29 07:11:36.464382654 +0000 UTC m=+0.304400009 container start 13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7 (image=quay.io/ceph/ceph:v18, name=hopeful_williams, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:36 compute-0 podman[75713]: 2025-11-29 07:11:36.467854732 +0000 UTC m=+0.307872097 container attach 13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7 (image=quay.io/ceph/ceph:v18, name=hopeful_williams, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:11:36 compute-0 ceph-mgr[75345]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:11:36 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:36.662+0000 7f6a422fd140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:11:36 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'telemetry'
Nov 29 07:11:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053135047' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:36 compute-0 hopeful_williams[75729]: 
Nov 29 07:11:36 compute-0 hopeful_williams[75729]: {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "health": {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "status": "HEALTH_OK",
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "checks": {},
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "mutes": []
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     },
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "election_epoch": 5,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "quorum": [
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         0
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     ],
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "quorum_names": [
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "compute-0"
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     ],
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "quorum_age": 19,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "monmap": {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "epoch": 1,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "min_mon_release_name": "reef",
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_mons": 1
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     },
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "osdmap": {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "epoch": 1,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_osds": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_up_osds": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "osd_up_since": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_in_osds": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "osd_in_since": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_remapped_pgs": 0
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     },
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "pgmap": {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "pgs_by_state": [],
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_pgs": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_pools": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_objects": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "data_bytes": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "bytes_used": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "bytes_avail": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "bytes_total": 0
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     },
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "fsmap": {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "epoch": 1,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "by_rank": [],
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "up:standby": 0
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     },
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "mgrmap": {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "available": false,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "num_standbys": 0,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "modules": [
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:             "iostat",
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:             "nfs",
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:             "restful"
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         ],
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "services": {}
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     },
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "servicemap": {
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "epoch": 1,
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:         "services": {}
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     },
Nov 29 07:11:36 compute-0 hopeful_williams[75729]:     "progress_events": {}
Nov 29 07:11:36 compute-0 hopeful_williams[75729]: }
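
The JSON above is the output of the `ceph status --format json-pretty` call that the audit log records just before it. A minimal Python sketch of consuming the same structure, assuming the `ceph` CLI is on PATH with a reachable monitor and admin keyring; the field names are taken directly from the dump above:

    import json
    import subprocess

    # Run the same command the audit log records and read the fields shown
    # in the dump above. Assumes the ceph CLI and client.admin credentials.
    def cluster_status() -> dict:
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        s = cluster_status()
        print(s["fsid"], s["health"]["status"])
        # At this point in the log the mgr had not finished starting:
        # "mgrmap": {"available": false, ...}
        print("mgr available:", s["mgrmap"]["available"])
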
Nov 29 07:11:36 compute-0 systemd[1]: libpod-13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7.scope: Deactivated successfully.
Nov 29 07:11:36 compute-0 podman[75713]: 2025-11-29 07:11:36.932070415 +0000 UTC m=+0.772087810 container died 13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7 (image=quay.io/ceph/ceph:v18, name=hopeful_williams, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:37 compute-0 ceph-mgr[75345]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:11:37 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 07:11:37 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:37.256+0000 7f6a422fd140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:11:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2053135047' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6de6267fa1271990b5fede80cd1224b350ec958b94d7ddf763a534071e88a6f0-merged.mount: Deactivated successfully.
Nov 29 07:11:37 compute-0 podman[75713]: 2025-11-29 07:11:37.411683125 +0000 UTC m=+1.251700510 container remove 13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7 (image=quay.io/ceph/ceph:v18, name=hopeful_williams, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:11:37 compute-0 systemd[1]: libpod-conmon-13d4ac937c14849e7bde471a81fc2ca5acb06e8271e7d0912ac55ea2f85711e7.scope: Deactivated successfully.
Nov 29 07:11:37 compute-0 ceph-mgr[75345]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:37 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:37.911+0000 7f6a422fd140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:37 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'volumes'
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'zabbix'
Nov 29 07:11:38 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:38.602+0000 7f6a422fd140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:11:38 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:38.827+0000 7f6a422fd140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
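
The repeated "Module X has missing NOTIFY_TYPES member" lines are the mgr noting that a Python module did not declare which cluster-map notifications it consumes; in reef this is a warning, not a load failure. A sketch of the declaration the warning refers to, assuming the reef `mgr_module` API; `mgr_module` is only importable inside a running ceph-mgr, so this is not standalone-runnable:

    from typing import List

    from mgr_module import MgrModule, NotifyType  # provided by ceph-mgr

    class Example(MgrModule):
        # Declaring the member (even as an empty list) avoids the warning;
        # the two types listed here are illustrative choices.
        NOTIFY_TYPES: List[NotifyType] = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("notification: %s", notify_type)
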
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: ms_deliver_dispatch: unhandled message 0x5578b06751e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kzdpag
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr handle_mgr_map Activating!
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr handle_mgr_map I am now activating
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.kzdpag(active, starting, since 0.0132279s)
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kzdpag", "id": "compute-0.kzdpag"} v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kzdpag", "id": "compute-0.kzdpag"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: balancer
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [balancer INFO root] Starting
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: crash
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: devicehealth
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: iostat
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Starting
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Manager daemon compute-0.kzdpag is now available
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:11:38
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [balancer INFO root] No pools available
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: nfs
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: orchestrator
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: pg_autoscaler
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: progress
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [progress INFO root] Loading...
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [progress INFO root] No stored events to load
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [progress INFO root] Loaded [] historic events
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] recovery thread starting
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] starting setup
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: rbd_support
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: restful
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: status
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/mirror_snapshot_schedule"} v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: telemetry
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [restful WARNING root] server not running: no certificate configured
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] PerfHandler: starting
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TaskHandler: starting
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/trash_purge_schedule"} v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/trash_purge_schedule"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' 
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: [rbd_support INFO root] setup complete
Nov 29 07:11:38 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: volumes
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' 
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 07:11:38 compute-0 ceph-mon[75050]: Activating manager daemon compute-0.kzdpag
Nov 29 07:11:38 compute-0 ceph-mon[75050]: mgrmap e2: compute-0.kzdpag(active, starting, since 0.0132279s)
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kzdpag", "id": "compute-0.kzdpag"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: Manager daemon compute-0.kzdpag is now available
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/trash_purge_schedule"}]: dispatch
Nov 29 07:11:38 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' 
Nov 29 07:11:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' 
Nov 29 07:11:39 compute-0 podman[75848]: 2025-11-29 07:11:39.497119362 +0000 UTC m=+0.051795829 container create ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:39 compute-0 systemd[1]: Started libpod-conmon-ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd.scope.
Nov 29 07:11:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f714bd86863e6ed856efa82bf9bcf55b968b0af1604ae9558c2d88036b1eb089/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f714bd86863e6ed856efa82bf9bcf55b968b0af1604ae9558c2d88036b1eb089/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f714bd86863e6ed856efa82bf9bcf55b968b0af1604ae9558c2d88036b1eb089/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:39 compute-0 podman[75848]: 2025-11-29 07:11:39.477878356 +0000 UTC m=+0.032554873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:39 compute-0 podman[75848]: 2025-11-29 07:11:39.582504065 +0000 UTC m=+0.137180622 container init ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:11:39 compute-0 podman[75848]: 2025-11-29 07:11:39.592639992 +0000 UTC m=+0.147316499 container start ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:39 compute-0 podman[75848]: 2025-11-29 07:11:39.596811621 +0000 UTC m=+0.151488168 container attach ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.kzdpag(active, since 1.02386s)
Nov 29 07:11:39 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' 
Nov 29 07:11:39 compute-0 ceph-mon[75050]: from='mgr.14102 192.168.122.100:0/4170807624' entity='mgr.compute-0.kzdpag' 
Nov 29 07:11:39 compute-0 ceph-mon[75050]: mgrmap e3: compute-0.kzdpag(active, since 1.02386s)
Nov 29 07:11:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 07:11:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518035034' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]: 
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]: {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "health": {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "status": "HEALTH_OK",
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "checks": {},
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "mutes": []
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     },
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "election_epoch": 5,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "quorum": [
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         0
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     ],
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "quorum_names": [
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "compute-0"
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     ],
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "quorum_age": 23,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "monmap": {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "epoch": 1,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "min_mon_release_name": "reef",
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_mons": 1
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     },
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "osdmap": {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "epoch": 1,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_osds": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_up_osds": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "osd_up_since": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_in_osds": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "osd_in_since": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_remapped_pgs": 0
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     },
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "pgmap": {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "pgs_by_state": [],
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_pgs": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_pools": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_objects": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "data_bytes": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "bytes_used": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "bytes_avail": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "bytes_total": 0
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     },
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "fsmap": {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "epoch": 1,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "by_rank": [],
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "up:standby": 0
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     },
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "mgrmap": {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "available": true,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "num_standbys": 0,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "modules": [
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:             "iostat",
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:             "nfs",
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:             "restful"
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         ],
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "services": {}
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     },
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "servicemap": {
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "epoch": 1,
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "modified": "2025-11-29T07:11:14.129402+0000",
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:         "services": {}
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     },
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]:     "progress_events": {}
Nov 29 07:11:40 compute-0 romantic_lovelace[75864]: }
Nov 29 07:11:40 compute-0 systemd[1]: libpod-ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd.scope: Deactivated successfully.
Nov 29 07:11:40 compute-0 podman[75848]: 2025-11-29 07:11:40.336227683 +0000 UTC m=+0.890904150 container died ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f714bd86863e6ed856efa82bf9bcf55b968b0af1604ae9558c2d88036b1eb089-merged.mount: Deactivated successfully.
Nov 29 07:11:40 compute-0 podman[75848]: 2025-11-29 07:11:40.417285193 +0000 UTC m=+0.971961650 container remove ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd (image=quay.io/ceph/ceph:v18, name=romantic_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:11:40 compute-0 systemd[1]: libpod-conmon-ced33a3804aa111b3d23187506e0524120d140727c229befd0b37b4a5a8c34dd.scope: Deactivated successfully.
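
Each short-lived container above (create, init, start, attach, died, remove) is cephadm running a single ceph CLI call in a throwaway container from quay.io/ceph/ceph:v18. A hedged Python sketch of the same pattern; the image and command come from the log, while the bind mounts are assumptions suggested by the xfs remount lines, not copied from this host:

    import subprocess

    # One CLI call per throwaway container, as in the podman events above.
    result = subprocess.run(
        [
            "podman", "run", "--rm",
            "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro",            # assumed mount
            "-v", "/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:ro",  # assumed mount
            "quay.io/ceph/ceph:v18",
            "ceph", "status", "--format", "json-pretty",
        ],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
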
Nov 29 07:11:40 compute-0 podman[75902]: 2025-11-29 07:11:40.493674981 +0000 UTC m=+0.049828645 container create 3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf (image=quay.io/ceph/ceph:v18, name=modest_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:11:40 compute-0 systemd[1]: Started libpod-conmon-3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf.scope.
Nov 29 07:11:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54228ce2eede9fff7a9892da83dc3f341dce74d64e06613be21919d0103b7cfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54228ce2eede9fff7a9892da83dc3f341dce74d64e06613be21919d0103b7cfb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54228ce2eede9fff7a9892da83dc3f341dce74d64e06613be21919d0103b7cfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54228ce2eede9fff7a9892da83dc3f341dce74d64e06613be21919d0103b7cfb/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:40 compute-0 podman[75902]: 2025-11-29 07:11:40.565152748 +0000 UTC m=+0.121306432 container init 3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf (image=quay.io/ceph/ceph:v18, name=modest_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:40 compute-0 podman[75902]: 2025-11-29 07:11:40.570776168 +0000 UTC m=+0.126929832 container start 3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf (image=quay.io/ceph/ceph:v18, name=modest_raman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:40 compute-0 podman[75902]: 2025-11-29 07:11:40.475551856 +0000 UTC m=+0.031705540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:40 compute-0 podman[75902]: 2025-11-29 07:11:40.573512426 +0000 UTC m=+0.129666090 container attach 3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf (image=quay.io/ceph/ceph:v18, name=modest_raman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:11:40 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:11:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2518035034' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 07:11:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.kzdpag(active, since 2s)
Nov 29 07:11:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 07:11:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3374271507' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
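
`config assimilate-conf` ingests a ceph.conf into the monitors' central configuration database. A one-line sketch of issuing the audited command, assuming the ceph CLI and an illustrative /etc/ceph/ceph.conf input path:

    import subprocess

    # Feed a local ceph.conf into the central config store; the input path
    # is an assumption for illustration.
    subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True,
    )
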
Nov 29 07:11:41 compute-0 systemd[1]: libpod-3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf.scope: Deactivated successfully.
Nov 29 07:11:41 compute-0 podman[75902]: 2025-11-29 07:11:41.18542075 +0000 UTC m=+0.741574434 container died 3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf (image=quay.io/ceph/ceph:v18, name=modest_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-54228ce2eede9fff7a9892da83dc3f341dce74d64e06613be21919d0103b7cfb-merged.mount: Deactivated successfully.
Nov 29 07:11:41 compute-0 podman[75902]: 2025-11-29 07:11:41.241034688 +0000 UTC m=+0.797188392 container remove 3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf (image=quay.io/ceph/ceph:v18, name=modest_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:41 compute-0 systemd[1]: libpod-conmon-3037493deb19b86f806d302c62e9c6f4f946bdc796c1c2310a497dffa54bb4bf.scope: Deactivated successfully.
Nov 29 07:11:41 compute-0 podman[75955]: 2025-11-29 07:11:41.314150773 +0000 UTC m=+0.048878359 container create 12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0 (image=quay.io/ceph/ceph:v18, name=pedantic_curie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:11:41 compute-0 systemd[1]: Started libpod-conmon-12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0.scope.
Nov 29 07:11:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:41 compute-0 podman[75955]: 2025-11-29 07:11:41.29112748 +0000 UTC m=+0.025855126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29540dbbc1c485ca4063efd0326ed76b3558d2969edb2fb15bed7a9f8e119fd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29540dbbc1c485ca4063efd0326ed76b3558d2969edb2fb15bed7a9f8e119fd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29540dbbc1c485ca4063efd0326ed76b3558d2969edb2fb15bed7a9f8e119fd0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:41 compute-0 podman[75955]: 2025-11-29 07:11:41.398890847 +0000 UTC m=+0.133618453 container init 12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0 (image=quay.io/ceph/ceph:v18, name=pedantic_curie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:11:41 compute-0 podman[75955]: 2025-11-29 07:11:41.404129126 +0000 UTC m=+0.138856702 container start 12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0 (image=quay.io/ceph/ceph:v18, name=pedantic_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:11:41 compute-0 podman[75955]: 2025-11-29 07:11:41.407642246 +0000 UTC m=+0.142369922 container attach 12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0 (image=quay.io/ceph/ceph:v18, name=pedantic_curie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:11:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 07:11:42 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2935788544' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 07:11:42 compute-0 ceph-mon[75050]: mgrmap e4: compute-0.kzdpag(active, since 2s)
Nov 29 07:11:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3374271507' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:11:42 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:11:44 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:11:45 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2935788544' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
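
Enabling a mgr module changes the set of enabled modules, which is why the active mgr respawns immediately below. The same command the audit entry records, reissued from Python (assumes the ceph CLI):

    import subprocess

    # Direct analogue of the audited "mgr module enable" command above.
    subprocess.run(["ceph", "mgr", "module", "enable", "cephadm"], check=True)
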
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  1: '-n'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  2: 'mgr.compute-0.kzdpag'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  3: '-f'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  4: '--setuser'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  5: 'ceph'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  6: '--setgroup'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  7: 'ceph'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  8: '--default-log-to-file=false'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  9: '--default-log-to-journald=true'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr respawn  exe_path /proc/self/exe
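
The respawn lines show the mgr re-executing itself through /proc/self/exe with its saved argument vector after the module set changed. A rough Python analogue of that restart pattern, not ceph's actual C++ implementation; the RESPAWNED variable is an invented guard so the sketch cannot loop:

    import os
    import sys

    # Re-exec the current process with its original argv, as in the
    # "mgr respawn" sequence above (ceph-mgr does this in C++).
    def respawn() -> None:
        if os.environ.get("RESPAWNED"):
            return
        os.environ["RESPAWNED"] = "1"
        # For a Python script the interpreter path stands in for
        # /proc/self/exe; argv is preserved, matching the log above.
        os.execv(sys.executable, [sys.executable] + sys.argv)
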
Nov 29 07:11:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.kzdpag(active, since 6s)
Nov 29 07:11:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2935788544' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 07:11:45 compute-0 systemd[1]: libpod-12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0.scope: Deactivated successfully.
Nov 29 07:11:45 compute-0 podman[76001]: 2025-11-29 07:11:45.499069597 +0000 UTC m=+0.032174865 container died 12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0 (image=quay.io/ceph/ceph:v18, name=pedantic_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-29540dbbc1c485ca4063efd0326ed76b3558d2969edb2fb15bed7a9f8e119fd0-merged.mount: Deactivated successfully.
Nov 29 07:11:45 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: ignoring --setuser ceph since I am not root
Nov 29 07:11:45 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: ignoring --setgroup ceph since I am not root
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: pidfile_write: ignore empty --pid-file
Nov 29 07:11:45 compute-0 podman[76001]: 2025-11-29 07:11:45.561834317 +0000 UTC m=+0.094939555 container remove 12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0 (image=quay.io/ceph/ceph:v18, name=pedantic_curie, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:11:45 compute-0 systemd[1]: libpod-conmon-12bc647b40118f57f589523b389542f4160339f1eadb682d1207fd256c455cb0.scope: Deactivated successfully.
Nov 29 07:11:45 compute-0 podman[76040]: 2025-11-29 07:11:45.629166338 +0000 UTC m=+0.038738460 container create 4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1 (image=quay.io/ceph/ceph:v18, name=gallant_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'alerts'
Nov 29 07:11:45 compute-0 systemd[1]: Started libpod-conmon-4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1.scope.
Nov 29 07:11:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef229910bce393462ffdfbe93aa57957a9fdd2566cc8241ccfaad3d7dbfd68fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef229910bce393462ffdfbe93aa57957a9fdd2566cc8241ccfaad3d7dbfd68fa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef229910bce393462ffdfbe93aa57957a9fdd2566cc8241ccfaad3d7dbfd68fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:45 compute-0 podman[76040]: 2025-11-29 07:11:45.613065461 +0000 UTC m=+0.022637613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:45 compute-0 podman[76040]: 2025-11-29 07:11:45.722789074 +0000 UTC m=+0.132361196 container init 4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1 (image=quay.io/ceph/ceph:v18, name=gallant_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:11:45 compute-0 podman[76040]: 2025-11-29 07:11:45.727865018 +0000 UTC m=+0.137437140 container start 4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1 (image=quay.io/ceph/ceph:v18, name=gallant_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:11:45 compute-0 podman[76040]: 2025-11-29 07:11:45.732245073 +0000 UTC m=+0.141817215 container attach 4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1 (image=quay.io/ceph/ceph:v18, name=gallant_franklin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:11:45 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'balancer'
Nov 29 07:11:45 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:45.983+0000 7f4bbb90d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:11:46 compute-0 ceph-mgr[75345]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:11:46 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'cephadm'
Nov 29 07:11:46 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:46.245+0000 7f4bbb90d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:11:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 07:11:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212383305' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 07:11:46 compute-0 gallant_franklin[76057]: {
Nov 29 07:11:46 compute-0 gallant_franklin[76057]:     "epoch": 5,
Nov 29 07:11:46 compute-0 gallant_franklin[76057]:     "available": true,
Nov 29 07:11:46 compute-0 gallant_franklin[76057]:     "active_name": "compute-0.kzdpag",
Nov 29 07:11:46 compute-0 gallant_franklin[76057]:     "num_standby": 0
Nov 29 07:11:46 compute-0 gallant_franklin[76057]: }
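
`mgr stat` returns the small JSON above (epoch, available, active_name, num_standby) and is a cheap readiness probe; note that available flipped from false in the earlier status dump to true here. A hedged polling sketch, assuming the ceph CLI; the 60 s budget is arbitrary:

    import json
    import subprocess
    import time

    # Poll `ceph mgr stat --format json` until the active mgr reports
    # available, as it does in the output above.
    def wait_for_mgr(timeout: float = 60.0) -> dict:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["ceph", "mgr", "stat", "--format", "json"],
                check=True, capture_output=True, text=True,
            ).stdout
            stat = json.loads(out)
            if stat.get("available"):
                return stat
            time.sleep(2)
        raise TimeoutError("mgr did not become available")
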
Nov 29 07:11:46 compute-0 systemd[1]: libpod-4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1.scope: Deactivated successfully.
Nov 29 07:11:46 compute-0 podman[76083]: 2025-11-29 07:11:46.329382168 +0000 UTC m=+0.027020378 container died 4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1 (image=quay.io/ceph/ceph:v18, name=gallant_franklin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef229910bce393462ffdfbe93aa57957a9fdd2566cc8241ccfaad3d7dbfd68fa-merged.mount: Deactivated successfully.
Nov 29 07:11:46 compute-0 podman[76083]: 2025-11-29 07:11:46.372517251 +0000 UTC m=+0.070155441 container remove 4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1 (image=quay.io/ceph/ceph:v18, name=gallant_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:11:46 compute-0 systemd[1]: libpod-conmon-4f4e9b984214457745a2b9dab99965c16294e072d40d38e43da28bb6de7983d1.scope: Deactivated successfully.
Nov 29 07:11:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2935788544' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 07:11:46 compute-0 ceph-mon[75050]: mgrmap e5: compute-0.kzdpag(active, since 6s)
Nov 29 07:11:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/212383305' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 07:11:46 compute-0 podman[76096]: 2025-11-29 07:11:46.463668998 +0000 UTC m=+0.057522773 container create bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8 (image=quay.io/ceph/ceph:v18, name=cool_vaughan, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:11:46 compute-0 systemd[1]: Started libpod-conmon-bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8.scope.
Nov 29 07:11:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:11:46 compute-0 podman[76096]: 2025-11-29 07:11:46.445703869 +0000 UTC m=+0.039557624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5a4b17abfa0d459905cda7c236261043d347060f33bc6dc4937b0d4f56c949/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5a4b17abfa0d459905cda7c236261043d347060f33bc6dc4937b0d4f56c949/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5a4b17abfa0d459905cda7c236261043d347060f33bc6dc4937b0d4f56c949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:11:46 compute-0 podman[76096]: 2025-11-29 07:11:46.569106031 +0000 UTC m=+0.162959786 container init bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8 (image=quay.io/ceph/ceph:v18, name=cool_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:11:46 compute-0 podman[76096]: 2025-11-29 07:11:46.579371671 +0000 UTC m=+0.173225406 container start bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8 (image=quay.io/ceph/ceph:v18, name=cool_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:11:46 compute-0 podman[76096]: 2025-11-29 07:11:46.583738806 +0000 UTC m=+0.177592541 container attach bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8 (image=quay.io/ceph/ceph:v18, name=cool_vaughan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:11:48 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'crash'
Nov 29 07:11:48 compute-0 ceph-mgr[75345]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:11:48 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:48.614+0000 7f4bbb90d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:11:48 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'dashboard'
Nov 29 07:11:50 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'devicehealth'
Nov 29 07:11:50 compute-0 ceph-mgr[75345]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:11:50 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 07:11:50 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:50.464+0000 7f4bbb90d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 07:11:51 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 07:11:51 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]:   from numpy import show_config as show_numpy_config
Nov 29 07:11:51 compute-0 ceph-mgr[75345]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'influx'
Nov 29 07:11:51 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:51.015+0000 7f4bbb90d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-0 ceph-mgr[75345]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'insights'
Nov 29 07:11:51 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:51.279+0000 7f4bbb90d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'iostat'
Nov 29 07:11:51 compute-0 ceph-mgr[75345]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:11:51 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'k8sevents'
Nov 29 07:11:51 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:51.775+0000 7f4bbb90d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 07:11:53 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'localpool'
Nov 29 07:11:53 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 07:11:54 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'mirroring'
Nov 29 07:11:54 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'nfs'
Nov 29 07:11:55 compute-0 ceph-mgr[75345]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:11:55 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'orchestrator'
Nov 29 07:11:55 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:55.321+0000 7f4bbb90d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 07:11:55 compute-0 ceph-mgr[75345]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:55 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:55.987+0000 7f4bbb90d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 07:11:55 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 07:11:56 compute-0 ceph-mgr[75345]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'osd_support'
Nov 29 07:11:56 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:56.269+0000 7f4bbb90d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-0 ceph-mgr[75345]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 07:11:56 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:56.541+0000 7f4bbb90d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-0 ceph-mgr[75345]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:11:56 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'progress'
Nov 29 07:11:56 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:56.796+0000 7f4bbb90d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 07:11:57 compute-0 ceph-mgr[75345]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:11:57 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'prometheus'
Nov 29 07:11:57 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:57.041+0000 7f4bbb90d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-0 ceph-mgr[75345]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'rbd_support'
Nov 29 07:11:58 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:58.098+0000 7f4bbb90d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-0 ceph-mgr[75345]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:11:58 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'restful'
Nov 29 07:11:58 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:58.394+0000 7f4bbb90d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 07:11:59 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'rgw'
Nov 29 07:11:59 compute-0 ceph-mgr[75345]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:11:59 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'rook'
Nov 29 07:11:59 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:11:59.873+0000 7f4bbb90d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 07:12:01 compute-0 ceph-mgr[75345]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:12:01 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'selftest'
Nov 29 07:12:01 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:01.912+0000 7f4bbb90d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 07:12:02 compute-0 ceph-mgr[75345]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:12:02 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'snap_schedule'
Nov 29 07:12:02 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:02.157+0000 7f4bbb90d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 07:12:02 compute-0 ceph-mgr[75345]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:12:02 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'stats'
Nov 29 07:12:02 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:02.401+0000 7f4bbb90d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 07:12:02 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'status'
Nov 29 07:12:02 compute-0 ceph-mgr[75345]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:12:02 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'telegraf'
Nov 29 07:12:02 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:02.888+0000 7f4bbb90d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-0 ceph-mgr[75345]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'telemetry'
Nov 29 07:12:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:03.121+0000 7f4bbb90d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-0 ceph-mgr[75345]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:12:03 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 07:12:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:03.703+0000 7f4bbb90d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 07:12:04 compute-0 ceph-mgr[75345]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:12:04 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'volumes'
Nov 29 07:12:04 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:04.366+0000 7f4bbb90d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr[py] Loading python module 'zabbix'
Nov 29 07:12:05 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:05.117+0000 7f4bbb90d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T07:12:05.350+0000 7f4bbb90d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Active manager daemon compute-0.kzdpag restarted
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kzdpag
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: ms_deliver_dispatch: unhandled message 0x562932abd1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr handle_mgr_map Activating!
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.kzdpag(active, starting, since 0.0520714s)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr handle_mgr_map I am now activating
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kzdpag", "id": "compute-0.kzdpag"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kzdpag", "id": "compute-0.kzdpag"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Manager daemon compute-0.kzdpag is now available
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: balancer
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Starting
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:12:05
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] No pools available
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mon[75050]: Active manager daemon compute-0.kzdpag restarted
Nov 29 07:12:05 compute-0 ceph-mon[75050]: Activating manager daemon compute-0.kzdpag
Nov 29 07:12:05 compute-0 ceph-mon[75050]: osdmap e2: 0 total, 0 up, 0 in
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mgrmap e6: compute-0.kzdpag(active, starting, since 0.0520714s)
Nov 29 07:12:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kzdpag", "id": "compute-0.kzdpag"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mon[75050]: Manager daemon compute-0.kzdpag is now available
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: cephadm
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: crash
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: devicehealth
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: iostat
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: nfs
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Starting
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: orchestrator
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: pg_autoscaler
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: progress
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [progress INFO root] Loading...
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [progress INFO root] No stored events to load
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [progress INFO root] Loaded [] historic events
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] recovery thread starting
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] starting setup
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: rbd_support
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: restful
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/mirror_snapshot_schedule"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: status
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: telemetry
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [restful WARNING root] server not running: no certificate configured
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] PerfHandler: starting
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TaskHandler: starting
Nov 29 07:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/trash_purge_schedule"} v 0) v1
Nov 29 07:12:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/trash_purge_schedule"}]: dispatch
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] setup complete
Nov 29 07:12:05 compute-0 ceph-mgr[75345]: mgr load Constructed class from module: volumes
Nov 29 07:12:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.kzdpag(active, since 1.05777s)
Nov 29 07:12:06 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 07:12:06 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 07:12:06 compute-0 cool_vaughan[76113]: {
Nov 29 07:12:06 compute-0 cool_vaughan[76113]:     "mgrmap_epoch": 7,
Nov 29 07:12:06 compute-0 cool_vaughan[76113]:     "initialized": true
Nov 29 07:12:06 compute-0 cool_vaughan[76113]: }
Nov 29 07:12:06 compute-0 systemd[1]: libpod-bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8.scope: Deactivated successfully.
Nov 29 07:12:06 compute-0 podman[76096]: 2025-11-29 07:12:06.44362634 +0000 UTC m=+20.037480155 container died bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8 (image=quay.io/ceph/ceph:v18, name=cool_vaughan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e5a4b17abfa0d459905cda7c236261043d347060f33bc6dc4937b0d4f56c949-merged.mount: Deactivated successfully.
Nov 29 07:12:06 compute-0 ceph-mon[75050]: Found migration_current of "None". Setting to last migration.
Nov 29 07:12:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/mirror_snapshot_schedule"}]: dispatch
Nov 29 07:12:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kzdpag/trash_purge_schedule"}]: dispatch
Nov 29 07:12:06 compute-0 ceph-mon[75050]: mgrmap e7: compute-0.kzdpag(active, since 1.05777s)
Nov 29 07:12:06 compute-0 podman[76096]: 2025-11-29 07:12:06.509355535 +0000 UTC m=+20.103209310 container remove bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8 (image=quay.io/ceph/ceph:v18, name=cool_vaughan, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:06 compute-0 systemd[1]: libpod-conmon-bee769ae1a7006b6192bb8fb2d6170242e3e5281c42d829b565f4d961aefa6e8.scope: Deactivated successfully.
Nov 29 07:12:06 compute-0 podman[76272]: 2025-11-29 07:12:06.599243646 +0000 UTC m=+0.056106803 container create 615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22 (image=quay.io/ceph/ceph:v18, name=pensive_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:12:06 compute-0 systemd[1]: Started libpod-conmon-615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22.scope.
Nov 29 07:12:06 compute-0 podman[76272]: 2025-11-29 07:12:06.573036932 +0000 UTC m=+0.029900119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7a2ab31d8329f090cef688fac449ccffc12ff736ce516a416d652f8494a06a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7a2ab31d8329f090cef688fac449ccffc12ff736ce516a416d652f8494a06a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7a2ab31d8329f090cef688fac449ccffc12ff736ce516a416d652f8494a06a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:06 compute-0 podman[76272]: 2025-11-29 07:12:06.695672412 +0000 UTC m=+0.152535609 container init 615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22 (image=quay.io/ceph/ceph:v18, name=pensive_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:06 compute-0 podman[76272]: 2025-11-29 07:12:06.705711627 +0000 UTC m=+0.162574754 container start 615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22 (image=quay.io/ceph/ceph:v18, name=pensive_poitras, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:12:06 compute-0 podman[76272]: 2025-11-29 07:12:06.709849974 +0000 UTC m=+0.166713141 container attach 615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22 (image=quay.io/ceph/ceph:v18, name=pensive_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:12:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 07:12:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 07:12:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:07 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 07:12:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019918574 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:07 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:12:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:07 compute-0 systemd[1]: libpod-615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22.scope: Deactivated successfully.
Nov 29 07:12:07 compute-0 podman[76272]: 2025-11-29 07:12:07.501103688 +0000 UTC m=+0.957966865 container died 615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22 (image=quay.io/ceph/ceph:v18, name=pensive_poitras, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:07 compute-0 ceph-mon[75050]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 07:12:07 compute-0 ceph-mon[75050]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 07:12:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad7a2ab31d8329f090cef688fac449ccffc12ff736ce516a416d652f8494a06a-merged.mount: Deactivated successfully.
Nov 29 07:12:07 compute-0 podman[76272]: 2025-11-29 07:12:07.86416807 +0000 UTC m=+1.321031197 container remove 615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22 (image=quay.io/ceph/ceph:v18, name=pensive_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:07 compute-0 systemd[1]: libpod-conmon-615b3f5abb8ed6ac8219af1bdc288bc67dad720eb317d21c4a004b0d34d8ca22.scope: Deactivated successfully.
Nov 29 07:12:07 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.kzdpag(active, since 2s)
Nov 29 07:12:07 compute-0 podman[76327]: 2025-11-29 07:12:07.975120598 +0000 UTC m=+0.076642906 container create 73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101 (image=quay.io/ceph/ceph:v18, name=romantic_khayyam, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:08 compute-0 systemd[1]: Started libpod-conmon-73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101.scope.
Nov 29 07:12:08 compute-0 podman[76327]: 2025-11-29 07:12:07.947497414 +0000 UTC m=+0.049019762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51eb920e83b103449d31266ac1fcf4b2ffc09587a5ba9b8a5bf2c334fafafcfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51eb920e83b103449d31266ac1fcf4b2ffc09587a5ba9b8a5bf2c334fafafcfe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51eb920e83b103449d31266ac1fcf4b2ffc09587a5ba9b8a5bf2c334fafafcfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 podman[76327]: 2025-11-29 07:12:08.059930465 +0000 UTC m=+0.161452813 container init 73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101 (image=quay.io/ceph/ceph:v18, name=romantic_khayyam, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:12:08 compute-0 podman[76327]: 2025-11-29 07:12:08.069768424 +0000 UTC m=+0.171290732 container start 73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101 (image=quay.io/ceph/ceph:v18, name=romantic_khayyam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:08 compute-0 podman[76327]: 2025-11-29 07:12:08.074242691 +0000 UTC m=+0.175764989 container attach 73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101 (image=quay.io/ceph/ceph:v18, name=romantic_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:08] ENGINE Bus STARTING
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:08] ENGINE Bus STARTING
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:08] ENGINE Serving on https://192.168.122.100:7150
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:08] ENGINE Serving on https://192.168.122.100:7150
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:08] ENGINE Client ('192.168.122.100', 45520) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:08] ENGINE Client ('192.168.122.100', 45520) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:08] ENGINE Serving on http://192.168.122.100:8765
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:08] ENGINE Serving on http://192.168.122.100:8765
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:08] ENGINE Bus STARTED
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:08] ENGINE Bus STARTED
Nov 29 07:12:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:12:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 07:12:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO root] Set ssh ssh_user
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 07:12:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 07:12:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO root] Set ssh ssh_config
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 07:12:08 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 07:12:08 compute-0 romantic_khayyam[76343]: ssh user set to ceph-admin. sudo will be used
Nov 29 07:12:08 compute-0 systemd[1]: libpod-73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101.scope: Deactivated successfully.
Nov 29 07:12:08 compute-0 podman[76327]: 2025-11-29 07:12:08.703175168 +0000 UTC m=+0.804697546 container died 73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101 (image=quay.io/ceph/ceph:v18, name=romantic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-51eb920e83b103449d31266ac1fcf4b2ffc09587a5ba9b8a5bf2c334fafafcfe-merged.mount: Deactivated successfully.
Nov 29 07:12:08 compute-0 podman[76327]: 2025-11-29 07:12:08.749330827 +0000 UTC m=+0.850853105 container remove 73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101 (image=quay.io/ceph/ceph:v18, name=romantic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:12:08 compute-0 systemd[1]: libpod-conmon-73b268ca894ea74bef36c9eced45919382af8a067ff6f30677b7aefb23027101.scope: Deactivated successfully.
Nov 29 07:12:08 compute-0 ceph-mon[75050]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:08 compute-0 ceph-mon[75050]: mgrmap e8: compute-0.kzdpag(active, since 2s)
Nov 29 07:12:08 compute-0 ceph-mon[75050]: [29/Nov/2025:07:12:08] ENGINE Bus STARTING
Nov 29 07:12:08 compute-0 ceph-mon[75050]: [29/Nov/2025:07:12:08] ENGINE Serving on https://192.168.122.100:7150
Nov 29 07:12:08 compute-0 ceph-mon[75050]: [29/Nov/2025:07:12:08] ENGINE Client ('192.168.122.100', 45520) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 07:12:08 compute-0 ceph-mon[75050]: [29/Nov/2025:07:12:08] ENGINE Serving on http://192.168.122.100:8765
Nov 29 07:12:08 compute-0 ceph-mon[75050]: [29/Nov/2025:07:12:08] ENGINE Bus STARTED
Nov 29 07:12:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:08 compute-0 podman[76404]: 2025-11-29 07:12:08.801456057 +0000 UTC m=+0.035957482 container create f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa (image=quay.io/ceph/ceph:v18, name=reverent_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:08 compute-0 systemd[1]: Started libpod-conmon-f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa.scope.
Nov 29 07:12:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeb107c17838dbb436728d7c144bbf942fa8193bc4e089798d5ee1ae406d28b/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeb107c17838dbb436728d7c144bbf942fa8193bc4e089798d5ee1ae406d28b/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeb107c17838dbb436728d7c144bbf942fa8193bc4e089798d5ee1ae406d28b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeb107c17838dbb436728d7c144bbf942fa8193bc4e089798d5ee1ae406d28b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeb107c17838dbb436728d7c144bbf942fa8193bc4e089798d5ee1ae406d28b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:08 compute-0 podman[76404]: 2025-11-29 07:12:08.875521028 +0000 UTC m=+0.110022513 container init f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa (image=quay.io/ceph/ceph:v18, name=reverent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:12:08 compute-0 podman[76404]: 2025-11-29 07:12:08.784189117 +0000 UTC m=+0.018690522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:08 compute-0 podman[76404]: 2025-11-29 07:12:08.888447275 +0000 UTC m=+0.122948660 container start f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa (image=quay.io/ceph/ceph:v18, name=reverent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:08 compute-0 podman[76404]: 2025-11-29 07:12:08.892909891 +0000 UTC m=+0.127411286 container attach f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa (image=quay.io/ceph/ceph:v18, name=reverent_wiles, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:09 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 07:12:09 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:09 compute-0 ceph-mgr[75345]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 07:12:09 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 07:12:09 compute-0 ceph-mgr[75345]: [cephadm INFO root] Set ssh private key
Nov 29 07:12:09 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 07:12:09 compute-0 systemd[1]: libpod-f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa.scope: Deactivated successfully.
Nov 29 07:12:09 compute-0 podman[76404]: 2025-11-29 07:12:09.442079835 +0000 UTC m=+0.676581230 container died f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa (image=quay.io/ceph/ceph:v18, name=reverent_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-caeb107c17838dbb436728d7c144bbf942fa8193bc4e089798d5ee1ae406d28b-merged.mount: Deactivated successfully.
Nov 29 07:12:09 compute-0 podman[76404]: 2025-11-29 07:12:09.490127989 +0000 UTC m=+0.724629374 container remove f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa (image=quay.io/ceph/ceph:v18, name=reverent_wiles, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:09 compute-0 systemd[1]: libpod-conmon-f1fcc6726a2ab7e67c292bef17917c72cce8557b797d8c29dbee14b4b73e25aa.scope: Deactivated successfully.
Nov 29 07:12:09 compute-0 podman[76459]: 2025-11-29 07:12:09.556536093 +0000 UTC m=+0.045484732 container create 77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3 (image=quay.io/ceph/ceph:v18, name=romantic_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:12:09 compute-0 systemd[1]: Started libpod-conmon-77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3.scope.
Nov 29 07:12:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71e6b1b15f06e6bb63c3882de9c7da567cfc67ee2484931a16c834b9e505e97/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71e6b1b15f06e6bb63c3882de9c7da567cfc67ee2484931a16c834b9e505e97/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71e6b1b15f06e6bb63c3882de9c7da567cfc67ee2484931a16c834b9e505e97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71e6b1b15f06e6bb63c3882de9c7da567cfc67ee2484931a16c834b9e505e97/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71e6b1b15f06e6bb63c3882de9c7da567cfc67ee2484931a16c834b9e505e97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:09 compute-0 podman[76459]: 2025-11-29 07:12:09.623479883 +0000 UTC m=+0.112428502 container init 77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3 (image=quay.io/ceph/ceph:v18, name=romantic_nash, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 29 07:12:09 compute-0 podman[76459]: 2025-11-29 07:12:09.534331083 +0000 UTC m=+0.023279732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:09 compute-0 podman[76459]: 2025-11-29 07:12:09.631757438 +0000 UTC m=+0.120706087 container start 77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3 (image=quay.io/ceph/ceph:v18, name=romantic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:09 compute-0 podman[76459]: 2025-11-29 07:12:09.635660299 +0000 UTC m=+0.124608908 container attach 77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3 (image=quay.io/ceph/ceph:v18, name=romantic_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:12:10 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 07:12:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:10 compute-0 ceph-mgr[75345]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 07:12:10 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 29 07:12:10 compute-0 systemd[1]: libpod-77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3.scope: Deactivated successfully.
Nov 29 07:12:10 compute-0 podman[76459]: 2025-11-29 07:12:10.16751051 +0000 UTC m=+0.656459119 container died 77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3 (image=quay.io/ceph/ceph:v18, name=romantic_nash, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a71e6b1b15f06e6bb63c3882de9c7da567cfc67ee2484931a16c834b9e505e97-merged.mount: Deactivated successfully.
Nov 29 07:12:10 compute-0 podman[76459]: 2025-11-29 07:12:10.20802696 +0000 UTC m=+0.696975569 container remove 77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3 (image=quay.io/ceph/ceph:v18, name=romantic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:10 compute-0 systemd[1]: libpod-conmon-77122ab931c7ede83429b32e920cef3ba5d68841dca33ab32b8b435cbe1af9b3.scope: Deactivated successfully.
Nov 29 07:12:10 compute-0 podman[76513]: 2025-11-29 07:12:10.268184537 +0000 UTC m=+0.041001834 container create 718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb (image=quay.io/ceph/ceph:v18, name=determined_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:12:10 compute-0 systemd[1]: Started libpod-conmon-718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb.scope.
Nov 29 07:12:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6e13d87f236b0e39fd9a3a09fa3d1835865be4eea084cff090fd67dc0ec1f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6e13d87f236b0e39fd9a3a09fa3d1835865be4eea084cff090fd67dc0ec1f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6e13d87f236b0e39fd9a3a09fa3d1835865be4eea084cff090fd67dc0ec1f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:10 compute-0 podman[76513]: 2025-11-29 07:12:10.24923691 +0000 UTC m=+0.022054227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:10 compute-0 podman[76513]: 2025-11-29 07:12:10.34935955 +0000 UTC m=+0.122176847 container init 718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb (image=quay.io/ceph/ceph:v18, name=determined_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:12:10 compute-0 podman[76513]: 2025-11-29 07:12:10.358631744 +0000 UTC m=+0.131449041 container start 718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb (image=quay.io/ceph/ceph:v18, name=determined_saha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:12:10 compute-0 podman[76513]: 2025-11-29 07:12:10.363646146 +0000 UTC m=+0.136463443 container attach 718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb (image=quay.io/ceph/ceph:v18, name=determined_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:12:10 compute-0 ceph-mon[75050]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:10 compute-0 ceph-mon[75050]: Set ssh ssh_user
Nov 29 07:12:10 compute-0 ceph-mon[75050]: Set ssh ssh_config
Nov 29 07:12:10 compute-0 ceph-mon[75050]: ssh user set to ceph-admin. sudo will be used
Nov 29 07:12:10 compute-0 ceph-mon[75050]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:10 compute-0 ceph-mon[75050]: Set ssh ssh_identity_key
Nov 29 07:12:10 compute-0 ceph-mon[75050]: Set ssh private key
Nov 29 07:12:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:10 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:10 compute-0 determined_saha[76530]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZXwGI0Vo3NRzS4vOzdtj0F3Q00PEt1iDkzWmvz+60ClaDFQ8Ttph8jvhfAPZR2IZcSJ+6WYqwGftfNQ8K08XAcEqECDS6Pf9T3+VkNDdmBOvr6OoHOfwLKitGWrUfdP4+iCHUXeIVHz5MiLBFmu3LuxtJTdOBmSiLzmRZ0wWidwqe9uwo7LbBMnIobg2FQKRrY6QNs9i5PGvyKlwOuyQkVXX+W9/qzNDD9pQqtONL32wa3UExYRVFVWeCzIV7vqpDCt4yKDWobXoN8i+BIV+pubOFTBc6H+8290Bq2+eiy75/kVFbuGwpIMjSIP7fg5RsKHcRXlpkbuXDwiaggDyUg4uVs+TVneQsilDx4AC7cllWT8O7UOFl1XZb0l0q0LEPPULMMzr//BTYnZoTjL7FGyAS0OqamHfdt6yOxToFP8pNah49NcGYC8b0J+Itzocvo9CQ1AhlpaDh52AY52jImBuWQz/y8mrXNfu33fjhNwrhtKT9oSEqx3xvnxxBqm8= zuul@controller
Nov 29 07:12:10 compute-0 systemd[1]: libpod-718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb.scope: Deactivated successfully.
Nov 29 07:12:10 compute-0 podman[76513]: 2025-11-29 07:12:10.932423116 +0000 UTC m=+0.705240433 container died 718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb (image=quay.io/ceph/ceph:v18, name=determined_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb6e13d87f236b0e39fd9a3a09fa3d1835865be4eea084cff090fd67dc0ec1f7-merged.mount: Deactivated successfully.
Nov 29 07:12:10 compute-0 podman[76513]: 2025-11-29 07:12:10.977021761 +0000 UTC m=+0.749839048 container remove 718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb (image=quay.io/ceph/ceph:v18, name=determined_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:10 compute-0 systemd[1]: libpod-conmon-718ff9c77c99e7bb018c329b8ef7cd0ffaa3d19500e7586a2bc57525195163eb.scope: Deactivated successfully.
Nov 29 07:12:11 compute-0 podman[76571]: 2025-11-29 07:12:11.049061826 +0000 UTC m=+0.052196963 container create c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0 (image=quay.io/ceph/ceph:v18, name=naughty_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:11 compute-0 systemd[1]: Started libpod-conmon-c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0.scope.
Nov 29 07:12:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:11 compute-0 podman[76571]: 2025-11-29 07:12:11.018656303 +0000 UTC m=+0.021791520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea12e51de4392605c6569752d74210a3642fd788bd5fec3b587cec9d1dadf87/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea12e51de4392605c6569752d74210a3642fd788bd5fec3b587cec9d1dadf87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ea12e51de4392605c6569752d74210a3642fd788bd5fec3b587cec9d1dadf87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:11 compute-0 podman[76571]: 2025-11-29 07:12:11.133630895 +0000 UTC m=+0.136766032 container init c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0 (image=quay.io/ceph/ceph:v18, name=naughty_cannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:12:11 compute-0 podman[76571]: 2025-11-29 07:12:11.145443801 +0000 UTC m=+0.148578948 container start c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0 (image=quay.io/ceph/ceph:v18, name=naughty_cannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:12:11 compute-0 podman[76571]: 2025-11-29 07:12:11.149731112 +0000 UTC m=+0.152866239 container attach c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0 (image=quay.io/ceph/ceph:v18, name=naughty_cannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:12:11 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:11 compute-0 ceph-mon[75050]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:11 compute-0 ceph-mon[75050]: Set ssh ssh_identity_pub
Nov 29 07:12:11 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:11 compute-0 sshd-session[76613]: Accepted publickey for ceph-admin from 192.168.122.100 port 38538 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:11 compute-0 systemd-logind[807]: New session 20 of user ceph-admin.
Nov 29 07:12:11 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 07:12:11 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 07:12:12 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 07:12:12 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 07:12:12 compute-0 systemd[76617]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:12 compute-0 sshd-session[76620]: Accepted publickey for ceph-admin from 192.168.122.100 port 38544 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:12 compute-0 systemd-logind[807]: New session 22 of user ceph-admin.
Nov 29 07:12:12 compute-0 systemd[76617]: Queued start job for default target Main User Target.
Nov 29 07:12:12 compute-0 systemd[76617]: Created slice User Application Slice.
Nov 29 07:12:12 compute-0 systemd[76617]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 07:12:12 compute-0 systemd[76617]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:12:12 compute-0 systemd[76617]: Reached target Paths.
Nov 29 07:12:12 compute-0 systemd[76617]: Reached target Timers.
Nov 29 07:12:12 compute-0 systemd[76617]: Starting D-Bus User Message Bus Socket...
Nov 29 07:12:12 compute-0 systemd[76617]: Starting Create User's Volatile Files and Directories...
Nov 29 07:12:12 compute-0 systemd[76617]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:12:12 compute-0 systemd[76617]: Reached target Sockets.
Nov 29 07:12:12 compute-0 systemd[76617]: Finished Create User's Volatile Files and Directories.
Nov 29 07:12:12 compute-0 systemd[76617]: Reached target Basic System.
Nov 29 07:12:12 compute-0 systemd[76617]: Reached target Main User Target.
Nov 29 07:12:12 compute-0 systemd[76617]: Startup finished in 169ms.
Nov 29 07:12:12 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 07:12:12 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Nov 29 07:12:12 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Nov 29 07:12:12 compute-0 sshd-session[76613]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:12 compute-0 sshd-session[76620]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:12 compute-0 sudo[76637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:12 compute-0 sudo[76637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:12 compute-0 sudo[76637]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052982 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:12 compute-0 sudo[76662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:12 compute-0 sudo[76662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:12 compute-0 sudo[76662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:12 compute-0 ceph-mon[75050]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:12 compute-0 sshd-session[76687]: Accepted publickey for ceph-admin from 192.168.122.100 port 38552 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:12 compute-0 systemd-logind[807]: New session 23 of user ceph-admin.
Nov 29 07:12:12 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 07:12:12 compute-0 sshd-session[76687]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:12 compute-0 sudo[76691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:12 compute-0 sudo[76691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:12 compute-0 sudo[76691]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:12 compute-0 sudo[76716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 07:12:12 compute-0 sudo[76716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:12 compute-0 sudo[76716]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:13 compute-0 sshd-session[76741]: Accepted publickey for ceph-admin from 192.168.122.100 port 38556 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:13 compute-0 systemd-logind[807]: New session 24 of user ceph-admin.
Nov 29 07:12:13 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 07:12:13 compute-0 sshd-session[76741]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:13 compute-0 sudo[76745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:13 compute-0 sudo[76745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:13 compute-0 sudo[76745]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:13 compute-0 sudo[76770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 07:12:13 compute-0 sudo[76770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:13 compute-0 sudo[76770]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:13 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 07:12:13 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 07:12:13 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:13 compute-0 ceph-mon[75050]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:13 compute-0 sshd-session[76795]: Accepted publickey for ceph-admin from 192.168.122.100 port 38564 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:13 compute-0 systemd-logind[807]: New session 25 of user ceph-admin.
Nov 29 07:12:13 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 07:12:13 compute-0 sshd-session[76795]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:13 compute-0 sudo[76799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:13 compute-0 sudo[76799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:13 compute-0 sudo[76799]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:13 compute-0 sudo[76824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:13 compute-0 sudo[76824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:13 compute-0 sudo[76824]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:14 compute-0 sshd-session[76849]: Accepted publickey for ceph-admin from 192.168.122.100 port 38578 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:14 compute-0 systemd-logind[807]: New session 26 of user ceph-admin.
Nov 29 07:12:14 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 07:12:14 compute-0 sshd-session[76849]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:14 compute-0 sudo[76853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:14 compute-0 sudo[76853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:14 compute-0 sudo[76853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:14 compute-0 sudo[76878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:14 compute-0 sudo[76878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:14 compute-0 sudo[76878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:14 compute-0 ceph-mon[75050]: Deploying cephadm binary to compute-0
Nov 29 07:12:14 compute-0 sshd-session[76903]: Accepted publickey for ceph-admin from 192.168.122.100 port 38594 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:14 compute-0 systemd-logind[807]: New session 27 of user ceph-admin.
Nov 29 07:12:14 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 07:12:14 compute-0 sshd-session[76903]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:14 compute-0 sudo[76907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:14 compute-0 sudo[76907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:14 compute-0 sudo[76907]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:14 compute-0 sudo[76932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 07:12:14 compute-0 sudo[76932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:14 compute-0 sudo[76932]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:14 compute-0 sshd-session[76957]: Accepted publickey for ceph-admin from 192.168.122.100 port 38600 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:15 compute-0 systemd-logind[807]: New session 28 of user ceph-admin.
Nov 29 07:12:15 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 07:12:15 compute-0 sshd-session[76957]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:15 compute-0 sudo[76961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:15 compute-0 sudo[76961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:15 compute-0 sudo[76961]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:15 compute-0 sudo[76986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:15 compute-0 sudo[76986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:15 compute-0 sudo[76986]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:15 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:15 compute-0 sshd-session[77011]: Accepted publickey for ceph-admin from 192.168.122.100 port 38612 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:15 compute-0 systemd-logind[807]: New session 29 of user ceph-admin.
Nov 29 07:12:15 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 07:12:15 compute-0 sshd-session[77011]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:15 compute-0 sudo[77015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:15 compute-0 sudo[77015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:15 compute-0 sudo[77015]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:15 compute-0 sudo[77040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 07:12:15 compute-0 sudo[77040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:15 compute-0 sudo[77040]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:15 compute-0 sshd-session[77065]: Accepted publickey for ceph-admin from 192.168.122.100 port 38622 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:15 compute-0 systemd-logind[807]: New session 30 of user ceph-admin.
Nov 29 07:12:15 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 07:12:15 compute-0 sshd-session[77065]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:16 compute-0 sshd-session[77092]: Accepted publickey for ceph-admin from 192.168.122.100 port 38624 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:16 compute-0 systemd-logind[807]: New session 31 of user ceph-admin.
Nov 29 07:12:16 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 07:12:16 compute-0 sshd-session[77092]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:16 compute-0 sudo[77096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:16 compute-0 sudo[77096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:16 compute-0 sudo[77096]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:16 compute-0 sudo[77121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 07:12:16 compute-0 sudo[77121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:16 compute-0 sudo[77121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:16 compute-0 sshd-session[77146]: Accepted publickey for ceph-admin from 192.168.122.100 port 38630 ssh2: RSA SHA256:tYpU1O9UYah9JHmMueLWMurdrusBcIIdicRTOId3sIE
Nov 29 07:12:16 compute-0 systemd-logind[807]: New session 32 of user ceph-admin.
Nov 29 07:12:16 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 07:12:16 compute-0 sshd-session[77146]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 07:12:17 compute-0 sudo[77150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:17 compute-0 sudo[77150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-0 sudo[77150]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-0 sudo[77175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 07:12:17 compute-0 sudo[77175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:17 compute-0 sudo[77175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:12:17 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:17 compute-0 ceph-mgr[75345]: [cephadm INFO root] Added host compute-0
Nov 29 07:12:17 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 07:12:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:12:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:17 compute-0 naughty_cannon[76587]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 07:12:17 compute-0 systemd[1]: libpod-c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0.scope: Deactivated successfully.
Nov 29 07:12:17 compute-0 podman[76571]: 2025-11-29 07:12:17.44391568 +0000 UTC m=+6.447050817 container died c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0 (image=quay.io/ceph/ceph:v18, name=naughty_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ea12e51de4392605c6569752d74210a3642fd788bd5fec3b587cec9d1dadf87-merged.mount: Deactivated successfully.
Nov 29 07:12:17 compute-0 sudo[77220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:17 compute-0 sudo[77220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-0 sudo[77220]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-0 podman[76571]: 2025-11-29 07:12:17.503388888 +0000 UTC m=+6.506524025 container remove c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0 (image=quay.io/ceph/ceph:v18, name=naughty_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:12:17 compute-0 systemd[1]: libpod-conmon-c3f9c12ebd00e4a256ce0c47ed535aa2376f6a8884005b7142c5a3e6989315d0.scope: Deactivated successfully.
Nov 29 07:12:17 compute-0 sudo[77257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:17 compute-0 sudo[77257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-0 sudo[77257]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-0 podman[77262]: 2025-11-29 07:12:17.567552008 +0000 UTC m=+0.042046104 container create c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120 (image=quay.io/ceph/ceph:v18, name=nostalgic_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:17 compute-0 systemd[1]: Started libpod-conmon-c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120.scope.
Nov 29 07:12:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c247560175db967cbf0552d0762e8d04093d4722e1ecea7dbfbb683c167aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c247560175db967cbf0552d0762e8d04093d4722e1ecea7dbfbb683c167aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c247560175db967cbf0552d0762e8d04093d4722e1ecea7dbfbb683c167aa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:17 compute-0 sudo[77296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:17 compute-0 sudo[77296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:17 compute-0 sudo[77296]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:17 compute-0 podman[77262]: 2025-11-29 07:12:17.549661061 +0000 UTC m=+0.024155167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:17 compute-0 podman[77262]: 2025-11-29 07:12:17.651334776 +0000 UTC m=+0.125828892 container init c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120 (image=quay.io/ceph/ceph:v18, name=nostalgic_agnesi, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:12:17 compute-0 podman[77262]: 2025-11-29 07:12:17.662242866 +0000 UTC m=+0.136736962 container start c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120 (image=quay.io/ceph/ceph:v18, name=nostalgic_agnesi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:17 compute-0 podman[77262]: 2025-11-29 07:12:17.666193087 +0000 UTC m=+0.140687203 container attach c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120 (image=quay.io/ceph/ceph:v18, name=nostalgic_agnesi, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:17 compute-0 sudo[77327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 29 07:12:17 compute-0 sudo[77327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:18 compute-0 podman[77380]: 2025-11-29 07:12:17.999953678 +0000 UTC m=+0.053087467 container create 289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703 (image=quay.io/ceph/ceph:v18, name=angry_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:18 compute-0 systemd[1]: Started libpod-conmon-289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703.scope.
Nov 29 07:12:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:18 compute-0 podman[77380]: 2025-11-29 07:12:17.978893381 +0000 UTC m=+0.032027130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:18 compute-0 podman[77380]: 2025-11-29 07:12:18.075358138 +0000 UTC m=+0.128491917 container init 289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703 (image=quay.io/ceph/ceph:v18, name=angry_hawking, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:18 compute-0 podman[77380]: 2025-11-29 07:12:18.079898517 +0000 UTC m=+0.133032266 container start 289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703 (image=quay.io/ceph/ceph:v18, name=angry_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:18 compute-0 podman[77380]: 2025-11-29 07:12:18.083341805 +0000 UTC m=+0.136475574 container attach 289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703 (image=quay.io/ceph/ceph:v18, name=angry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:12:18 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:18 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 07:12:18 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 07:12:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:12:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:18 compute-0 nostalgic_agnesi[77320]: Scheduled mon update...
Nov 29 07:12:18 compute-0 systemd[1]: libpod-c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120.scope: Deactivated successfully.
Nov 29 07:12:18 compute-0 conmon[77320]: conmon c81c9686e5d86584d7c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120.scope/container/memory.events
Nov 29 07:12:18 compute-0 podman[77262]: 2025-11-29 07:12:18.255369357 +0000 UTC m=+0.729863493 container died c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120 (image=quay.io/ceph/ceph:v18, name=nostalgic_agnesi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a77c247560175db967cbf0552d0762e8d04093d4722e1ecea7dbfbb683c167aa-merged.mount: Deactivated successfully.
Nov 29 07:12:18 compute-0 podman[77262]: 2025-11-29 07:12:18.319199047 +0000 UTC m=+0.793693173 container remove c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120 (image=quay.io/ceph/ceph:v18, name=nostalgic_agnesi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:18 compute-0 systemd[1]: libpod-conmon-c81c9686e5d86584d7c96803e1074d6e9379311c933d602b1e53721fe6fe9120.scope: Deactivated successfully.
Nov 29 07:12:18 compute-0 angry_hawking[77415]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 07:12:18 compute-0 systemd[1]: libpod-289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703.scope: Deactivated successfully.
Nov 29 07:12:18 compute-0 podman[77380]: 2025-11-29 07:12:18.371101441 +0000 UTC m=+0.424235230 container died 289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703 (image=quay.io/ceph/ceph:v18, name=angry_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:18 compute-0 podman[77433]: 2025-11-29 07:12:18.396150011 +0000 UTC m=+0.050713509 container create 2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6aaba4683c574a44d8128f0bc31506e6a5bd13f748439d3d9ec436ab1bfffe2-merged.mount: Deactivated successfully.
Nov 29 07:12:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:18 compute-0 ceph-mon[75050]: Added host compute-0
Nov 29 07:12:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:12:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:18 compute-0 podman[77380]: 2025-11-29 07:12:18.427208243 +0000 UTC m=+0.480341992 container remove 289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703 (image=quay.io/ceph/ceph:v18, name=angry_hawking, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:18 compute-0 systemd[1]: Started libpod-conmon-2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833.scope.
Nov 29 07:12:18 compute-0 systemd[1]: libpod-conmon-289e11aaf440ed48af1c61ab60608555def3f12dac10195c6ab94b0bda63b703.scope: Deactivated successfully.
Nov 29 07:12:18 compute-0 podman[77433]: 2025-11-29 07:12:18.376617977 +0000 UTC m=+0.031181495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:18 compute-0 sudo[77327]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 07:12:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e10a6c30fd8d91d39f1e4e44a8a7c080db05d7c654100fc31a69770f6206441/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e10a6c30fd8d91d39f1e4e44a8a7c080db05d7c654100fc31a69770f6206441/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e10a6c30fd8d91d39f1e4e44a8a7c080db05d7c654100fc31a69770f6206441/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:18 compute-0 podman[77433]: 2025-11-29 07:12:18.502900021 +0000 UTC m=+0.157463539 container init 2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:12:18 compute-0 podman[77433]: 2025-11-29 07:12:18.514356205 +0000 UTC m=+0.168919723 container start 2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:18 compute-0 podman[77433]: 2025-11-29 07:12:18.520206771 +0000 UTC m=+0.174770299 container attach 2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:18 compute-0 sudo[77467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:18 compute-0 sudo[77467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:18 compute-0 sudo[77467]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:18 compute-0 sudo[77493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:18 compute-0 sudo[77493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:18 compute-0 sudo[77493]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:18 compute-0 sudo[77518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:18 compute-0 sudo[77518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:18 compute-0 sudo[77518]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:18 compute-0 sudo[77543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:12:18 compute-0 sudo[77543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:19 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:19 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 07:12:19 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 07:12:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:12:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:19 compute-0 unruffled_driscoll[77463]: Scheduled mgr update...
Nov 29 07:12:19 compute-0 systemd[1]: libpod-2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833.scope: Deactivated successfully.
Nov 29 07:12:19 compute-0 podman[77433]: 2025-11-29 07:12:19.069886959 +0000 UTC m=+0.724450447 container died 2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e10a6c30fd8d91d39f1e4e44a8a7c080db05d7c654100fc31a69770f6206441-merged.mount: Deactivated successfully.
Nov 29 07:12:19 compute-0 podman[77433]: 2025-11-29 07:12:19.12241512 +0000 UTC m=+0.776978638 container remove 2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833 (image=quay.io/ceph/ceph:v18, name=unruffled_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:12:19 compute-0 systemd[1]: libpod-conmon-2e94f1f9d2da7b6e92d9923c2aedf39a7be38b159b59c5200d1da7796a10d833.scope: Deactivated successfully.
Nov 29 07:12:19 compute-0 sudo[77543]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:19 compute-0 podman[77622]: 2025-11-29 07:12:19.187330072 +0000 UTC m=+0.041508909 container create c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9 (image=quay.io/ceph/ceph:v18, name=compassionate_chatelet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:19 compute-0 systemd[1]: Started libpod-conmon-c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9.scope.
Nov 29 07:12:19 compute-0 sudo[77634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:19 compute-0 sudo[77634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:19 compute-0 sudo[77634]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8521978c3790be83148a7c256e9b872d34754ddb3e53ad1af36c51844436e28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8521978c3790be83148a7c256e9b872d34754ddb3e53ad1af36c51844436e28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8521978c3790be83148a7c256e9b872d34754ddb3e53ad1af36c51844436e28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:19 compute-0 podman[77622]: 2025-11-29 07:12:19.168458116 +0000 UTC m=+0.022637003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:19 compute-0 sudo[77666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:19 compute-0 sudo[77666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:19 compute-0 sudo[77666]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:19 compute-0 podman[77622]: 2025-11-29 07:12:19.391546427 +0000 UTC m=+0.245725344 container init c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9 (image=quay.io/ceph/ceph:v18, name=compassionate_chatelet, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:12:19 compute-0 podman[77622]: 2025-11-29 07:12:19.402657902 +0000 UTC m=+0.256836749 container start c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9 (image=quay.io/ceph/ceph:v18, name=compassionate_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:19 compute-0 podman[77622]: 2025-11-29 07:12:19.406833691 +0000 UTC m=+0.261012578 container attach c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9 (image=quay.io/ceph/ceph:v18, name=compassionate_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:12:19 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:19 compute-0 sudo[77691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:19 compute-0 sudo[77691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:19 compute-0 sudo[77691]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:19 compute-0 ceph-mon[75050]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:19 compute-0 ceph-mon[75050]: Saving service mon spec with placement count:5
Nov 29 07:12:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:19 compute-0 sudo[77718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:12:19 compute-0 sudo[77718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:19 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:19 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 07:12:19 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 07:12:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 07:12:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:19 compute-0 compassionate_chatelet[77661]: Scheduled crash update...
Nov 29 07:12:19 compute-0 systemd[1]: libpod-c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9.scope: Deactivated successfully.
Nov 29 07:12:19 compute-0 podman[77622]: 2025-11-29 07:12:19.936020447 +0000 UTC m=+0.790199294 container died c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9 (image=quay.io/ceph/ceph:v18, name=compassionate_chatelet, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8521978c3790be83148a7c256e9b872d34754ddb3e53ad1af36c51844436e28-merged.mount: Deactivated successfully.
Nov 29 07:12:19 compute-0 podman[77622]: 2025-11-29 07:12:19.996623387 +0000 UTC m=+0.850802234 container remove c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9 (image=quay.io/ceph/ceph:v18, name=compassionate_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:12:20 compute-0 systemd[1]: libpod-conmon-c21a0921480c108d9cfe2af7aa4b9fc851fb7d4b2f6a4373e9e742f22f5c04e9.scope: Deactivated successfully.
Nov 29 07:12:20 compute-0 podman[77833]: 2025-11-29 07:12:20.030122208 +0000 UTC m=+0.081010930 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:12:20 compute-0 podman[77857]: 2025-11-29 07:12:20.082036751 +0000 UTC m=+0.054455797 container create 39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50 (image=quay.io/ceph/ceph:v18, name=flamboyant_aryabhata, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:12:20 compute-0 systemd[1]: Started libpod-conmon-39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50.scope.
Nov 29 07:12:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4262c9949988cddda8ec8bbeb54763487416870317c45365f96c3817f704f7a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4262c9949988cddda8ec8bbeb54763487416870317c45365f96c3817f704f7a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4262c9949988cddda8ec8bbeb54763487416870317c45365f96c3817f704f7a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:20 compute-0 podman[77857]: 2025-11-29 07:12:20.060622283 +0000 UTC m=+0.033041359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:20 compute-0 podman[77857]: 2025-11-29 07:12:20.159473858 +0000 UTC m=+0.131892924 container init 39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50 (image=quay.io/ceph/ceph:v18, name=flamboyant_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:20 compute-0 podman[77857]: 2025-11-29 07:12:20.164357227 +0000 UTC m=+0.136776273 container start 39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50 (image=quay.io/ceph/ceph:v18, name=flamboyant_aryabhata, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:12:20 compute-0 podman[77857]: 2025-11-29 07:12:20.167920418 +0000 UTC m=+0.140339544 container attach 39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50 (image=quay.io/ceph/ceph:v18, name=flamboyant_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:12:20 compute-0 podman[77833]: 2025-11-29 07:12:20.342858602 +0000 UTC m=+0.393747354 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:20 compute-0 ceph-mon[75050]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:20 compute-0 ceph-mon[75050]: Saving service mgr spec with placement count:2
Nov 29 07:12:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:20 compute-0 sudo[77718]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:20 compute-0 sudo[77932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:20 compute-0 sudo[77932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-0 sudo[77932]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-0 sudo[77957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:20 compute-0 sudo[77957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-0 sudo[77957]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 07:12:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/86952065' entity='client.admin' 
Nov 29 07:12:20 compute-0 systemd[1]: libpod-39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50.scope: Deactivated successfully.
Nov 29 07:12:20 compute-0 podman[77857]: 2025-11-29 07:12:20.762598873 +0000 UTC m=+0.735017959 container died 39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50 (image=quay.io/ceph/ceph:v18, name=flamboyant_aryabhata, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4262c9949988cddda8ec8bbeb54763487416870317c45365f96c3817f704f7a8-merged.mount: Deactivated successfully.
Nov 29 07:12:20 compute-0 sudo[77982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:20 compute-0 sudo[77982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-0 sudo[77982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:20 compute-0 podman[77857]: 2025-11-29 07:12:20.821443923 +0000 UTC m=+0.793862969 container remove 39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50 (image=quay.io/ceph/ceph:v18, name=flamboyant_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:20 compute-0 systemd[1]: libpod-conmon-39c675803d65fb5b20876af6975d8bc67ecdb0f5a1e6045e1fc63234393b6f50.scope: Deactivated successfully.
Nov 29 07:12:20 compute-0 sudo[78022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:12:20 compute-0 sudo[78022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:20 compute-0 podman[78025]: 2025-11-29 07:12:20.896108251 +0000 UTC m=+0.050894655 container create d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871 (image=quay.io/ceph/ceph:v18, name=festive_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 29 07:12:20 compute-0 systemd[1]: Started libpod-conmon-d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871.scope.
Nov 29 07:12:20 compute-0 podman[78025]: 2025-11-29 07:12:20.872779399 +0000 UTC m=+0.027565783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20d83552d303f54b1f3e32080db8303a6d451a2b2e067a4bdf1c0a379b7e290/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20d83552d303f54b1f3e32080db8303a6d451a2b2e067a4bdf1c0a379b7e290/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b20d83552d303f54b1f3e32080db8303a6d451a2b2e067a4bdf1c0a379b7e290/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:21 compute-0 podman[78025]: 2025-11-29 07:12:21.003118367 +0000 UTC m=+0.157904751 container init d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871 (image=quay.io/ceph/ceph:v18, name=festive_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:12:21 compute-0 podman[78025]: 2025-11-29 07:12:21.012011811 +0000 UTC m=+0.166798205 container start d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871 (image=quay.io/ceph/ceph:v18, name=festive_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:12:21 compute-0 podman[78025]: 2025-11-29 07:12:21.016168698 +0000 UTC m=+0.170955062 container attach d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871 (image=quay.io/ceph/ceph:v18, name=festive_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:21 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78078 (sysctl)
Nov 29 07:12:21 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 07:12:21 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 07:12:21 compute-0 sudo[78022]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:21 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:21 compute-0 sudo[78119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:21 compute-0 sudo[78119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:21 compute-0 sudo[78119]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:21 compute-0 sudo[78144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:21 compute-0 sudo[78144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:21 compute-0 sudo[78144]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:21 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 07:12:21 compute-0 sudo[78169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:21 compute-0 sudo[78169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:21 compute-0 sudo[78169]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:21 compute-0 sudo[78195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 07:12:21 compute-0 sudo[78195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:21 compute-0 ceph-mon[75050]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:21 compute-0 ceph-mon[75050]: Saving service crash spec with placement *
Nov 29 07:12:21 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/86952065' entity='client.admin' 
Nov 29 07:12:21 compute-0 sudo[78195]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:21 compute-0 systemd[1]: libpod-d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871.scope: Deactivated successfully.
Nov 29 07:12:21 compute-0 podman[78025]: 2025-11-29 07:12:21.926560122 +0000 UTC m=+1.081346506 container died d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871 (image=quay.io/ceph/ceph:v18, name=festive_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b20d83552d303f54b1f3e32080db8303a6d451a2b2e067a4bdf1c0a379b7e290-merged.mount: Deactivated successfully.
Nov 29 07:12:22 compute-0 podman[78025]: 2025-11-29 07:12:22.070502336 +0000 UTC m=+1.225288700 container remove d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871 (image=quay.io/ceph/ceph:v18, name=festive_yalow, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:12:22 compute-0 sudo[78250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:22 compute-0 sudo[78250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-0 systemd[1]: libpod-conmon-d24c29975f6a9649bf2d7ddd7cfbfd0ad9cee5a1e3bcd5267a31929de61b5871.scope: Deactivated successfully.
Nov 29 07:12:22 compute-0 sudo[78250]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:22 compute-0 sudo[78278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:22 compute-0 sudo[78278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-0 sudo[78278]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:22 compute-0 podman[78276]: 2025-11-29 07:12:22.159363668 +0000 UTC m=+0.056488764 container create a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9 (image=quay.io/ceph/ceph:v18, name=sweet_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:22 compute-0 systemd[1]: Started libpod-conmon-a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9.scope.
Nov 29 07:12:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:22 compute-0 sudo[78315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:22 compute-0 sudo[78315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-0 podman[78276]: 2025-11-29 07:12:22.141255415 +0000 UTC m=+0.038380291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:22 compute-0 sudo[78315]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17f66416687f928ded1e9008193dcf7c05b5ea58d31d88a0b88649265b319a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17f66416687f928ded1e9008193dcf7c05b5ea58d31d88a0b88649265b319a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17f66416687f928ded1e9008193dcf7c05b5ea58d31d88a0b88649265b319a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:22 compute-0 podman[78276]: 2025-11-29 07:12:22.271019366 +0000 UTC m=+0.168144242 container init a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9 (image=quay.io/ceph/ceph:v18, name=sweet_mcnulty, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:12:22 compute-0 podman[78276]: 2025-11-29 07:12:22.284576101 +0000 UTC m=+0.181700957 container start a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9 (image=quay.io/ceph/ceph:v18, name=sweet_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:22 compute-0 podman[78276]: 2025-11-29 07:12:22.289872702 +0000 UTC m=+0.186997578 container attach a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9 (image=quay.io/ceph/ceph:v18, name=sweet_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:12:22 compute-0 sudo[78345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:12:22 compute-0 sudo[78345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:22 compute-0 podman[78414]: 2025-11-29 07:12:22.707796421 +0000 UTC m=+0.083794769 container create 31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:22 compute-0 podman[78414]: 2025-11-29 07:12:22.647387626 +0000 UTC m=+0.023386034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:22 compute-0 systemd[1]: Started libpod-conmon-31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec.scope.
Nov 29 07:12:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:22 compute-0 podman[78414]: 2025-11-29 07:12:22.863926061 +0000 UTC m=+0.239924419 container init 31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:12:22 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:12:22 compute-0 podman[78414]: 2025-11-29 07:12:22.873807471 +0000 UTC m=+0.249805819 container start 31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:22 compute-0 ceph-mgr[75345]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 07:12:22 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 07:12:22 compute-0 sweet_mcnulty[78338]: Added label _admin to host compute-0
Nov 29 07:12:22 compute-0 podman[78414]: 2025-11-29 07:12:22.879849893 +0000 UTC m=+0.255848241 container attach 31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:22 compute-0 wonderful_goodall[78446]: 167 167
Nov 29 07:12:22 compute-0 systemd[1]: libpod-31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec.scope: Deactivated successfully.
Nov 29 07:12:22 compute-0 podman[78414]: 2025-11-29 07:12:22.881611133 +0000 UTC m=+0.257609491 container died 31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:22 compute-0 systemd[1]: libpod-a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9.scope: Deactivated successfully.
Nov 29 07:12:22 compute-0 podman[78276]: 2025-11-29 07:12:22.900197571 +0000 UTC m=+0.797322457 container died a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9 (image=quay.io/ceph/ceph:v18, name=sweet_mcnulty, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:22 compute-0 ceph-mon[75050]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-425f65883e6f43cca33082256abae788874cb84ebbbea0065f542d090548b0f9-merged.mount: Deactivated successfully.
Nov 29 07:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d17f66416687f928ded1e9008193dcf7c05b5ea58d31d88a0b88649265b319a8-merged.mount: Deactivated successfully.
Nov 29 07:12:22 compute-0 podman[78414]: 2025-11-29 07:12:22.948622885 +0000 UTC m=+0.324621213 container remove 31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goodall, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:12:22 compute-0 podman[78276]: 2025-11-29 07:12:22.968651753 +0000 UTC m=+0.865776599 container remove a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9 (image=quay.io/ceph/ceph:v18, name=sweet_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:12:22 compute-0 systemd[1]: libpod-conmon-31c685816e792abd138b12b4659afc5c86e07e7cf9a971c0985655eefc6485ec.scope: Deactivated successfully.
Nov 29 07:12:22 compute-0 systemd[1]: libpod-conmon-a1212f0dc6aafb8d7712a4545b63db5bb7c8257fdb86afd530449e5929f092a9.scope: Deactivated successfully.
Nov 29 07:12:23 compute-0 podman[78476]: 2025-11-29 07:12:23.021832622 +0000 UTC m=+0.026534484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:23 compute-0 podman[78476]: 2025-11-29 07:12:23.300900302 +0000 UTC m=+0.305602084 container create a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5 (image=quay.io/ceph/ceph:v18, name=modest_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:12:23 compute-0 systemd[1]: Started libpod-conmon-a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5.scope.
Nov 29 07:12:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c588b0b35b6a4635fd89fc9abbcad88a5dedb15d77205a2227dc84e2d7c45b96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c588b0b35b6a4635fd89fc9abbcad88a5dedb15d77205a2227dc84e2d7c45b96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c588b0b35b6a4635fd89fc9abbcad88a5dedb15d77205a2227dc84e2d7c45b96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:23 compute-0 ceph-mgr[75345]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 07:12:23 compute-0 podman[78476]: 2025-11-29 07:12:23.463785717 +0000 UTC m=+0.468487569 container init a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5 (image=quay.io/ceph/ceph:v18, name=modest_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:23 compute-0 podman[78476]: 2025-11-29 07:12:23.469128563 +0000 UTC m=+0.473830365 container start a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5 (image=quay.io/ceph/ceph:v18, name=modest_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:23 compute-0 podman[78476]: 2025-11-29 07:12:23.473515714 +0000 UTC m=+0.478217496 container attach a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5 (image=quay.io/ceph/ceph:v18, name=modest_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:12:23 compute-0 ceph-mon[75050]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:23 compute-0 ceph-mon[75050]: Added label _admin to host compute-0
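The two orchestrator calls above are cephadm's admin-host convention: the _admin label marks compute-0 as a host that should carry /etc/ceph/ceph.conf and the admin keyring, and the earlier 'orch client-keyring set client.admin label:_admin' tells the mgr to keep that keyring in sync on every labeled host. A minimal sketch of the same two calls from Python, assuming the ceph CLI and an admin keyring are already usable on the host (the ceph() helper is ours, not part of any library):

    import subprocess

    def ceph(*args):
        # Run one ceph CLI command and return its stdout; assumes the ceph
        # binary and /etc/ceph/ceph.client.admin.keyring are present, as on
        # compute-0 above.
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    # The same two orchestrator calls recorded in the audit log:
    ceph("orch", "host", "label", "add", "compute-0", "_admin")
    ceph("orch", "client-keyring", "set", "client.admin", "label:_admin")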
Nov 29 07:12:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 07:12:24 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3140986555' entity='client.admin' 
Nov 29 07:12:24 compute-0 systemd[1]: libpod-a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5.scope: Deactivated successfully.
Nov 29 07:12:24 compute-0 podman[78476]: 2025-11-29 07:12:24.174383421 +0000 UTC m=+1.179085223 container died a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5 (image=quay.io/ceph/ceph:v18, name=modest_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c588b0b35b6a4635fd89fc9abbcad88a5dedb15d77205a2227dc84e2d7c45b96-merged.mount: Deactivated successfully.
Nov 29 07:12:24 compute-0 podman[78476]: 2025-11-29 07:12:24.306062324 +0000 UTC m=+1.310764106 container remove a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5 (image=quay.io/ceph/ceph:v18, name=modest_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:12:24 compute-0 systemd[1]: libpod-conmon-a14537350c8d1cd795f2c6588dabfcbac98ce0a05e007596648f4c40f21a35e5.scope: Deactivated successfully.
Nov 29 07:12:24 compute-0 podman[78532]: 2025-11-29 07:12:24.375888784 +0000 UTC m=+0.053636177 container create 06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c (image=quay.io/ceph/ceph:v18, name=thirsty_wilbur, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:24 compute-0 systemd[1]: Started libpod-conmon-06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c.scope.
Nov 29 07:12:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3730e8be307ba12dd2674526cd03a4625889f235f518987f4acefc88ddd47be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3730e8be307ba12dd2674526cd03a4625889f235f518987f4acefc88ddd47be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3730e8be307ba12dd2674526cd03a4625889f235f518987f4acefc88ddd47be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:24 compute-0 podman[78532]: 2025-11-29 07:12:24.348332127 +0000 UTC m=+0.026079560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:24 compute-0 podman[78532]: 2025-11-29 07:12:24.560418816 +0000 UTC m=+0.238166219 container init 06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c (image=quay.io/ceph/ceph:v18, name=thirsty_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:12:24 compute-0 podman[78532]: 2025-11-29 07:12:24.570389395 +0000 UTC m=+0.248136828 container start 06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c (image=quay.io/ceph/ceph:v18, name=thirsty_wilbur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:12:24 compute-0 podman[78532]: 2025-11-29 07:12:24.575042408 +0000 UTC m=+0.252789811 container attach 06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c (image=quay.io/ceph/ceph:v18, name=thirsty_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:12:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3140986555' entity='client.admin' 
Nov 29 07:12:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 07:12:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2677214572' entity='client.admin' 
Nov 29 07:12:25 compute-0 thirsty_wilbur[78549]: set mgr/dashboard/cluster/status
Nov 29 07:12:25 compute-0 systemd[1]: libpod-06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c.scope: Deactivated successfully.
Nov 29 07:12:25 compute-0 podman[78532]: 2025-11-29 07:12:25.231054034 +0000 UTC m=+0.908801487 container died 06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c (image=quay.io/ceph/ceph:v18, name=thirsty_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3730e8be307ba12dd2674526cd03a4625889f235f518987f4acefc88ddd47be-merged.mount: Deactivated successfully.
Nov 29 07:12:25 compute-0 podman[78532]: 2025-11-29 07:12:25.324226157 +0000 UTC m=+1.001973550 container remove 06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c (image=quay.io/ceph/ceph:v18, name=thirsty_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:12:25 compute-0 systemd[1]: libpod-conmon-06f614075f000035a8ed386276e6dcccfcd9efd8be783965aafaed33ac2ae80c.scope: Deactivated successfully.
Nov 29 07:12:25 compute-0 sudo[74038]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:25 compute-0 ceph-mgr[75345]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 07:12:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
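TOO_FEW_OSDS is expected at this stage of bootstrap: no OSD has registered yet while osd_pool_default_size is 1, so the mgr's first (empty) PG report trips the check; it clears on its own once an OSD reports in. A polling sketch, assuming the ceph CLI is reachable and that 'ceph health detail --format json' returns its usual document with a top-level checks map:

    import json
    import subprocess
    import time

    def active_checks():
        # Current health checks keyed by name, e.g. "TOO_FEW_OSDS".
        out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out).get("checks", {})

    # Wait out the bootstrap window instead of treating the warning as fatal.
    while "TOO_FEW_OSDS" in active_checks():
        time.sleep(10)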
Nov 29 07:12:25 compute-0 podman[78594]: 2025-11-29 07:12:25.558398691 +0000 UTC m=+0.047751916 container create 5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:12:25 compute-0 systemd[1]: Started libpod-conmon-5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4.scope.
Nov 29 07:12:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b578018b21ad5c3598a98ff048fdc454d50694c810ceddfa5407fcd8c308af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b578018b21ad5c3598a98ff048fdc454d50694c810ceddfa5407fcd8c308af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b578018b21ad5c3598a98ff048fdc454d50694c810ceddfa5407fcd8c308af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44b578018b21ad5c3598a98ff048fdc454d50694c810ceddfa5407fcd8c308af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:25 compute-0 podman[78594]: 2025-11-29 07:12:25.532320352 +0000 UTC m=+0.021673607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:25 compute-0 podman[78594]: 2025-11-29 07:12:25.635630631 +0000 UTC m=+0.124983866 container init 5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_joliot, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:25 compute-0 podman[78594]: 2025-11-29 07:12:25.651455479 +0000 UTC m=+0.140808704 container start 5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:12:25 compute-0 podman[78594]: 2025-11-29 07:12:25.687343388 +0000 UTC m=+0.176696663 container attach 5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:25 compute-0 sudo[78638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azkbjwitshfaqocagonjxjfpkptzpkal ; /usr/bin/python3'
Nov 29 07:12:25 compute-0 sudo[78638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:25 compute-0 python3[78640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
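Note that the host never gets ceph-common installed: each client-side call is a one-shot 'podman run --rm' against quay.io/ceph/ceph:v18 with host networking and /etc/ceph bind-mounted, exactly as spelled out in the _raw_params above. A sketch of that wrapper under those assumptions (ceph_in_container() is our name; the assimilate_ceph.conf mount is dropped for brevity since this call does not read it):

    import subprocess

    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    IMAGE = "quay.io/ceph/ceph:v18"

    def ceph_in_container(*args):
        # One-shot ceph CLI call inside a throwaway container, mirroring
        # the podman invocation in the Ansible task above (sketch only).
        cmd = ["podman", "run", "--rm", "--net=host", "--ipc=host",
               "--volume", "/etc/ceph:/etc/ceph:z",
               "--entrypoint", "ceph", IMAGE,
               "--fsid", FSID,
               "-c", "/etc/ceph/ceph.conf",
               "-k", "/etc/ceph/ceph.client.admin.keyring", *args]
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    # The call the playbook just made: stop cephadm from rewriting image
    # tags into repo digests.
    ceph_in_container("config", "set", "mgr",
                      "mgr/cephadm/use_repo_digest", "false")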
Nov 29 07:12:26 compute-0 podman[78641]: 2025-11-29 07:12:26.000332446 +0000 UTC m=+0.054667764 container create 3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4 (image=quay.io/ceph/ceph:v18, name=relaxed_leakey, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:26 compute-0 systemd[1]: Started libpod-conmon-3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4.scope.
Nov 29 07:12:26 compute-0 podman[78641]: 2025-11-29 07:12:25.975427581 +0000 UTC m=+0.029762949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399fa488b7402d6fd7d44b4d38c925c741a9b2ffce2ac7b13f714e1c05d08fc3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399fa488b7402d6fd7d44b4d38c925c741a9b2ffce2ac7b13f714e1c05d08fc3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:26 compute-0 podman[78641]: 2025-11-29 07:12:26.096625023 +0000 UTC m=+0.150960391 container init 3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4 (image=quay.io/ceph/ceph:v18, name=relaxed_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:12:26 compute-0 podman[78641]: 2025-11-29 07:12:26.107273521 +0000 UTC m=+0.161608809 container start 3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4 (image=quay.io/ceph/ceph:v18, name=relaxed_leakey, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:26 compute-0 podman[78641]: 2025-11-29 07:12:26.111211363 +0000 UTC m=+0.165546761 container attach 3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4 (image=quay.io/ceph/ceph:v18, name=relaxed_leakey, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:12:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2677214572' entity='client.admin' 
Nov 29 07:12:26 compute-0 ceph-mon[75050]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:26 compute-0 ceph-mon[75050]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 07:12:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 07:12:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1078988009' entity='client.admin' 
Nov 29 07:12:26 compute-0 systemd[1]: libpod-3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4.scope: Deactivated successfully.
Nov 29 07:12:26 compute-0 conmon[78657]: conmon 3e8ecd3d518bfaeb3285 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4.scope/container/memory.events
Nov 29 07:12:26 compute-0 podman[79168]: 2025-11-29 07:12:26.855197492 +0000 UTC m=+0.045364597 container died 3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4 (image=quay.io/ceph/ceph:v18, name=relaxed_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-399fa488b7402d6fd7d44b4d38c925c741a9b2ffce2ac7b13f714e1c05d08fc3-merged.mount: Deactivated successfully.
Nov 29 07:12:26 compute-0 podman[79168]: 2025-11-29 07:12:26.903991575 +0000 UTC m=+0.094158640 container remove 3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4 (image=quay.io/ceph/ceph:v18, name=relaxed_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:12:26 compute-0 systemd[1]: libpod-conmon-3e8ecd3d518bfaeb3285985ce4e9be5d97013c1c7af29e0e94d57f6372f342c4.scope: Deactivated successfully.
Nov 29 07:12:26 compute-0 sudo[78638]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 elegant_joliot[78610]: [
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:     {
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "available": false,
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "ceph_device": false,
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "lsm_data": {},
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "lvs": [],
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "path": "/dev/sr0",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "rejected_reasons": [
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "Insufficient space (<5GB)",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "Has a FileSystem"
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         ],
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         "sys_api": {
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "actuators": null,
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "device_nodes": "sr0",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "devname": "sr0",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "human_readable_size": "482.00 KB",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "id_bus": "ata",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "model": "QEMU DVD-ROM",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "nr_requests": "2",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "parent": "/dev/sr0",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "partitions": {},
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "path": "/dev/sr0",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "removable": "1",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "rev": "2.5+",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "ro": "0",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "rotational": "1",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "sas_address": "",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "sas_device_handle": "",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "scheduler_mode": "mq-deadline",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "sectors": 0,
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "sectorsize": "2048",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "size": 493568.0,
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "support_discard": "2048",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "type": "disk",
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:             "vendor": "QEMU"
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:         }
Nov 29 07:12:27 compute-0 elegant_joliot[78610]:     }
Nov 29 07:12:27 compute-0 elegant_joliot[78610]: ]
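The JSON above is a ceph-volume inventory report: one record per block device, with available=false and machine-readable rejected_reasons saying why the device cannot back an OSD. On this VM the only candidate is the 482 KB QEMU DVD-ROM, rejected for being under 5 GB and for carrying a filesystem, which is consistent with the OSD count of 0 seen earlier. A consumer sketch using the field names shown above (inventory.json is a hypothetical capture of the container's stdout):

    import json

    # inventory.json: hypothetical file holding the JSON list printed by
    # the elegant_joliot container above.
    with open("inventory.json") as f:
        inventory = json.load(f)

    for dev in inventory:
        if dev["available"]:
            print("usable:", dev["path"])
        else:
            print("rejected:", dev["path"], "->",
                  "; ".join(dev["rejected_reasons"]))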
Nov 29 07:12:27 compute-0 systemd[1]: libpod-5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4.scope: Deactivated successfully.
Nov 29 07:12:27 compute-0 systemd[1]: libpod-5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4.scope: Consumed 1.516s CPU time.
Nov 29 07:12:27 compute-0 podman[78594]: 2025-11-29 07:12:27.135646233 +0000 UTC m=+1.624999448 container died 5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_joliot, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-44b578018b21ad5c3598a98ff048fdc454d50694c810ceddfa5407fcd8c308af-merged.mount: Deactivated successfully.
Nov 29 07:12:27 compute-0 podman[78594]: 2025-11-29 07:12:27.195355978 +0000 UTC m=+1.684709203 container remove 5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:12:27 compute-0 systemd[1]: libpod-conmon-5a8a4a6cb50ebca8e8a20cf30c66c54f8d4938985f710db99b0e5c2e72964fb4.scope: Deactivated successfully.
Nov 29 07:12:27 compute-0 sudo[78345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:12:27 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 07:12:27 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
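The pair of mon_commands just dispatched drives the file push that follows: 'config generate-minimal-conf' yields a stripped-down ceph.conf (essentially the fsid plus mon addresses) and 'auth get client.admin' returns the keyring, and cephadm then writes both out to the _admin host. Fetching the same two artifacts by hand, assuming a working admin CLI (ceph_out() is our helper):

    import subprocess

    def ceph_out(*args):
        # Return the stdout of one ceph CLI call.
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    # The same two artifacts cephadm is distributing here:
    minimal_conf = ceph_out("config", "generate-minimal-conf")
    admin_keyring = ceph_out("auth", "get", "client.admin")
    print(minimal_conf)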
Nov 29 07:12:27 compute-0 sudo[80481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:27 compute-0 sudo[80481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80481]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:27 compute-0 sudo[80532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 07:12:27 compute-0 sudo[80532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80532]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:27 compute-0 sudo[80579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:27 compute-0 sudo[80579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80579]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 sudo[80604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph
Nov 29 07:12:27 compute-0 sudo[80604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80604]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 sudo[80634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:27 compute-0 sudo[80634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80634]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 sudo[80680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.conf.new
Nov 29 07:12:27 compute-0 sudo[80680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80680]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1078988009' entity='client.admin' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:12:27 compute-0 ceph-mon[75050]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 07:12:27 compute-0 ceph-mon[75050]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:27 compute-0 sudo[80771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqoeeysmbvzhqyfsxgeoqqhcjfuxawt ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400347.2484472-36484-3451724658878/async_wrapper.py j500642440638 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400347.2484472-36484-3451724658878/AnsiballZ_command.py _'
Nov 29 07:12:27 compute-0 sudo[80734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:27 compute-0 sudo[80771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:27 compute-0 sudo[80734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80734]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 sudo[80779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:27 compute-0 sudo[80779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 sudo[80804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:27 compute-0 sudo[80804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:27 compute-0 sudo[80804]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:27 compute-0 ansible-async_wrapper.py[80777]: Invoked with j500642440638 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400347.2484472-36484-3451724658878/AnsiballZ_command.py _
Nov 29 07:12:27 compute-0 ansible-async_wrapper.py[80834]: Starting module and watcher
Nov 29 07:12:27 compute-0 ansible-async_wrapper.py[80834]: Start watching 80836 (30)
Nov 29 07:12:27 compute-0 ansible-async_wrapper.py[80836]: Start module (80836)
Nov 29 07:12:27 compute-0 ansible-async_wrapper.py[80777]: Return async_wrapper task started.
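This is Ansible's async machinery: the command module was launched with a 30-second budget, async_wrapper forked the module as pid 80836 plus a watcher for it, and returned to the controller immediately; the matching 'Module complete' line arrives at the end of this excerpt. The shape of the pattern as a simplified sketch under those assumptions, not Ansible's actual implementation (run_with_watchdog() is our name):

    import subprocess
    import threading

    def run_with_watchdog(argv, timeout_s=30):
        # Start the job, arm a watcher with a fixed budget, and kill the
        # job if it overruns -- a simplified stand-in for async_wrapper's
        # module/watcher pair.
        proc = subprocess.Popen(argv)
        watcher = threading.Timer(timeout_s, proc.kill)
        watcher.start()
        try:
            return proc.wait()
        finally:
            watcher.cancel()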
Nov 29 07:12:27 compute-0 sudo[80771]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 sudo[80829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.conf.new
Nov 29 07:12:28 compute-0 sudo[80829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[80829]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 python3[80839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:12:28 compute-0 sudo[80882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-0 sudo[80882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[80882]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 podman[80905]: 2025-11-29 07:12:28.166411365 +0000 UTC m=+0.038483089 container create e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb (image=quay.io/ceph/ceph:v18, name=adoring_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:12:28 compute-0 sudo[80913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.conf.new
Nov 29 07:12:28 compute-0 sudo[80913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[80913]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 systemd[1]: Started libpod-conmon-e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb.scope.
Nov 29 07:12:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6f056fee2a42cce53f5fd1c4c839412f25524ea5e31dfd3af7152753e053c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6f056fee2a42cce53f5fd1c4c839412f25524ea5e31dfd3af7152753e053c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:28 compute-0 podman[80905]: 2025-11-29 07:12:28.246175632 +0000 UTC m=+0.118247386 container init e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb (image=quay.io/ceph/ceph:v18, name=adoring_swirles, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:12:28 compute-0 podman[80905]: 2025-11-29 07:12:28.152010823 +0000 UTC m=+0.024082577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:28 compute-0 sudo[80949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-0 sudo[80949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 podman[80905]: 2025-11-29 07:12:28.253894357 +0000 UTC m=+0.125966081 container start e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb (image=quay.io/ceph/ceph:v18, name=adoring_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:28 compute-0 sudo[80949]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 podman[80905]: 2025-11-29 07:12:28.257590157 +0000 UTC m=+0.129661911 container attach e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb (image=quay.io/ceph/ceph:v18, name=adoring_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:28 compute-0 sudo[80977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.conf.new
Nov 29 07:12:28 compute-0 sudo[80977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[80977]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 sudo[81002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-0 sudo[81002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81002]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 sudo[81027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 29 07:12:28 compute-0 sudo[81027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81027]: pam_unix(sudo:session): session closed for user root
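The sudo trail above is an atomic config replacement: the new file is staged as ceph.conf.new in a per-fsid scratch directory, given its final mode and ownership, then moved over /etc/ceph/ceph.conf in a single rename so readers never see a partial config. The same pattern in Python (a sketch, not cephadm's code; replace_atomically() is our name):

    import os
    import tempfile

    def replace_atomically(path, data, mode=0o644):
        # Stage next to the destination (same filesystem), then rename
        # over it; rename is atomic on POSIX, like the /bin/mv step above.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".",
                                   suffix=".new")
        try:
            with os.fdopen(fd, "w") as f:
                f.write(data)
            os.chmod(tmp, mode)
            os.replace(tmp, path)
        except BaseException:
            os.unlink(tmp)
            raise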
Nov 29 07:12:28 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf
Nov 29 07:12:28 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf
Nov 29 07:12:28 compute-0 sudo[81052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-0 sudo[81052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81052]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 sudo[81078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config
Nov 29 07:12:28 compute-0 sudo[81078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81078]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 sudo[81121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-0 sudo[81121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 sudo[81146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config
Nov 29 07:12:28 compute-0 sudo[81146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 sudo[81171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-0 sudo[81171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81171]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:28 compute-0 adoring_swirles[80948]: 
Nov 29 07:12:28 compute-0 adoring_swirles[80948]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:12:28 compute-0 sudo[81196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf.new
Nov 29 07:12:28 compute-0 systemd[1]: libpod-e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb.scope: Deactivated successfully.
Nov 29 07:12:28 compute-0 conmon[80948]: conmon e3cac5c8be7432033b27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb.scope/container/memory.events
Nov 29 07:12:28 compute-0 podman[80905]: 2025-11-29 07:12:28.865253909 +0000 UTC m=+0.737325633 container died e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb (image=quay.io/ceph/ceph:v18, name=adoring_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:12:28 compute-0 sudo[81196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81196]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d6f056fee2a42cce53f5fd1c4c839412f25524ea5e31dfd3af7152753e053c-merged.mount: Deactivated successfully.
Nov 29 07:12:28 compute-0 podman[80905]: 2025-11-29 07:12:28.918058127 +0000 UTC m=+0.790129851 container remove e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb (image=quay.io/ceph/ceph:v18, name=adoring_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:28 compute-0 systemd[1]: libpod-conmon-e3cac5c8be7432033b27741f27efcc14be09b24d0cd2602f59fc78ae2aa093fb.scope: Deactivated successfully.
Nov 29 07:12:28 compute-0 ansible-async_wrapper.py[80836]: Module complete (80836)
Nov 29 07:12:28 compute-0 sudo[81224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:28 compute-0 sudo[81224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:28 compute-0 sudo[81224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:29 compute-0 sudo[81259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81259]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-0 sudo[81286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81286]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf.new
Nov 29 07:12:29 compute-0 sudo[81332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81332]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 ceph-mon[75050]: Updating compute-0:/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf
Nov 29 07:12:29 compute-0 sudo[81380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-0 sudo[81380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxajpzzhxhrvyuhzmeeaackssmkcfdk ; /usr/bin/python3'
Nov 29 07:12:29 compute-0 sudo[81443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:29 compute-0 sudo[81413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf.new
Nov 29 07:12:29 compute-0 sudo[81413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81413]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:29 compute-0 sudo[81456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-0 sudo[81456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81456]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 python3[81453]: ansible-ansible.legacy.async_status Invoked with jid=j500642440638.80777 mode=status _async_dir=/root/.ansible_async
Nov 29 07:12:29 compute-0 sudo[81481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf.new
Nov 29 07:12:29 compute-0 sudo[81481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81481]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81443]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-0 sudo[81506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81506]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf.new /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.conf
Nov 29 07:12:29 compute-0 sudo[81554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81554]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 07:12:29 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 07:12:29 compute-0 sudo[81602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycfbxwfusqgjrmwvnzqnpwmqwixraerj ; /usr/bin/python3'
Nov 29 07:12:29 compute-0 sudo[81602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:29 compute-0 sudo[81603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-0 sudo[81603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81603]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 07:12:29 compute-0 sudo[81630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 python3[81611]: ansible-ansible.legacy.async_status Invoked with jid=j500642440638.80777 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 07:12:29 compute-0 sudo[81630]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81602]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:29 compute-0 sudo[81655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81655]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:29 compute-0 sudo[81680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph
Nov 29 07:12:29 compute-0 sudo[81680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:29 compute-0 sudo[81680]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-0 sudo[81705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81705]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:12:30 compute-0 sudo[81730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-0 sudo[81755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81755]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftjzjyundhhuveaaofqhqyfdgyibnhxe ; /usr/bin/python3'
Nov 29 07:12:30 compute-0 sudo[81823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:30 compute-0 sudo[81784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:30 compute-0 sudo[81784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81784]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 ceph-mon[75050]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:30 compute-0 ceph-mon[75050]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:30 compute-0 sudo[81831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-0 sudo[81831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81831]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 python3[81828]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 07:12:30 compute-0 sudo[81856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:12:30 compute-0 sudo[81856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81856]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81823]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-0 sudo[81906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81906]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:12:30 compute-0 sudo[81931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81931]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-0 sudo[81956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81956]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[81981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.client.admin.keyring.new
Nov 29 07:12:30 compute-0 sudo[81981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[81981]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[82006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-0 sudo[82006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[82006]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 sudo[82054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjqcuiablwjgvywsdkuyxgkmvcqnrvqu ; /usr/bin/python3'
Nov 29 07:12:30 compute-0 sudo[82054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:30 compute-0 sudo[82055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 29 07:12:30 compute-0 sudo[82055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[82055]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring
Nov 29 07:12:30 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring
Nov 29 07:12:30 compute-0 sudo[82082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:30 compute-0 sudo[82082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[82082]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:30 compute-0 python3[82059]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:12:30 compute-0 podman[82113]: 2025-11-29 07:12:30.962349907 +0000 UTC m=+0.037225132 container create 3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885 (image=quay.io/ceph/ceph:v18, name=hardcore_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:30 compute-0 sudo[82107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config
Nov 29 07:12:30 compute-0 sudo[82107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:30 compute-0 sudo[82107]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 systemd[1]: Started libpod-conmon-3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885.scope.
Nov 29 07:12:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:31 compute-0 sudo[82147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:31 compute-0 podman[82113]: 2025-11-29 07:12:30.944896945 +0000 UTC m=+0.019772160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:31 compute-0 sudo[82147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95671e7a1ac03e324bcad37dee183eb78ad44ea9fd28e7ad507fdb8d96ac7a50/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95671e7a1ac03e324bcad37dee183eb78ad44ea9fd28e7ad507fdb8d96ac7a50/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95671e7a1ac03e324bcad37dee183eb78ad44ea9fd28e7ad507fdb8d96ac7a50/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:31 compute-0 sudo[82147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 podman[82113]: 2025-11-29 07:12:31.057666419 +0000 UTC m=+0.132541634 container init 3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885 (image=quay.io/ceph/ceph:v18, name=hardcore_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:12:31 compute-0 podman[82113]: 2025-11-29 07:12:31.064516644 +0000 UTC m=+0.139391839 container start 3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885 (image=quay.io/ceph/ceph:v18, name=hardcore_mccarthy, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:12:31 compute-0 podman[82113]: 2025-11-29 07:12:31.068726047 +0000 UTC m=+0.143601252 container attach 3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885 (image=quay.io/ceph/ceph:v18, name=hardcore_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:31 compute-0 sudo[82175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config
Nov 29 07:12:31 compute-0 sudo[82175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:31 compute-0 sudo[82201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82201]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring.new
Nov 29 07:12:31 compute-0 sudo[82226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82226]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 ceph-mon[75050]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 07:12:31 compute-0 sudo[82251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:31 compute-0 sudo[82251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:31 compute-0 sudo[82276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:31 compute-0 sudo[82276]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:31 compute-0 sudo[82320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82320]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring.new
Nov 29 07:12:31 compute-0 sudo[82345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82345]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:31 compute-0 hardcore_mccarthy[82164]: 
Nov 29 07:12:31 compute-0 hardcore_mccarthy[82164]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:12:31 compute-0 systemd[1]: libpod-3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885.scope: Deactivated successfully.
Nov 29 07:12:31 compute-0 podman[82113]: 2025-11-29 07:12:31.624106417 +0000 UTC m=+0.698981612 container died 3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885 (image=quay.io/ceph/ceph:v18, name=hardcore_mccarthy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-95671e7a1ac03e324bcad37dee183eb78ad44ea9fd28e7ad507fdb8d96ac7a50-merged.mount: Deactivated successfully.
Nov 29 07:12:31 compute-0 podman[82113]: 2025-11-29 07:12:31.669956185 +0000 UTC m=+0.744831400 container remove 3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885 (image=quay.io/ceph/ceph:v18, name=hardcore_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:31 compute-0 sudo[82395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:31 compute-0 sudo[82395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 systemd[1]: libpod-conmon-3272e94de6e62e1aa76541a6450c9b0523d5446a67d900937ba9555a65c41885.scope: Deactivated successfully.
Nov 29 07:12:31 compute-0 sudo[82395]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82054]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring.new
Nov 29 07:12:31 compute-0 sudo[82432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:31 compute-0 sudo[82457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82457]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring.new
Nov 29 07:12:31 compute-0 sudo[82482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82482]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:31 compute-0 sudo[82507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:31 compute-0 sudo[82507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:31 compute-0 sudo[82507]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:32 compute-0 sudo[82556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpkepzvjwtpzkcqpnkgntpkihcgiiqz ; /usr/bin/python3'
Nov 29 07:12:32 compute-0 sudo[82556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:32 compute-0 sudo[82555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-14ff1f30-5059-58f1-9a23-69871bb275a1/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring.new /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring
Nov 29 07:12:32 compute-0 sudo[82555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:32 compute-0 sudo[82555]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:32 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 82576349-cd01-47fb-a889-41ce375d24fd (Updating crash deployment (+1 -> 1))
Nov 29 07:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 07:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 07:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 07:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:32 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 07:12:32 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 29 07:12:32 compute-0 python3[82577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:12:32 compute-0 sudo[82583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:32 compute-0 sudo[82583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:32 compute-0 sudo[82583]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:32 compute-0 podman[82591]: 2025-11-29 07:12:32.215941452 +0000 UTC m=+0.038356104 container create 11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564 (image=quay.io/ceph/ceph:v18, name=compassionate_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:32 compute-0 systemd[1]: Started libpod-conmon-11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564.scope.
Nov 29 07:12:32 compute-0 sudo[82621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:32 compute-0 sudo[82621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:32 compute-0 sudo[82621]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f615fc1c9624992345717ca8fc4e574575bb3aaa3ee5dc91b4a7b055294f20e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f615fc1c9624992345717ca8fc4e574575bb3aaa3ee5dc91b4a7b055294f20e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f615fc1c9624992345717ca8fc4e574575bb3aaa3ee5dc91b4a7b055294f20e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:32 compute-0 ceph-mon[75050]: Updating compute-0:/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/config/ceph.client.admin.keyring
Nov 29 07:12:32 compute-0 ceph-mon[75050]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 07:12:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 07:12:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:32 compute-0 podman[82591]: 2025-11-29 07:12:32.198195467 +0000 UTC m=+0.020610139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:32 compute-0 podman[82591]: 2025-11-29 07:12:32.295787472 +0000 UTC m=+0.118202154 container init 11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564 (image=quay.io/ceph/ceph:v18, name=compassionate_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:32 compute-0 podman[82591]: 2025-11-29 07:12:32.303917166 +0000 UTC m=+0.126331818 container start 11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564 (image=quay.io/ceph/ceph:v18, name=compassionate_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:12:32 compute-0 podman[82591]: 2025-11-29 07:12:32.308509418 +0000 UTC m=+0.130924070 container attach 11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564 (image=quay.io/ceph/ceph:v18, name=compassionate_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:12:32 compute-0 sudo[82651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:32 compute-0 sudo[82651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:32 compute-0 sudo[82651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:32 compute-0 sudo[82677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:32 compute-0 sudo[82677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:32 compute-0 podman[82743]: 2025-11-29 07:12:32.679840276 +0000 UTC m=+0.044581569 container create a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:32 compute-0 systemd[1]: Started libpod-conmon-a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5.scope.
Nov 29 07:12:32 compute-0 podman[82743]: 2025-11-29 07:12:32.654952533 +0000 UTC m=+0.019693776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:32 compute-0 podman[82743]: 2025-11-29 07:12:32.775527966 +0000 UTC m=+0.140269219 container init a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:12:32 compute-0 podman[82743]: 2025-11-29 07:12:32.784641234 +0000 UTC m=+0.149382517 container start a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:32 compute-0 jolly_morse[82774]: 167 167
Nov 29 07:12:32 compute-0 podman[82743]: 2025-11-29 07:12:32.789073507 +0000 UTC m=+0.153814830 container attach a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_morse, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:12:32 compute-0 systemd[1]: libpod-a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5.scope: Deactivated successfully.
Nov 29 07:12:32 compute-0 podman[82743]: 2025-11-29 07:12:32.79192592 +0000 UTC m=+0.156667203 container died a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b53c46d7d924baf037441cfa0c67f39d68396a66e98423357d44aa27bc3d5d67-merged.mount: Deactivated successfully.
Nov 29 07:12:32 compute-0 podman[82743]: 2025-11-29 07:12:32.84198849 +0000 UTC m=+0.206729773 container remove a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 07:12:32 compute-0 systemd[1]: libpod-conmon-a9fd6419987c53c8aa1092346f55aee6dd5b75e73a79f76c30b499f35c7ab6b5.scope: Deactivated successfully.
Nov 29 07:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3083565396' entity='client.admin' 
Nov 29 07:12:32 compute-0 systemd[1]: libpod-11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564.scope: Deactivated successfully.
Nov 29 07:12:32 compute-0 podman[82591]: 2025-11-29 07:12:32.876172642 +0000 UTC m=+0.698587294 container died 11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564 (image=quay.io/ceph/ceph:v18, name=compassionate_wescoff, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:32 compute-0 systemd[1]: Reloading.
Nov 29 07:12:32 compute-0 ansible-async_wrapper.py[80834]: Done in kid B.
Nov 29 07:12:32 compute-0 systemd-rc-local-generator[82831]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:32 compute-0 systemd-sysv-generator[82834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f615fc1c9624992345717ca8fc4e574575bb3aaa3ee5dc91b4a7b055294f20e-merged.mount: Deactivated successfully.
Nov 29 07:12:33 compute-0 podman[82591]: 2025-11-29 07:12:33.209917273 +0000 UTC m=+1.032331925 container remove 11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564 (image=quay.io/ceph/ceph:v18, name=compassionate_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:12:33 compute-0 systemd[1]: libpod-conmon-11ed352a51f9049ef94bca2ff4918fc02c5c9bdb1b03adc2441b44159b94e564.scope: Deactivated successfully.
Nov 29 07:12:33 compute-0 sudo[82556]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:33 compute-0 systemd[1]: Reloading.
Nov 29 07:12:33 compute-0 systemd-rc-local-generator[82877]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:33 compute-0 systemd-sysv-generator[82881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:33 compute-0 ceph-mon[75050]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:12:33 compute-0 ceph-mon[75050]: Deploying daemon crash.compute-0 on compute-0
Nov 29 07:12:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3083565396' entity='client.admin' 
Nov 29 07:12:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:33 compute-0 sudo[82907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgewbbqusuucqirzyqucskunakaabgle ; /usr/bin/python3'
Nov 29 07:12:33 compute-0 sudo[82907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:33 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:12:33 compute-0 python3[82911]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:12:33 compute-0 podman[82936]: 2025-11-29 07:12:33.674986551 +0000 UTC m=+0.054693625 container create 3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade (image=quay.io/ceph/ceph:v18, name=silly_vaughan, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:33 compute-0 systemd[1]: Started libpod-conmon-3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade.scope.
Nov 29 07:12:33 compute-0 podman[82936]: 2025-11-29 07:12:33.65493513 +0000 UTC m=+0.034642234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17a28d56c379c7028d04e82060bbf47c168119f190c162d977c2215a62f2c7a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:33 compute-0 podman[82971]: 2025-11-29 07:12:33.756577162 +0000 UTC m=+0.044017145 container create 7eb39cf0035c22ff4e83a7371bb415c7d467398eea843a964591e85500be2230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17a28d56c379c7028d04e82060bbf47c168119f190c162d977c2215a62f2c7a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17a28d56c379c7028d04e82060bbf47c168119f190c162d977c2215a62f2c7a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:33 compute-0 podman[82936]: 2025-11-29 07:12:33.77633186 +0000 UTC m=+0.156038954 container init 3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade (image=quay.io/ceph/ceph:v18, name=silly_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:12:33 compute-0 podman[82936]: 2025-11-29 07:12:33.786285737 +0000 UTC m=+0.165992821 container start 3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade (image=quay.io/ceph/ceph:v18, name=silly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:12:33 compute-0 podman[82936]: 2025-11-29 07:12:33.790057011 +0000 UTC m=+0.169764115 container attach 3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade (image=quay.io/ceph/ceph:v18, name=silly_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686a88efca72eda8bc510d146a9fe8f82f95e9053f31a9a11942b114c9bc8a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686a88efca72eda8bc510d146a9fe8f82f95e9053f31a9a11942b114c9bc8a4/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686a88efca72eda8bc510d146a9fe8f82f95e9053f31a9a11942b114c9bc8a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a686a88efca72eda8bc510d146a9fe8f82f95e9053f31a9a11942b114c9bc8a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:33 compute-0 podman[82971]: 2025-11-29 07:12:33.811223483 +0000 UTC m=+0.098663486 container init 7eb39cf0035c22ff4e83a7371bb415c7d467398eea843a964591e85500be2230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:33 compute-0 podman[82971]: 2025-11-29 07:12:33.817736243 +0000 UTC m=+0.105176226 container start 7eb39cf0035c22ff4e83a7371bb415c7d467398eea843a964591e85500be2230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:12:33 compute-0 bash[82971]: 7eb39cf0035c22ff4e83a7371bb415c7d467398eea843a964591e85500be2230
Nov 29 07:12:33 compute-0 podman[82971]: 2025-11-29 07:12:33.739019445 +0000 UTC m=+0.026459458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:33 compute-0 systemd[1]: Started Ceph crash.compute-0 for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:12:33 compute-0 sudo[82677]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:33 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 82576349-cd01-47fb-a889-41ce375d24fd (Updating crash deployment (+1 -> 1))
Nov 29 07:12:33 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 82576349-cd01-47fb-a889-41ce375d24fd (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:33 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 674194e0-4a62-4837-bc76-b5eeb2fa6bc5 does not exist
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:33 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 9c5e1712-54db-4053-81b6-0115c5caf458 (Updating mgr deployment (+1 -> 2))
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.cguuye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cguuye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cguuye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:12:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:33 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.cguuye on compute-0
Nov 29 07:12:33 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.cguuye on compute-0
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 07:12:34 compute-0 sudo[82996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:34 compute-0 sudo[82996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:34 compute-0 sudo[82996]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:34 compute-0 sudo[83023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:34 compute-0 sudo[83023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:34 compute-0 sudo[83023]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:34 compute-0 sudo[83067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:34 compute-0 sudo[83067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:34 compute-0 sudo[83067]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:34 compute-0 sudo[83092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:34 compute-0 sudo[83092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: 2025-11-29T07:12:34.232+0000 7f87a1027640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: 2025-11-29T07:12:34.232+0000 7f87a1027640 -1 AuthRegistry(0x7f879c067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: 2025-11-29T07:12:34.234+0000 7f87a1027640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: 2025-11-29T07:12:34.234+0000 7f87a1027640 -1 AuthRegistry(0x7f87a1026000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: 2025-11-29T07:12:34.234+0000 7f879ad76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: 2025-11-29T07:12:34.235+0000 7f87a1027640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 07:12:34 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-crash-compute-0[82991]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 07:12:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 07:12:34 compute-0 ceph-mon[75050]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cguuye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cguuye", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:12:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:34 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4219071592' entity='client.admin' 
Nov 29 07:12:34 compute-0 systemd[1]: libpod-3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade.scope: Deactivated successfully.
Nov 29 07:12:34 compute-0 podman[82936]: 2025-11-29 07:12:34.341082009 +0000 UTC m=+0.720789143 container died 3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade (image=quay.io/ceph/ceph:v18, name=silly_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c17a28d56c379c7028d04e82060bbf47c168119f190c162d977c2215a62f2c7a-merged.mount: Deactivated successfully.
Nov 29 07:12:34 compute-0 podman[82936]: 2025-11-29 07:12:34.400265671 +0000 UTC m=+0.779972765 container remove 3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade (image=quay.io/ceph/ceph:v18, name=silly_vaughan, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:34 compute-0 systemd[1]: libpod-conmon-3abe709a7d1c13767750371d89369ce2ada847076c51682edd170805971e5ade.scope: Deactivated successfully.
Nov 29 07:12:34 compute-0 sudo[82907]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:34 compute-0 podman[83180]: 2025-11-29 07:12:34.566109994 +0000 UTC m=+0.042643011 container create a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:34 compute-0 systemd[1]: Started libpod-conmon-a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3.scope.
Nov 29 07:12:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:34 compute-0 podman[83180]: 2025-11-29 07:12:34.631494189 +0000 UTC m=+0.108027236 container init a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:12:34 compute-0 podman[83180]: 2025-11-29 07:12:34.638118354 +0000 UTC m=+0.114651351 container start a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:34 compute-0 podman[83180]: 2025-11-29 07:12:34.641132173 +0000 UTC m=+0.117665220 container attach a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:34 compute-0 funny_feynman[83200]: 167 167
Nov 29 07:12:34 compute-0 systemd[1]: libpod-a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3.scope: Deactivated successfully.
Nov 29 07:12:34 compute-0 podman[83180]: 2025-11-29 07:12:34.545164801 +0000 UTC m=+0.021697828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:34 compute-0 podman[83180]: 2025-11-29 07:12:34.643583535 +0000 UTC m=+0.120116532 container died a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:34 compute-0 sudo[83222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgvdjjkjdocaebqadtdioddcejxcburt ; /usr/bin/python3'
Nov 29 07:12:34 compute-0 sudo[83222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d94891317c24729a23884bdfd1c4cff5d729095eef67d6c15679c82c27397e0-merged.mount: Deactivated successfully.
Nov 29 07:12:34 compute-0 podman[83180]: 2025-11-29 07:12:34.692054974 +0000 UTC m=+0.168587971 container remove a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:34 compute-0 systemd[1]: libpod-conmon-a070796a5ae145b8d1032b1e8c6e6806e342e43446d678a3154a7236b6e27bd3.scope: Deactivated successfully.
Nov 29 07:12:34 compute-0 systemd[1]: Reloading.
Nov 29 07:12:34 compute-0 python3[83227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:12:34 compute-0 systemd-rc-local-generator[83279]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:34 compute-0 systemd-sysv-generator[83283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:34 compute-0 podman[83243]: 2025-11-29 07:12:34.84775573 +0000 UTC m=+0.051643064 container create ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99 (image=quay.io/ceph/ceph:v18, name=romantic_jackson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:12:34 compute-0 podman[83243]: 2025-11-29 07:12:34.83035169 +0000 UTC m=+0.034239054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:35 compute-0 systemd[1]: Started libpod-conmon-ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99.scope.
Nov 29 07:12:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a796d908ecb7e30ee0f38b42ef484c60401db4035eab8a38f66487ca03add655/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a796d908ecb7e30ee0f38b42ef484c60401db4035eab8a38f66487ca03add655/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a796d908ecb7e30ee0f38b42ef484c60401db4035eab8a38f66487ca03add655/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:35 compute-0 systemd[1]: Reloading.
Nov 29 07:12:35 compute-0 podman[83243]: 2025-11-29 07:12:35.074144337 +0000 UTC m=+0.278031711 container init ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99 (image=quay.io/ceph/ceph:v18, name=romantic_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:35 compute-0 podman[83243]: 2025-11-29 07:12:35.081704625 +0000 UTC m=+0.285591999 container start ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99 (image=quay.io/ceph/ceph:v18, name=romantic_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:35 compute-0 podman[83243]: 2025-11-29 07:12:35.085932378 +0000 UTC m=+0.289819772 container attach ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99 (image=quay.io/ceph/ceph:v18, name=romantic_jackson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:35 compute-0 systemd-sysv-generator[83329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:35 compute-0 systemd-rc-local-generator[83326]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:35 compute-0 ceph-mon[75050]: Deploying daemon mgr.compute-0.cguuye on compute-0
Nov 29 07:12:35 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4219071592' entity='client.admin' 
Nov 29 07:12:35 compute-0 systemd[1]: Starting Ceph mgr.compute-0.cguuye for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [progress INFO root] Writing back 1 completed events
Nov 29 07:12:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:12:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:12:35 compute-0 podman[83405]: 2025-11-29 07:12:35.591671837 +0000 UTC m=+0.045934222 container create 464ffb02bbd1bf5140b7c0dc0b539c4b0f50360ecaeb0539b4b8d6eeb7e73927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f3be090a11a5d3efc06d9fb12e9b37ef9476585bdc0f77dac58fe4d47181b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f3be090a11a5d3efc06d9fb12e9b37ef9476585bdc0f77dac58fe4d47181b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f3be090a11a5d3efc06d9fb12e9b37ef9476585bdc0f77dac58fe4d47181b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f3be090a11a5d3efc06d9fb12e9b37ef9476585bdc0f77dac58fe4d47181b0/merged/var/lib/ceph/mgr/ceph-compute-0.cguuye supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 07:12:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1206616388' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 07:12:35 compute-0 podman[83405]: 2025-11-29 07:12:35.571461718 +0000 UTC m=+0.025724103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:35 compute-0 podman[83405]: 2025-11-29 07:12:35.669317946 +0000 UTC m=+0.123580361 container init 464ffb02bbd1bf5140b7c0dc0b539c4b0f50360ecaeb0539b4b8d6eeb7e73927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:35 compute-0 podman[83405]: 2025-11-29 07:12:35.674004351 +0000 UTC m=+0.128266736 container start 464ffb02bbd1bf5140b7c0dc0b539c4b0f50360ecaeb0539b4b8d6eeb7e73927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:35 compute-0 bash[83405]: 464ffb02bbd1bf5140b7c0dc0b539c4b0f50360ecaeb0539b4b8d6eeb7e73927
Nov 29 07:12:35 compute-0 systemd[1]: Started Ceph mgr.compute-0.cguuye for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:12:35 compute-0 ceph-mgr[83426]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:12:35 compute-0 ceph-mgr[83426]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 07:12:35 compute-0 ceph-mgr[83426]: pidfile_write: ignore empty --pid-file
Nov 29 07:12:35 compute-0 sudo[83092]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:12:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 9c5e1712-54db-4053-81b6-0115c5caf458 (Updating mgr deployment (+1 -> 2))
Nov 29 07:12:35 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 9c5e1712-54db-4053-81b6-0115c5caf458 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 29 07:12:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:12:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:35 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'alerts'
Nov 29 07:12:35 compute-0 sudo[83451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:35 compute-0 sudo[83451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:35 compute-0 sudo[83451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:35 compute-0 sudo[83476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:12:35 compute-0 sudo[83476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:35 compute-0 sudo[83476]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:36 compute-0 sudo[83501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:36 compute-0 sudo[83501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:36 compute-0 sudo[83501]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:36 compute-0 sudo[83526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:36 compute-0 sudo[83526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:36 compute-0 sudo[83526]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:36 compute-0 ceph-mgr[83426]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:12:36 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'balancer'
Nov 29 07:12:36 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]: 2025-11-29T07:12:36.135+0000 7f453d6c8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 07:12:36 compute-0 sudo[83551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:36 compute-0 sudo[83551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:36 compute-0 sudo[83551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:36 compute-0 sudo[83576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:12:36 compute-0 sudo[83576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:36 compute-0 ceph-mgr[83426]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:12:36 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'cephadm'
Nov 29 07:12:36 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]: 2025-11-29T07:12:36.390+0000 7f453d6c8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 07:12:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 07:12:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:12:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1206616388' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 07:12:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 07:12:36 compute-0 romantic_jackson[83292]: set require_min_compat_client to mimic
Nov 29 07:12:36 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 29 07:12:36 compute-0 ceph-mon[75050]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1206616388' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 07:12:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:36 compute-0 systemd[1]: libpod-ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99.scope: Deactivated successfully.
Nov 29 07:12:36 compute-0 podman[83627]: 2025-11-29 07:12:36.574485904 +0000 UTC m=+0.044399982 container died ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99 (image=quay.io/ceph/ceph:v18, name=romantic_jackson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a796d908ecb7e30ee0f38b42ef484c60401db4035eab8a38f66487ca03add655-merged.mount: Deactivated successfully.
Nov 29 07:12:36 compute-0 podman[83636]: 2025-11-29 07:12:36.637681689 +0000 UTC m=+0.079820890 container remove ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99 (image=quay.io/ceph/ceph:v18, name=romantic_jackson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:36 compute-0 systemd[1]: libpod-conmon-ea67f80fd2039d5fc34d1c1cde3c5d059e35ea596b5a18c567ad75b76201cd99.scope: Deactivated successfully.
Nov 29 07:12:36 compute-0 sudo[83222]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:36 compute-0 podman[83686]: 2025-11-29 07:12:36.805150897 +0000 UTC m=+0.074109747 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:36 compute-0 podman[83686]: 2025-11-29 07:12:36.899511085 +0000 UTC m=+0.168469975 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:37 compute-0 sudo[83775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlqpcubvcvbgmsqibxidaayojjuxinyh ; /usr/bin/python3'
Nov 29 07:12:37 compute-0 sudo[83775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:37 compute-0 sudo[83576]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9335b702-91af-49f8-9f26-7e91f6e59b3e does not exist
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev aa648833-8471-4ee3-acd3-92282a9b670f does not exist
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1dc79c9d-791c-4b70-b806-7f0eea1bcc68 does not exist
Nov 29 07:12:37 compute-0 python3[83780]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:12:37 compute-0 sudo[83799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:37 compute-0 sudo[83799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-0 sudo[83799]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-0 podman[83803]: 2025-11-29 07:12:37.337169483 +0000 UTC m=+0.047485864 container create a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e (image=quay.io/ceph/ceph:v18, name=frosty_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:37 compute-0 systemd[1]: Started libpod-conmon-a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e.scope.
Nov 29 07:12:37 compute-0 sudo[83837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:12:37 compute-0 sudo[83837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-0 sudo[83837]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 07:12:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:37 compute-0 podman[83803]: 2025-11-29 07:12:37.320165861 +0000 UTC m=+0.030482272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d955f3b178044d6f8268443911c1bbd00d1fb5e9b309d40fd414fec82fa8f0be/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d955f3b178044d6f8268443911c1bbd00d1fb5e9b309d40fd414fec82fa8f0be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d955f3b178044d6f8268443911c1bbd00d1fb5e9b309d40fd414fec82fa8f0be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 07:12:37 compute-0 podman[83803]: 2025-11-29 07:12:37.432157759 +0000 UTC m=+0.142474160 container init a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e (image=quay.io/ceph/ceph:v18, name=frosty_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:12:37 compute-0 podman[83803]: 2025-11-29 07:12:37.443805325 +0000 UTC m=+0.154121746 container start a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e (image=quay.io/ceph/ceph:v18, name=frosty_haslett, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:12:37 compute-0 podman[83803]: 2025-11-29 07:12:37.448778774 +0000 UTC m=+0.159095175 container attach a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e (image=quay.io/ceph/ceph:v18, name=frosty_haslett, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:37 compute-0 sudo[83868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:37 compute-0 sudo[83868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-0 sudo[83868]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1206616388' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 07:12:37 compute-0 ceph-mon[75050]: osdmap e3: 0 total, 0 up, 0 in
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 07:12:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:37 compute-0 sudo[83893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:37 compute-0 sudo[83893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-0 sudo[83893]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-0 sudo[83918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:37 compute-0 sudo[83918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-0 sudo[83918]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:37 compute-0 sudo[83943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:37 compute-0 sudo[83943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:37 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
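The sudo entries above show cephadm's standard remote-execution pattern: a no-op /bin/true to confirm passwordless sudo works, /bin/which python3 to locate an interpreter, then the host-local cephadm binary invoked with the actual operation (_orch deploy here). A sketch of the same sequence run by hand as ceph-admin on compute-0; the fsid and binary name are copied verbatim from the log, and the read-only "ls" subcommand (which appears later in this capture) stands in for the deploy so the check is harmless:

    sudo /bin/true            # connectivity and passwordless-sudo sanity check
    sudo /bin/which python3   # cephadm requires a python3 interpreter on the host
    sudo /bin/python3 \
        /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --timeout 895 ls      # list the daemons cephadm manages on this host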
Nov 29 07:12:38 compute-0 podman[84014]: 2025-11-29 07:12:38.000221442 +0000 UTC m=+0.053997324 container create 9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:12:38 compute-0 systemd[1]: Started libpod-conmon-9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047.scope.
Nov 29 07:12:38 compute-0 podman[84014]: 2025-11-29 07:12:37.973831178 +0000 UTC m=+0.027607040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:38 compute-0 sudo[84029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:38 compute-0 sudo[84029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 podman[84014]: 2025-11-29 07:12:38.094239473 +0000 UTC m=+0.148015385 container init 9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:38 compute-0 sudo[84029]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 podman[84014]: 2025-11-29 07:12:38.106214083 +0000 UTC m=+0.159989925 container start 9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:38 compute-0 cranky_hodgkin[84045]: 167 167
Nov 29 07:12:38 compute-0 systemd[1]: libpod-9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047.scope: Deactivated successfully.
Nov 29 07:12:38 compute-0 podman[84014]: 2025-11-29 07:12:38.111431883 +0000 UTC m=+0.165207805 container attach 9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:38 compute-0 podman[84014]: 2025-11-29 07:12:38.112119716 +0000 UTC m=+0.165895558 container died 9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c696a1dd7e085660595548c8d5a95240c99dc1924ec04e0b5b44b09d27bfcc-merged.mount: Deactivated successfully.
Nov 29 07:12:38 compute-0 podman[84014]: 2025-11-29 07:12:38.15594293 +0000 UTC m=+0.209718742 container remove 9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:12:38 compute-0 systemd[1]: libpod-conmon-9770d86922b876d6eae15b0cfd056645a46515117ed24378133274d620ee0047.scope: Deactivated successfully.
Nov 29 07:12:38 compute-0 sudo[84062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:38 compute-0 sudo[84062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 sudo[84062]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 sudo[83943]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.kzdpag (unknown last config time)...
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.kzdpag (unknown last config time)...
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.kzdpag", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kzdpag", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.kzdpag on compute-0
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.kzdpag on compute-0
Nov 29 07:12:38 compute-0 sudo[84101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:38 compute-0 sudo[84101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 sudo[84101]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 sudo[84122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:38 compute-0 sudo[84122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 sudo[84122]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 sudo[84147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 07:12:38 compute-0 sudo[84147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 sudo[84174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:38 compute-0 sudo[84174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 sudo[84174]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'crash'
Nov 29 07:12:38 compute-0 sudo[84201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:38 compute-0 sudo[84201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 sudo[84201]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 sudo[84228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:38 compute-0 sudo[84228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:38 compute-0 ceph-mon[75050]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:38 compute-0 ceph-mon[75050]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 07:12:38 compute-0 ceph-mon[75050]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 07:12:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kzdpag", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 07:12:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 07:12:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:38 compute-0 sudo[84147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: [cephadm INFO root] Added host compute-0
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 29 07:12:38 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 29 07:12:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 07:12:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:38 compute-0 frosty_haslett[83862]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 07:12:38 compute-0 frosty_haslett[83862]: Scheduled mon update...
Nov 29 07:12:38 compute-0 frosty_haslett[83862]: Scheduled mgr update...
Nov 29 07:12:38 compute-0 frosty_haslett[83862]: Scheduled osd.default_drive_group update...
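The four frosty_haslett lines above are the output of the containerized "ceph orch apply" run against the ceph_spec.yaml that was bind-mounted into the container at /home/ceph_spec.yaml. The spec file itself is not captured in this log; the sketch below is a hypothetical reconstruction based only on what cephadm echoes back (host compute-0 at 192.168.122.100, a mon service, a mgr service, and an OSD service named default_drive_group, all placed on compute-0). The drive selection is a placeholder, since the real data_devices stanza does not appear in the log:

    cat > ceph_spec.yaml <<'EOF'
    service_type: host
    hostname: compute-0
    addr: 192.168.122.100
    ---
    service_type: mon
    placement:
      hosts:
        - compute-0
    ---
    service_type: mgr
    placement:
      hosts:
        - compute-0
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    data_devices:    # placeholder; the actual device filter is not in this log
      all: true
    EOF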
Nov 29 07:12:38 compute-0 systemd[1]: libpod-a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e.scope: Deactivated successfully.
Nov 29 07:12:38 compute-0 podman[83803]: 2025-11-29 07:12:38.678426347 +0000 UTC m=+1.388742748 container died a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e (image=quay.io/ceph/ceph:v18, name=frosty_haslett, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:12:38 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]: 2025-11-29T07:12:38.705+0000 7f453d6c8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:12:38 compute-0 ceph-mgr[83426]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 07:12:38 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'dashboard'
Nov 29 07:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d955f3b178044d6f8268443911c1bbd00d1fb5e9b309d40fd414fec82fa8f0be-merged.mount: Deactivated successfully.
Nov 29 07:12:38 compute-0 podman[83803]: 2025-11-29 07:12:38.734472664 +0000 UTC m=+1.444789045 container remove a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e (image=quay.io/ceph/ceph:v18, name=frosty_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:12:38 compute-0 systemd[1]: libpod-conmon-a96e4571ac372cea2b351f5622e79461d950a2fbcf53602c1f50c126196f502e.scope: Deactivated successfully.
Nov 29 07:12:38 compute-0 sudo[83775]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:38 compute-0 podman[84298]: 2025-11-29 07:12:38.818354129 +0000 UTC m=+0.054515977 container create 10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:12:38 compute-0 systemd[1]: Started libpod-conmon-10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51.scope.
Nov 29 07:12:38 compute-0 podman[84298]: 2025-11-29 07:12:38.801594629 +0000 UTC m=+0.037756467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:38 compute-0 podman[84298]: 2025-11-29 07:12:38.920577938 +0000 UTC m=+0.156739796 container init 10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:38 compute-0 podman[84298]: 2025-11-29 07:12:38.93104786 +0000 UTC m=+0.167209708 container start 10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:12:38 compute-0 suspicious_pasteur[84315]: 167 167
Nov 29 07:12:38 compute-0 podman[84298]: 2025-11-29 07:12:38.935761456 +0000 UTC m=+0.171923344 container attach 10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:12:38 compute-0 systemd[1]: libpod-10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51.scope: Deactivated successfully.
Nov 29 07:12:38 compute-0 conmon[84315]: conmon 10390c4b1551550c1a5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51.scope/container/memory.events
Nov 29 07:12:38 compute-0 podman[84298]: 2025-11-29 07:12:38.939727448 +0000 UTC m=+0.175889336 container died 10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c091d04cf56791f5077050824c7aa583dfbeddba3bd55a12d08a91818f9725a1-merged.mount: Deactivated successfully.
Nov 29 07:12:38 compute-0 podman[84298]: 2025-11-29 07:12:38.99633436 +0000 UTC m=+0.232496218 container remove 10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:39 compute-0 systemd[1]: libpod-conmon-10390c4b1551550c1a5c2b21cecf2a49abb0793bb404e223506bd802f0338b51.scope: Deactivated successfully.
Nov 29 07:12:39 compute-0 sudo[84228]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:39 compute-0 sudo[84355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqzlaqlgezyriuyrqlxxamodfpvscsyd ; /usr/bin/python3'
Nov 29 07:12:39 compute-0 sudo[84355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:12:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 sudo[84358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:39 compute-0 sudo[84358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:39 compute-0 sudo[84358]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:39 compute-0 python3[84357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
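Reflowed for readability, the command the Ansible task above invokes (flags and arguments identical; the runs of spaces in the journal line are where the playbook's line continuations were collapsed):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        status --format json | jq .osdmap.num_up_osds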
Nov 29 07:12:39 compute-0 sudo[84383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:39 compute-0 sudo[84383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:39 compute-0 sudo[84383]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:39 compute-0 podman[84407]: 2025-11-29 07:12:39.275480112 +0000 UTC m=+0.061403103 container create 14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911 (image=quay.io/ceph/ceph:v18, name=fervent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:39 compute-0 systemd[1]: Started libpod-conmon-14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911.scope.
Nov 29 07:12:39 compute-0 sudo[84422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:39 compute-0 sudo[84422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:39 compute-0 sudo[84422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:39 compute-0 podman[84407]: 2025-11-29 07:12:39.243269781 +0000 UTC m=+0.029192812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:12:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70be5d77d8aa1000a7d824829bfe3c0aa412c037e808d1410b2c21261e2230a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70be5d77d8aa1000a7d824829bfe3c0aa412c037e808d1410b2c21261e2230a6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70be5d77d8aa1000a7d824829bfe3c0aa412c037e808d1410b2c21261e2230a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:39 compute-0 podman[84407]: 2025-11-29 07:12:39.364195761 +0000 UTC m=+0.150118782 container init 14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911 (image=quay.io/ceph/ceph:v18, name=fervent_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:12:39 compute-0 podman[84407]: 2025-11-29 07:12:39.376909425 +0000 UTC m=+0.162832456 container start 14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911 (image=quay.io/ceph/ceph:v18, name=fervent_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:12:39 compute-0 podman[84407]: 2025-11-29 07:12:39.381675104 +0000 UTC m=+0.167598135 container attach 14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911 (image=quay.io/ceph/ceph:v18, name=fervent_tu, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:12:39 compute-0 sudo[84452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:12:39 compute-0 sudo[84452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:12:39 compute-0 ceph-mon[75050]: Reconfiguring mgr.compute-0.kzdpag (unknown last config time)...
Nov 29 07:12:39 compute-0 ceph-mon[75050]: Reconfiguring daemon mgr.compute-0.kzdpag on compute-0
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:39 compute-0 podman[84570]: 2025-11-29 07:12:39.953247188 +0000 UTC m=+0.088483639 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:12:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:12:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3561235540' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:12:39 compute-0 fervent_tu[84448]: 
Nov 29 07:12:39 compute-0 fervent_tu[84448]: {"fsid":"14ff1f30-5059-58f1-9a23-69871bb275a1","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":82,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T07:11:14.129402+0000","services":{}},"progress_events":{}}
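This JSON is what the jq filter from the podman run above consumes: .osdmap.num_up_osds evaluates to 0 here, consistent with the HEALTH_WARN / TOO_FEW_OSDS check (0 OSDs against osd_pool_default_size 1), and the playbook is presumably polling that value until OSDs come up. A sketch, assuming the JSON has been saved to status.json:

    jq .osdmap.num_up_osds status.json    # prints 0 at this point in the deployment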
Nov 29 07:12:40 compute-0 systemd[1]: libpod-14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911.scope: Deactivated successfully.
Nov 29 07:12:40 compute-0 podman[84407]: 2025-11-29 07:12:40.008381111 +0000 UTC m=+0.794304132 container died 14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911 (image=quay.io/ceph/ceph:v18, name=fervent_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-70be5d77d8aa1000a7d824829bfe3c0aa412c037e808d1410b2c21261e2230a6-merged.mount: Deactivated successfully.
Nov 29 07:12:40 compute-0 podman[84570]: 2025-11-29 07:12:40.042653828 +0000 UTC m=+0.177890229 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:40 compute-0 podman[84407]: 2025-11-29 07:12:40.07903305 +0000 UTC m=+0.864956061 container remove 14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911 (image=quay.io/ceph/ceph:v18, name=fervent_tu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:12:40 compute-0 systemd[1]: libpod-conmon-14495b50c92ce8458a0f782c4928d3d0c6b7e662d94f08cdb1da943cc6c51911.scope: Deactivated successfully.
Nov 29 07:12:40 compute-0 sudo[84355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:40 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'devicehealth'
Nov 29 07:12:40 compute-0 sudo[84452]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 970a5aa1-851c-40d4-92a8-570273a101a0 does not exist
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mgr[83426]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:12:40 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 07:12:40 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]: 2025-11-29T07:12:40.427+0000 7f453d6c8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 07:12:40 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 44b965c0-1c68-4f7e-89e9-4794ee25b301 (Updating mgr deployment (-1 -> 1))
Nov 29 07:12:40 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.cguuye from compute-0 -- ports [8765]
Nov 29 07:12:40 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.cguuye from compute-0 -- ports [8765]
Nov 29 07:12:40 compute-0 sudo[84673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:40 compute-0 sudo[84673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:40 compute-0 sudo[84673]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:40 compute-0 ceph-mgr[75345]: [progress INFO root] Writing back 2 completed events
Nov 29 07:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:12:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 sudo[84698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:40 compute-0 sudo[84698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:40 compute-0 sudo[84698]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:40 compute-0 ceph-mon[75050]: Added host compute-0
Nov 29 07:12:40 compute-0 ceph-mon[75050]: Saving service mon spec with placement compute-0
Nov 29 07:12:40 compute-0 ceph-mon[75050]: Saving service mgr spec with placement compute-0
Nov 29 07:12:40 compute-0 ceph-mon[75050]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 07:12:40 compute-0 ceph-mon[75050]: Saving service osd.default_drive_group spec with placement compute-0
Nov 29 07:12:40 compute-0 ceph-mon[75050]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3561235540' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:40 compute-0 sudo[84723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:40 compute-0 sudo[84723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:40 compute-0 sudo[84723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:40 compute-0 sudo[84748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --name mgr.compute-0.cguuye --force --tcp-ports 8765
Nov 29 07:12:40 compute-0 sudo[84748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:40 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.cguuye for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:12:40 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 07:12:40 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 07:12:40 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]:   from numpy import show_config as show_numpy_config
Nov 29 07:12:40 compute-0 ceph-mgr[83426]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:12:40 compute-0 ceph-mgr[83426]: mgr[py] Loading python module 'influx'
Nov 29 07:12:40 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye[83421]: 2025-11-29T07:12:40.974+0000 7f453d6c8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 07:12:41 compute-0 podman[84844]: 2025-11-29 07:12:41.199574279 +0000 UTC m=+0.093444467 container stop 464ffb02bbd1bf5140b7c0dc0b539c4b0f50360ecaeb0539b4b8d6eeb7e73927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:41 compute-0 podman[84844]: 2025-11-29 07:12:41.230204077 +0000 UTC m=+0.124074335 container died 464ffb02bbd1bf5140b7c0dc0b539c4b0f50360ecaeb0539b4b8d6eeb7e73927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3f3be090a11a5d3efc06d9fb12e9b37ef9476585bdc0f77dac58fe4d47181b0-merged.mount: Deactivated successfully.
Nov 29 07:12:41 compute-0 podman[84844]: 2025-11-29 07:12:41.292363524 +0000 UTC m=+0.186233732 container remove 464ffb02bbd1bf5140b7c0dc0b539c4b0f50360ecaeb0539b4b8d6eeb7e73927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:41 compute-0 bash[84844]: ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-cguuye
Nov 29 07:12:41 compute-0 systemd[1]: ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mgr.compute-0.cguuye.service: Main process exited, code=exited, status=143/n/a
Nov 29 07:12:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:41 compute-0 systemd[1]: ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mgr.compute-0.cguuye.service: Failed with result 'exit-code'.
Nov 29 07:12:41 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.cguuye for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:12:41 compute-0 systemd[1]: ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mgr.compute-0.cguuye.service: Consumed 6.442s CPU time.
Nov 29 07:12:41 compute-0 systemd[1]: Reloading.
Nov 29 07:12:41 compute-0 ceph-mon[75050]: Removing daemon mgr.compute-0.cguuye from compute-0 -- ports [8765]
Nov 29 07:12:41 compute-0 systemd-sysv-generator[84934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:12:41 compute-0 systemd-rc-local-generator[84930]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:12:41 compute-0 sudo[84748]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:41 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.cguuye
Nov 29 07:12:41 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.cguuye
Nov 29 07:12:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.cguuye"} v 0) v1
Nov 29 07:12:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cguuye"}]: dispatch
Nov 29 07:12:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cguuye"}]': finished
Nov 29 07:12:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:12:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:41 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 44b965c0-1c68-4f7e-89e9-4794ee25b301 (Updating mgr deployment (-1 -> 1))
Nov 29 07:12:41 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 44b965c0-1c68-4f7e-89e9-4794ee25b301 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 29 07:12:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 07:12:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:41 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3220fe39-a680-49d5-ad2f-cf708b270bf7 does not exist
Nov 29 07:12:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:12:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:12:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:12:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:12:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:41 compute-0 sudo[84942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:41 compute-0 sudo[84942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:41 compute-0 sudo[84942]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:41 compute-0 sudo[84967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:41 compute-0 sudo[84967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:41 compute-0 sudo[84967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:42 compute-0 sudo[84992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:42 compute-0 sudo[84992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:42 compute-0 sudo[84992]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:42 compute-0 sudo[85017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:12:42 compute-0 sudo[85017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:42 compute-0 podman[85079]: 2025-11-29 07:12:42.518660814 +0000 UTC m=+0.050270332 container create 1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:42 compute-0 systemd[1]: Started libpod-conmon-1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010.scope.
Nov 29 07:12:42 compute-0 ceph-mon[75050]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cguuye"}]: dispatch
Nov 29 07:12:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cguuye"}]': finished
Nov 29 07:12:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:12:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:12:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:42 compute-0 podman[85079]: 2025-11-29 07:12:42.496047373 +0000 UTC m=+0.027656901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:42 compute-0 podman[85079]: 2025-11-29 07:12:42.627865934 +0000 UTC m=+0.159475492 container init 1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:12:42 compute-0 podman[85079]: 2025-11-29 07:12:42.639148562 +0000 UTC m=+0.170758040 container start 1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_neumann, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:42 compute-0 podman[85079]: 2025-11-29 07:12:42.643389167 +0000 UTC m=+0.174998745 container attach 1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:42 compute-0 vibrant_neumann[85095]: 167 167
Nov 29 07:12:42 compute-0 systemd[1]: libpod-1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010.scope: Deactivated successfully.
Nov 29 07:12:42 compute-0 conmon[85095]: conmon 1fb991335e5cd373097b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010.scope/container/memory.events
Nov 29 07:12:42 compute-0 podman[85079]: 2025-11-29 07:12:42.649130261 +0000 UTC m=+0.180739759 container died 1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_neumann, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:12:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-da1f5b9cd215ff847b028bd68aa579af49d602b669c1e451d8bf94d9eed94ade-merged.mount: Deactivated successfully.
Nov 29 07:12:42 compute-0 podman[85079]: 2025-11-29 07:12:42.70003256 +0000 UTC m=+0.231642048 container remove 1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:12:42 compute-0 systemd[1]: libpod-conmon-1fb991335e5cd373097bc7ab7db1dd31622f235953d25134c56a78f5f20d6010.scope: Deactivated successfully.
Nov 29 07:12:42 compute-0 podman[85118]: 2025-11-29 07:12:42.943255061 +0000 UTC m=+0.070795765 container create d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:42 compute-0 systemd[1]: Started libpod-conmon-d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f.scope.
Nov 29 07:12:43 compute-0 podman[85118]: 2025-11-29 07:12:42.915564938 +0000 UTC m=+0.043105682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bcaf985c273c8d7a95cbd499074f8fef931d01d9e5976330fba843cf39f1149/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bcaf985c273c8d7a95cbd499074f8fef931d01d9e5976330fba843cf39f1149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bcaf985c273c8d7a95cbd499074f8fef931d01d9e5976330fba843cf39f1149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bcaf985c273c8d7a95cbd499074f8fef931d01d9e5976330fba843cf39f1149/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bcaf985c273c8d7a95cbd499074f8fef931d01d9e5976330fba843cf39f1149/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:43 compute-0 podman[85118]: 2025-11-29 07:12:43.053242727 +0000 UTC m=+0.180783421 container init d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:12:43 compute-0 podman[85118]: 2025-11-29 07:12:43.061950317 +0000 UTC m=+0.189490981 container start d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dijkstra, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:12:43 compute-0 podman[85118]: 2025-11-29 07:12:43.065727101 +0000 UTC m=+0.193267785 container attach d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:43 compute-0 ceph-mon[75050]: Removing key for mgr.compute-0.cguuye
Nov 29 07:12:43 compute-0 ceph-mon[75050]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: --> relative data size: 1.0
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8cd0a453-4c8d-429b-b547-2404357db43c
Nov 29 07:12:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8cd0a453-4c8d-429b-b547-2404357db43c"} v 0) v1
Nov 29 07:12:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4188917289' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8cd0a453-4c8d-429b-b547-2404357db43c"}]: dispatch
Nov 29 07:12:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 07:12:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:12:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4188917289' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8cd0a453-4c8d-429b-b547-2404357db43c"}]': finished
Nov 29 07:12:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 07:12:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 29 07:12:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:12:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:12:44 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:12:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4188917289' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8cd0a453-4c8d-429b-b547-2404357db43c"}]: dispatch
Nov 29 07:12:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4188917289' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8cd0a453-4c8d-429b-b547-2404357db43c"}]': finished
Nov 29 07:12:44 compute-0 ceph-mon[75050]: osdmap e4: 1 total, 0 up, 1 in
Nov 29 07:12:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 29 07:12:44 compute-0 lvm[85196]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:12:44 compute-0 lvm[85196]: VG ceph_vg0 finished
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:12:44 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 29 07:12:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 07:12:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/455179360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:12:45 compute-0 brave_dijkstra[85134]:  stderr: got monmap epoch 1
Nov 29 07:12:45 compute-0 brave_dijkstra[85134]: --> Creating keyring file for osd.0
Nov 29 07:12:45 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 29 07:12:45 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 29 07:12:45 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 8cd0a453-4c8d-429b-b547-2404357db43c --setuser ceph --setgroup ceph
Nov 29 07:12:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:45 compute-0 ceph-mgr[75345]: [progress INFO root] Writing back 3 completed events
Nov 29 07:12:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:12:45 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 07:12:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:12:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/455179360' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:12:45 compute-0 ceph-mon[75050]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:12:46 compute-0 ceph-mon[75050]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 07:12:46 compute-0 ceph-mon[75050]: Cluster is now healthy
Nov 29 07:12:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:47 compute-0 ceph-mon[75050]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:47 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:45.424+0000 7f14157db740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:47 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:45.425+0000 7f14157db740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:47 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:45.425+0000 7f14157db740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:47 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:45.425+0000 7f14157db740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 29 07:12:47 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 07:12:47 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:12:47 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:12:48 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3596f226-aedb-4f7c-95c0-eea7b670ed3d
Nov 29 07:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d"} v 0) v1
Nov 29 07:12:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2002035222' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d"}]: dispatch
Nov 29 07:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 07:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:12:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2002035222' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d"}]': finished
Nov 29 07:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 07:12:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 07:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:12:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:12:48 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:12:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:12:48 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:12:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2002035222' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d"}]: dispatch
Nov 29 07:12:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2002035222' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d"}]': finished
Nov 29 07:12:49 compute-0 ceph-mon[75050]: osdmap e5: 2 total, 0 up, 2 in
Nov 29 07:12:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:12:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:12:49 compute-0 lvm[86145]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:12:49 compute-0 lvm[86145]: VG ceph_vg1 finished
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 07:12:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 07:12:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267496371' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]:  stderr: got monmap epoch 1
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: --> Creating keyring file for osd.1
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 07:12:49 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 3596f226-aedb-4f7c-95c0-eea7b670ed3d --setuser ceph --setgroup ceph
Nov 29 07:12:50 compute-0 ceph-mon[75050]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4267496371' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:12:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:49.856+0000 7f081c7a0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:49.856+0000 7f081c7a0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:49.856+0000 7f081c7a0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:49.856+0000 7f081c7a0740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:12:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:12:52 compute-0 ceph-mon[75050]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:12:52 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1ebe47c8-fe69-46c9-9931-3ba50f4dae48
Nov 29 07:12:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48"} v 0) v1
Nov 29 07:12:52 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2585141768' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48"}]: dispatch
Nov 29 07:12:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 07:12:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:12:53 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2585141768' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48"}]': finished
Nov 29 07:12:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 29 07:12:53 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 29 07:12:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:12:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:12:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:12:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:12:53 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:12:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:12:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:53 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:12:53 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:12:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2585141768' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48"}]: dispatch
Nov 29 07:12:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2585141768' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48"}]': finished
Nov 29 07:12:53 compute-0 ceph-mon[75050]: osdmap e6: 3 total, 0 up, 3 in
Nov 29 07:12:53 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:12:53 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:12:53 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:12:53 compute-0 lvm[87097]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:12:53 compute-0 lvm[87097]: VG ceph_vg2 finished
Nov 29 07:12:53 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 07:12:53 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 29 07:12:53 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 29 07:12:53 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:12:53 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:53 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 29 07:12:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 07:12:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/836465330' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:12:53 compute-0 brave_dijkstra[85134]:  stderr: got monmap epoch 1
Nov 29 07:12:54 compute-0 brave_dijkstra[85134]: --> Creating keyring file for osd.2
Nov 29 07:12:54 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 29 07:12:54 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 29 07:12:54 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 1ebe47c8-fe69-46c9-9931-3ba50f4dae48 --setuser ceph --setgroup ceph
Nov 29 07:12:54 compute-0 ceph-mon[75050]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/836465330' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 07:12:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:54.119+0000 7f0eb88df740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:54.119+0000 7f0eb88df740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:54.119+0000 7f0eb88df740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]:  stderr: 2025-11-29T07:12:54.120+0000 7f0eb88df740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 29 07:12:56 compute-0 brave_dijkstra[85134]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 29 07:12:56 compute-0 ceph-mon[75050]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:56 compute-0 systemd[1]: libpod-d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f.scope: Deactivated successfully.
Nov 29 07:12:56 compute-0 systemd[1]: libpod-d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f.scope: Consumed 6.490s CPU time.
Nov 29 07:12:56 compute-0 podman[85118]: 2025-11-29 07:12:56.545300055 +0000 UTC m=+13.672840749 container died d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bcaf985c273c8d7a95cbd499074f8fef931d01d9e5976330fba843cf39f1149-merged.mount: Deactivated successfully.
Nov 29 07:12:56 compute-0 podman[85118]: 2025-11-29 07:12:56.624538087 +0000 UTC m=+13.752078741 container remove d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:56 compute-0 systemd[1]: libpod-conmon-d5c1e5454ba8d0a27d46ff195fb7949c26dc34407b70970a408ab29465385d2f.scope: Deactivated successfully.
Nov 29 07:12:56 compute-0 sudo[85017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:56 compute-0 sudo[88030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:56 compute-0 sudo[88030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:56 compute-0 sudo[88030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:56 compute-0 sudo[88055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:56 compute-0 sudo[88055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:56 compute-0 sudo[88055]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:56 compute-0 sudo[88080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:56 compute-0 sudo[88080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:56 compute-0 sudo[88080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:56 compute-0 sudo[88105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:12:56 compute-0 sudo[88105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:57 compute-0 podman[88170]: 2025-11-29 07:12:57.324794476 +0000 UTC m=+0.045361196 container create 262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:12:57 compute-0 systemd[1]: Started libpod-conmon-262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831.scope.
Nov 29 07:12:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:12:57 compute-0 podman[88170]: 2025-11-29 07:12:57.304753035 +0000 UTC m=+0.025319805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:57 compute-0 podman[88170]: 2025-11-29 07:12:57.422100008 +0000 UTC m=+0.142666748 container init 262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:12:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:57 compute-0 podman[88170]: 2025-11-29 07:12:57.431238319 +0000 UTC m=+0.151805049 container start 262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_austin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:12:57 compute-0 podman[88170]: 2025-11-29 07:12:57.436946971 +0000 UTC m=+0.157513711 container attach 262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_austin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:57 compute-0 relaxed_austin[88186]: 167 167
Nov 29 07:12:57 compute-0 systemd[1]: libpod-262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831.scope: Deactivated successfully.
Nov 29 07:12:57 compute-0 podman[88170]: 2025-11-29 07:12:57.438293573 +0000 UTC m=+0.158860303 container died 262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e91a180b061e87a3f30fc8eaf6888e64be5575cfde794758571aa5babc1c90a-merged.mount: Deactivated successfully.
Nov 29 07:12:57 compute-0 podman[88170]: 2025-11-29 07:12:57.484908936 +0000 UTC m=+0.205475656 container remove 262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_austin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:12:57 compute-0 systemd[1]: libpod-conmon-262275e2a49c99a6cc25f3e019308cc0217ac7030b14c673ae72259e5b558831.scope: Deactivated successfully.
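The short-lived relaxed_austin container above exists only to print "167 167": a uid/gid probe of the pinned ceph image, matching the "set uid:gid to 167:167 (ceph:ceph)" that the OSD itself reports later in this log. A minimal sketch of validating such probe output (check_probe is a hypothetical helper, not cephadm code):

CEPH_UID = CEPH_GID = 167  # ceph:ceph inside the quay.io/ceph/ceph image

def check_probe(stdout_line: str) -> bool:
    # stdout_line is the container's output as journald captured it, e.g. "167 167".
    uid, gid = (int(tok) for tok in stdout_line.split())
    return (uid, gid) == (CEPH_UID, CEPH_GID)

assert check_probe("167 167")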
Nov 29 07:12:57 compute-0 podman[88209]: 2025-11-29 07:12:57.625869505 +0000 UTC m=+0.019459245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:57 compute-0 podman[88209]: 2025-11-29 07:12:57.90293648 +0000 UTC m=+0.296526210 container create 71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:12:57 compute-0 systemd[1]: Started libpod-conmon-71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7.scope.
Nov 29 07:12:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fede9a7366a5e420b0c4b12f535c843d06b0296fa2415d9579e2e36886d0102/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fede9a7366a5e420b0c4b12f535c843d06b0296fa2415d9579e2e36886d0102/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fede9a7366a5e420b0c4b12f535c843d06b0296fa2415d9579e2e36886d0102/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fede9a7366a5e420b0c4b12f535c843d06b0296fa2415d9579e2e36886d0102/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:12:58 compute-0 podman[88209]: 2025-11-29 07:12:58.29903015 +0000 UTC m=+0.692619960 container init 71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:58 compute-0 podman[88209]: 2025-11-29 07:12:58.31144117 +0000 UTC m=+0.705030930 container start 71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:12:58 compute-0 podman[88209]: 2025-11-29 07:12:58.316690281 +0000 UTC m=+0.710280051 container attach 71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:12:58 compute-0 ceph-mon[75050]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:59 compute-0 brave_buck[88225]: {
Nov 29 07:12:59 compute-0 brave_buck[88225]:     "0": [
Nov 29 07:12:59 compute-0 brave_buck[88225]:         {
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "devices": [
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "/dev/loop3"
Nov 29 07:12:59 compute-0 brave_buck[88225]:             ],
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_name": "ceph_lv0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_size": "21470642176",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "name": "ceph_lv0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "tags": {
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cluster_name": "ceph",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.crush_device_class": "",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.encrypted": "0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osd_id": "0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.type": "block",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.vdo": "0"
Nov 29 07:12:59 compute-0 brave_buck[88225]:             },
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "type": "block",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "vg_name": "ceph_vg0"
Nov 29 07:12:59 compute-0 brave_buck[88225]:         }
Nov 29 07:12:59 compute-0 brave_buck[88225]:     ],
Nov 29 07:12:59 compute-0 brave_buck[88225]:     "1": [
Nov 29 07:12:59 compute-0 brave_buck[88225]:         {
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "devices": [
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "/dev/loop4"
Nov 29 07:12:59 compute-0 brave_buck[88225]:             ],
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_name": "ceph_lv1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_size": "21470642176",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "name": "ceph_lv1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "tags": {
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cluster_name": "ceph",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.crush_device_class": "",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.encrypted": "0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osd_id": "1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.type": "block",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.vdo": "0"
Nov 29 07:12:59 compute-0 brave_buck[88225]:             },
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "type": "block",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "vg_name": "ceph_vg1"
Nov 29 07:12:59 compute-0 brave_buck[88225]:         }
Nov 29 07:12:59 compute-0 brave_buck[88225]:     ],
Nov 29 07:12:59 compute-0 brave_buck[88225]:     "2": [
Nov 29 07:12:59 compute-0 brave_buck[88225]:         {
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "devices": [
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "/dev/loop5"
Nov 29 07:12:59 compute-0 brave_buck[88225]:             ],
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_name": "ceph_lv2",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_size": "21470642176",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "name": "ceph_lv2",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "tags": {
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.cluster_name": "ceph",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.crush_device_class": "",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.encrypted": "0",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osd_id": "2",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.type": "block",
Nov 29 07:12:59 compute-0 brave_buck[88225]:                 "ceph.vdo": "0"
Nov 29 07:12:59 compute-0 brave_buck[88225]:             },
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "type": "block",
Nov 29 07:12:59 compute-0 brave_buck[88225]:             "vg_name": "ceph_vg2"
Nov 29 07:12:59 compute-0 brave_buck[88225]:         }
Nov 29 07:12:59 compute-0 brave_buck[88225]:     ]
Nov 29 07:12:59 compute-0 brave_buck[88225]: }
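The JSON that brave_buck prints is a ceph-volume lvm listing keyed by OSD id; each entry carries its metadata twice, as the flattened comma-separated lv_tags string and as the structured tags mapping. A sketch of extracting the fields that matter for activation, run against an abridged one-OSD copy of the listing above (the abridgement is ours; the keys are verbatim):

import json

listing = json.loads("""
{
  "0": [{
    "lv_path": "/dev/ceph_vg0/ceph_lv0",
    "lv_size": "21470642176",
    "tags": {
      "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
      "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
      "ceph.osd_id": "0"
    }
  }]
}
""")

for osd_id, lvs in listing.items():
    for lv in lvs:
        tags = lv["tags"]
        # ceph.osd_fsid is the per-OSD uuid (presumably what the
        # "ceph-volume activate --osd-uuid" option further down expects).
        print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"])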
Nov 29 07:12:59 compute-0 systemd[1]: libpod-71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7.scope: Deactivated successfully.
Nov 29 07:12:59 compute-0 podman[88209]: 2025-11-29 07:12:59.076718788 +0000 UTC m=+1.470308558 container died 71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fede9a7366a5e420b0c4b12f535c843d06b0296fa2415d9579e2e36886d0102-merged.mount: Deactivated successfully.
Nov 29 07:12:59 compute-0 podman[88209]: 2025-11-29 07:12:59.143175863 +0000 UTC m=+1.536765593 container remove 71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:12:59 compute-0 systemd[1]: libpod-conmon-71e1c92d34622f98936812469799a64afa176ccdf56aa29973723c60e3d0c2b7.scope: Deactivated successfully.
Nov 29 07:12:59 compute-0 sudo[88105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 07:12:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 07:12:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:12:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:59 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 29 07:12:59 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 29 07:12:59 compute-0 sudo[88247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:59 compute-0 sudo[88247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:59 compute-0 sudo[88247]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:59 compute-0 sudo[88272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:12:59 compute-0 sudo[88272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:59 compute-0 sudo[88272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:12:59 compute-0 sudo[88297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:12:59 compute-0 sudo[88297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:12:59 compute-0 sudo[88297]: pam_unix(sudo:session): session closed for user root
Nov 29 07:12:59 compute-0 sudo[88322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:12:59 compute-0 sudo[88322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
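Each daemon deployment in this log follows the same shape: the mgr drops a content-addressed cephadm script under /var/lib/ceph/<fsid>/ and re-invokes it through sudo with the pinned image digest and an 895-second timeout; the daemon spec itself is not visible in the journal. Reassembled as an argv list (purely illustrative; this is not how the mgr actually spawns it):

fsid = "14ff1f30-5059-58f1-9a23-69871bb275a1"
image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

argv = ["sudo", "/bin/python3", cephadm,
        "--image", image, "--timeout", "895",
        "_orch", "deploy", "--fsid", fsid]
print(" ".join(argv))  # matches the COMMAND= field in the sudo line above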
Nov 29 07:12:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 07:12:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:12:59 compute-0 podman[88386]: 2025-11-29 07:12:59.90713351 +0000 UTC m=+0.053070580 container create 0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_joliot, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:12:59 compute-0 systemd[1]: Started libpod-conmon-0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100.scope.
Nov 29 07:12:59 compute-0 podman[88386]: 2025-11-29 07:12:59.881115374 +0000 UTC m=+0.027052494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:12:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:12:59 compute-0 podman[88386]: 2025-11-29 07:12:59.993880328 +0000 UTC m=+0.139817398 container init 0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_joliot, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:00 compute-0 podman[88386]: 2025-11-29 07:13:00.001086999 +0000 UTC m=+0.147024069 container start 0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:13:00 compute-0 podman[88386]: 2025-11-29 07:13:00.004586189 +0000 UTC m=+0.150523259 container attach 0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_joliot, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:13:00 compute-0 fervent_joliot[88402]: 167 167
Nov 29 07:13:00 compute-0 podman[88386]: 2025-11-29 07:13:00.006675346 +0000 UTC m=+0.152612416 container died 0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:00 compute-0 systemd[1]: libpod-0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100.scope: Deactivated successfully.
Nov 29 07:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-01560ea595d649168ddedfd913bc3bde8b30366959c4f4e0f1572635998fa540-merged.mount: Deactivated successfully.
Nov 29 07:13:00 compute-0 podman[88386]: 2025-11-29 07:13:00.054573877 +0000 UTC m=+0.200510947 container remove 0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_joliot, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:13:00 compute-0 systemd[1]: libpod-conmon-0163965dbd8d40b143547ddbe446998b2543ef07625db1c29bc34bde2a43a100.scope: Deactivated successfully.
Nov 29 07:13:00 compute-0 podman[88436]: 2025-11-29 07:13:00.362682401 +0000 UTC m=+0.062021942 container create 09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:00 compute-0 systemd[1]: Started libpod-conmon-09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15.scope.
Nov 29 07:13:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c5004ce72cde1354e021709a3863278f19bb01d77ac20cac70489ad0c6c660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c5004ce72cde1354e021709a3863278f19bb01d77ac20cac70489ad0c6c660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c5004ce72cde1354e021709a3863278f19bb01d77ac20cac70489ad0c6c660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c5004ce72cde1354e021709a3863278f19bb01d77ac20cac70489ad0c6c660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5c5004ce72cde1354e021709a3863278f19bb01d77ac20cac70489ad0c6c660/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:00 compute-0 podman[88436]: 2025-11-29 07:13:00.337181379 +0000 UTC m=+0.036520980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:00 compute-0 podman[88436]: 2025-11-29 07:13:00.447484659 +0000 UTC m=+0.146824280 container init 09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:13:00 compute-0 podman[88436]: 2025-11-29 07:13:00.455373532 +0000 UTC m=+0.154713043 container start 09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:13:00 compute-0 podman[88436]: 2025-11-29 07:13:00.460450465 +0000 UTC m=+0.159790046 container attach 09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:13:00 compute-0 ceph-mon[75050]: Deploying daemon osd.0 on compute-0
Nov 29 07:13:00 compute-0 ceph-mon[75050]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:01 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test[88453]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 07:13:01 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test[88453]:                             [--no-systemd] [--no-tmpfs]
Nov 29 07:13:01 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test[88453]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 07:13:01 compute-0 systemd[1]: libpod-09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15.scope: Deactivated successfully.
Nov 29 07:13:01 compute-0 podman[88436]: 2025-11-29 07:13:01.080673785 +0000 UTC m=+0.780013296 container died 09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c5004ce72cde1354e021709a3863278f19bb01d77ac20cac70489ad0c6c660-merged.mount: Deactivated successfully.
Nov 29 07:13:01 compute-0 podman[88436]: 2025-11-29 07:13:01.146280232 +0000 UTC m=+0.845619743 container remove 09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:01 compute-0 systemd[1]: libpod-conmon-09bdcede681c920f5eca59311c6614108e1f53630974f48bc670a9a651294e15.scope: Deactivated successfully.
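The osd-0-activate-test container dies on the argparse rejection above: --bad-option is simply not in ceph-volume activate's option set, so the process exits before touching any device. The usage/error shape is reproducible with a stub parser built only from the options the logged usage text lists (a stub, not ceph-volume's real parser):

import argparse

parser = argparse.ArgumentParser(prog="ceph-volume activate")
parser.add_argument("--osd-id")
parser.add_argument("--osd-uuid")
parser.add_argument("--no-systemd", action="store_true")
parser.add_argument("--no-tmpfs", action="store_true")

try:
    parser.parse_args(["--bad-option"])
except SystemExit as exc:
    # argparse prints the same usage block plus
    # "error: unrecognized arguments: --bad-option", then exits 2.
    print("exit status:", exc.code)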
Nov 29 07:13:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:01 compute-0 systemd[1]: Reloading.
Nov 29 07:13:01 compute-0 systemd-rc-local-generator[88512]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:01 compute-0 systemd-sysv-generator[88518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:01 compute-0 systemd[1]: Reloading.
Nov 29 07:13:01 compute-0 systemd-rc-local-generator[88556]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:01 compute-0 systemd-sysv-generator[88562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:02 compute-0 systemd[1]: Starting Ceph osd.0 for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:13:02 compute-0 podman[88617]: 2025-11-29 07:13:02.356377737 +0000 UTC m=+0.057480704 container create b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:02 compute-0 podman[88617]: 2025-11-29 07:13:02.325295988 +0000 UTC m=+0.026399005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9d5e22caa165c52ebc1b1d84893f4e53355d84a3dac3b8b4ec28bca9756951/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9d5e22caa165c52ebc1b1d84893f4e53355d84a3dac3b8b4ec28bca9756951/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9d5e22caa165c52ebc1b1d84893f4e53355d84a3dac3b8b4ec28bca9756951/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9d5e22caa165c52ebc1b1d84893f4e53355d84a3dac3b8b4ec28bca9756951/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e9d5e22caa165c52ebc1b1d84893f4e53355d84a3dac3b8b4ec28bca9756951/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:02 compute-0 podman[88617]: 2025-11-29 07:13:02.45521689 +0000 UTC m=+0.156319937 container init b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:02 compute-0 podman[88617]: 2025-11-29 07:13:02.461209275 +0000 UTC m=+0.162312262 container start b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 07:13:02 compute-0 podman[88617]: 2025-11-29 07:13:02.465679731 +0000 UTC m=+0.166782708 container attach b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:02 compute-0 ceph-mon[75050]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate[88632]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:13:03 compute-0 bash[88617]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:13:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate[88632]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:13:03 compute-0 bash[88617]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:13:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate[88632]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:13:03 compute-0 bash[88617]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 07:13:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate[88632]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:13:03 compute-0 bash[88617]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 07:13:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate[88632]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:03 compute-0 bash[88617]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate[88632]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:13:03 compute-0 bash[88617]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 07:13:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate[88632]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 07:13:03 compute-0 bash[88617]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 07:13:03 compute-0 systemd[1]: libpod-b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170.scope: Deactivated successfully.
Nov 29 07:13:03 compute-0 systemd[1]: libpod-b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170.scope: Consumed 1.071s CPU time.
Nov 29 07:13:03 compute-0 podman[88617]: 2025-11-29 07:13:03.514391708 +0000 UTC m=+1.215494675 container died b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e9d5e22caa165c52ebc1b1d84893f4e53355d84a3dac3b8b4ec28bca9756951-merged.mount: Deactivated successfully.
Nov 29 07:13:03 compute-0 podman[88617]: 2025-11-29 07:13:03.574600496 +0000 UTC m=+1.275703463 container remove b43a6e137873637b50020e9b4cd2f70092862ce6add78d99ad13093154052170 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
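The paired "Running command:" lines above (each emitted once under the container name and once by the unit's bash wrapper) spell out the whole raw-activate sequence: hand the OSD dir to ceph:ceph, prime it from the block device, fix device-node ownership, and point ceph-0/block at the LV. Replayed literally as subprocess calls, root-only and destructive on a real host (a sketch of the sequence, not a tool):

import subprocess

OSD_DIR = "/var/lib/ceph/osd/ceph-0"
DEV = "/dev/mapper/ceph_vg0-ceph_lv0"

for cmd in [
    ["chown", "-R", "ceph:ceph", OSD_DIR],
    ["ceph-bluestore-tool", "prime-osd-dir", "--path", OSD_DIR,
     "--no-mon-config", "--dev", DEV],
    ["chown", "-h", "ceph:ceph", DEV],   # -h: chown the /dev/mapper symlink itself
    ["chown", "-R", "ceph:ceph", "/dev/dm-0"],
    ["ln", "-s", DEV, OSD_DIR + "/block"],
    ["chown", "-R", "ceph:ceph", OSD_DIR],
]:
    subprocess.run(cmd, check=True)      # one entry per logged "Running command:" line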
Nov 29 07:13:03 compute-0 podman[88812]: 2025-11-29 07:13:03.76158614 +0000 UTC m=+0.036559501 container create 9e203bb2012357f685d3a92116fa2f82a2b0bf3d53f620d86c98c827de1eec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9054f3889d4073dbcb6c414e87b62cec47b95f3802627f1139fd900a8039e80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9054f3889d4073dbcb6c414e87b62cec47b95f3802627f1139fd900a8039e80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9054f3889d4073dbcb6c414e87b62cec47b95f3802627f1139fd900a8039e80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9054f3889d4073dbcb6c414e87b62cec47b95f3802627f1139fd900a8039e80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9054f3889d4073dbcb6c414e87b62cec47b95f3802627f1139fd900a8039e80/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:03 compute-0 podman[88812]: 2025-11-29 07:13:03.814989875 +0000 UTC m=+0.089963266 container init 9e203bb2012357f685d3a92116fa2f82a2b0bf3d53f620d86c98c827de1eec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:03 compute-0 podman[88812]: 2025-11-29 07:13:03.825410954 +0000 UTC m=+0.100384315 container start 9e203bb2012357f685d3a92116fa2f82a2b0bf3d53f620d86c98c827de1eec96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:03 compute-0 bash[88812]: 9e203bb2012357f685d3a92116fa2f82a2b0bf3d53f620d86c98c827de1eec96
Nov 29 07:13:03 compute-0 podman[88812]: 2025-11-29 07:13:03.746711927 +0000 UTC m=+0.021685318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:03 compute-0 systemd[1]: Started Ceph osd.0 for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:13:03 compute-0 sudo[88322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:03 compute-0 ceph-osd[88831]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:13:03 compute-0 ceph-osd[88831]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 07:13:03 compute-0 ceph-osd[88831]: pidfile_write: ignore empty --pid-file
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x558198ab3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x558198ab3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x558198ab3800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x558198ab3800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x5581998f5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x5581998f5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x5581998f5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x5581998f5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x5581998f5800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:13:03 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:03 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 07:13:03 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 07:13:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:03 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 29 07:13:03 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 29 07:13:03 compute-0 ceph-osd[88831]: bdev(0x558198ab3800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:13:03 compute-0 sudo[88844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:03 compute-0 sudo[88844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:03 compute-0 sudo[88844]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:04 compute-0 sudo[88871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:04 compute-0 sudo[88871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:04 compute-0 sudo[88871]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:04 compute-0 sudo[88896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:04 compute-0 sudo[88896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:04 compute-0 sudo[88896]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:04 compute-0 sudo[88921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:13:04 compute-0 sudo[88921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
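The sudo command two lines up is how cephadm-managed deployment actually reaches the host: the mgr ships a cephadm script under /var/lib/ceph/<fsid>/ and runs it as root for each step. The same invocation, reassembled as an argv list for readability; all values are verbatim from the log, and the daemon spec itself is supplied separately by the mgr and is not reconstructed here:

    # The cephadm invocation from the sudo log, as an argv list (values verbatim).
    fsid = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    argv = [
        "/bin/python3",
        f"/var/lib/ceph/{fsid}/cephadm."
        "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image", image,        # container image pinned by digest
        "--timeout", "895",      # per-step timeout, seconds
        "_orch", "deploy",       # internal subcommand driven by the mgr
        "--fsid", fsid,
    ]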
Nov 29 07:13:04 compute-0 ceph-osd[88831]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 29 07:13:04 compute-0 ceph-osd[88831]: load: jerasure load: lrc 
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:13:04 compute-0 podman[88993]: 2025-11-29 07:13:04.520178601 +0000 UTC m=+0.042400450 container create 43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:13:04 compute-0 systemd[1]: Started libpod-conmon-43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d.scope.
Nov 29 07:13:04 compute-0 ceph-mon[75050]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 07:13:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:04 compute-0 podman[88993]: 2025-11-29 07:13:04.501697842 +0000 UTC m=+0.023919681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:04 compute-0 podman[88993]: 2025-11-29 07:13:04.602450354 +0000 UTC m=+0.124672203 container init 43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:13:04 compute-0 podman[88993]: 2025-11-29 07:13:04.612352318 +0000 UTC m=+0.134574137 container start 43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:04 compute-0 podman[88993]: 2025-11-29 07:13:04.616031257 +0000 UTC m=+0.138253076 container attach 43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:13:04 compute-0 silly_lichterman[89008]: 167 167
Nov 29 07:13:04 compute-0 systemd[1]: libpod-43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d.scope: Deactivated successfully.
Nov 29 07:13:04 compute-0 conmon[89008]: conmon 43404a1b2f917165364f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d.scope/container/memory.events
Nov 29 07:13:04 compute-0 podman[88993]: 2025-11-29 07:13:04.620537395 +0000 UTC m=+0.142759224 container died 43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lichterman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:13:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-158109d36194db1e2db5d020f4b47d8748908e838a8803b6e88a97ff1a9cf1eb-merged.mount: Deactivated successfully.
Nov 29 07:13:04 compute-0 podman[88993]: 2025-11-29 07:13:04.664454704 +0000 UTC m=+0.186676523 container remove 43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:13:04 compute-0 systemd[1]: libpod-conmon-43404a1b2f917165364ff543e7b848b6e31ea82d9b91d2fa3dd69df8604be52d.scope: Deactivated successfully.
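The podman lines above trace a short-lived helper container: create, init, start, attach, then died and remove within roughly 150 ms, its only output being "167 167" (the ceph uid:gid also seen in the osd's set uid:gid line). This appears consistent with cephadm probing the image for the ceph user's uid/gid. A sketch of the same one-shot pattern; the inner command is a guess, since the log records only its output:

    # One-shot probe container in the same style; the inner command is hypothetical.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    inner = ["stat", "-c", "%u %g", "/var/lib/ceph"]   # guessed uid/gid probe
    out = subprocess.run(["podman", "run", "--rm", image, *inner],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # expected "167 167" on this image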
Nov 29 07:13:04 compute-0 ceph-osd[88831]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 07:13:04 compute-0 ceph-osd[88831]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
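The two mClockScheduler numbers above fit together: 157286400 bytes/s is exactly 150 MiB/s, and dividing the per-shard capacity by the per-IO cost gives roughly 315 IOPS, which matches the documented default of osd_mclock_max_capacity_iops_hdd for rotational devices. A quick check; the option name is standard Ceph config, and the relationship is inferred from the logged values rather than quoted from source:

    # Sanity-check the mClock capacity parameters logged above.
    capacity = 157286400.00        # osd_bandwidth_capacity_per_shard, bytes/s
    cost_per_io = 499321.90        # osd_bandwidth_cost_per_io, bytes/io
    print(capacity / 2 ** 20)      # 150.0 -> 150 MiB/s sequential bandwidth
    print(capacity / cost_per_io)  # ~315.0 -> implied IOPS capacity (HDD default: 315)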
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199976c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluefs mount
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluefs mount shared_bdev_used = 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
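_prepare_db_environment budgets 20397110067 bytes for both db and db.slow, which is 95% of the 21470642176-byte device opened earlier. A quick check; the 95% factor is inferred from the two logged numbers, not read out of Ceph source:

    # db/db.slow budget vs. raw device size, from the two log lines above.
    bdev_size = 21470642176
    db_budget = 20397110067
    print(db_budget / bdev_size)            # 0.9499999... i.e. 95% of the device
    assert int(bdev_size * 0.95) == db_budget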
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Git sha 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: DB SUMMARY
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: DB Session ID:  B22EKDZWRF4GKB1S5WUX
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                                     Options.env: 0x558199947c70
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                                Options.info_log: 0x558198b3a8a0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.write_buffer_manager: 0x558199a50460
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.row_cache: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                              Options.wal_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.wal_compression: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Compression algorithms supported:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kZSTD supported: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
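The BinnedLRUCache capacity of 483183820 in the table_factory dump above is the kv share of the BlueStore cache: 0.45 of the 1073741824-byte cache_size from the earlier _set_cache_sizes line, truncated to an integer. Checking the linkage, which is inferred from the matching numbers:

    # block_cache capacity vs. the BlueStore kv cache ratio.
    cache_size = 1073741824    # bluestore _set_cache_sizes cache_size
    kv_ratio = 0.45            # the "kv 0.45" slice from the same line
    print(int(cache_size * kv_ratio))   # 483183820, the logged BinnedLRUCache capacity

The same cache object (note the identical block_cache pointer 0x558198b271f0) is shared by the column families that follow, so the per-family dumps repeat the same capacity rather than each owning 483 MB.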
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
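The write-buffer settings repeated in each dump above imply a sizable per-column-family memtable budget. As a rough, editor-added back-of-the-envelope in Python (all values copied from the log; nothing below is Ceph or RocksDB code), the worst-case memtable footprint and the approximate amount of data merged per flush work out as follows:

    # Memtable budget for one column family, using values from the dump above.
    write_buffer_size = 16777216          # Options.write_buffer_size (16 MiB)
    max_write_buffer_number = 64          # Options.max_write_buffer_number
    min_write_buffer_number_to_merge = 6  # Options.min_write_buffer_number_to_merge

    # Worst case: all memtables for this column family allocated at once.
    worst_case = write_buffer_size * max_write_buffer_number
    # Roughly this much data accumulates before immutable memtables are flushed.
    flush_batch = write_buffer_size * min_write_buffer_number_to_merge

    print(f"worst-case memtable footprint: {worst_case / 2**20:.0f} MiB")   # 1024 MiB
    print(f"approx. data merged per flush: {flush_batch / 2**20:.0f} MiB")  # 96 MiB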
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
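With level_compaction_dynamic_level_bytes reported as 0, the level size targets follow from max_bytes_for_level_base and the multiplier. A short, editor-added sketch of the arithmetic (an approximation of RocksDB's sizing, not its actual code; the max_bytes_for_level_multiplier_addtl entries are all 1 here, so they drop out):

    # Approximate per-level size targets implied by the options dump above.
    base = 1073741824   # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0    # Options.max_bytes_for_level_multiplier
    num_levels = 7      # Options.num_levels

    for level in range(1, num_levels):
        target = base * multiplier ** (level - 1)
        print(f"L{level} target: {target / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB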
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b27090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b27090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b27090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fbad0b6-437e-4fca-b214-3adad3d96e3b
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400384740900, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400384741176, "job": 1, "event": "recovery_finished"}
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 29 07:13:04 compute-0 ceph-osd[88831]: freelist init
Nov 29 07:13:04 compute-0 ceph-osd[88831]: freelist _read_cfg
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:13:04 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bluefs umount
Nov 29 07:13:04 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 07:13:04 compute-0 podman[89233]: 2025-11-29 07:13:04.937212401 +0000 UTC m=+0.049603571 container create e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:13:04 compute-0 systemd[1]: Started libpod-conmon-e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8.scope.
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bdev(0x558199977400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluefs mount
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluefs mount shared_bdev_used = 4718592
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Git sha 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: DB SUMMARY
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: DB Session ID:  B22EKDZWRF4GKB1S5WUW
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                                     Options.env: 0x558199af8b60
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:13:05 compute-0 podman[89233]: 2025-11-29 07:13:04.916556582 +0000 UTC m=+0.028947832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                                Options.info_log: 0x558198b3a620
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.write_buffer_manager: 0x558199a506e0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.row_cache: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                              Options.wal_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.wal_compression: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:13:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Compression algorithms supported:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kZSTD supported: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: DMutex implementation: pthread_mutex_t
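The "Compression algorithms supported" block above enumerates which codecs were compiled into this librocksdb build; kZSTD reads 0 simply because this particular build lacks ZSTD, not because of any OSD setting. A minimal standalone probe of the same facts, assuming only the public helper rocksdb::GetSupportedCompressions() from rocksdb/convenience.h (the name table below is our own illustration, not a RocksDB API):

    // probe_compression.cc: list codecs compiled into librocksdb.
    #include <cstdio>
    #include <rocksdb/convenience.h>

    // Local, illustrative name lookup for a few common codecs.
    static const char* Name(rocksdb::CompressionType t) {
      switch (t) {
        case rocksdb::kSnappyCompression: return "Snappy";
        case rocksdb::kZlibCompression:   return "Zlib";
        case rocksdb::kLZ4Compression:    return "LZ4";
        case rocksdb::kLZ4HCCompression:  return "LZ4HC";
        case rocksdb::kZSTD:              return "ZSTD";
        default:                          return "other";
      }
    }

    int main() {
      // Only codecs actually compiled in are returned, matching the
      // "supported: 1" lines in the OSD log.
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions())
        std::printf("%s supported\n", Name(t));
      return 0;
    }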
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8c755c8ac57088c1519c0adbba0599b651fa9357d8c8e40751975e566c671/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8c755c8ac57088c1519c0adbba0599b651fa9357d8c8e40751975e566c671/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8c755c8ac57088c1519c0adbba0599b651fa9357d8c8e40751975e566c671/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
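The [default] column family dump above amounts to BlueStore's block-based table setup: 4 KiB blocks, index and filter blocks kept in a roughly 460 MiB shared cache (Ceph's own BinnedLRUCache implementation of the rocksdb::Cache interface; note that every column family in this log prints the same block_cache pointer 0x558198b271f0), a whole-key bloom filter, and format_version 5. A hedged sketch of the closest stock-RocksDB equivalent, substituting upstream NewLRUCache for Ceph's BinnedLRUCache, with values copied from the log (an approximation, not the OSD's actual code path):

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options OsdLikeTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                      // block_size: 4096
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.whole_key_filtering = true;             // whole_key_filtering: 1
      t.format_version = 5;                     // format_version: 5
      t.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10));   // filter_policy: bloomfilter
      // One cache shared by all column families; capacity, shard bits and
      // high-pri ratio mirror the block_cache_options lines above.
      t.block_cache = rocksdb::NewLRUCache(
          483183820 /* capacity */, 4 /* num_shard_bits */,
          false /* strict_capacity_limit */, 0.0 /* high_pri_pool_ratio */);
      rocksdb::Options o;
      o.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return o;
    }

Sharing a single cache object across shards is what keeps total block-cache memory bounded at the logged capacity no matter how many column families BlueStore creates.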
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8c755c8ac57088c1519c0adbba0599b651fa9357d8c8e40751975e566c671/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8c755c8ac57088c1519c0adbba0599b651fa9357d8c8e40751975e566c671/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
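Each m-* shard repeats the same LSM tuning: 16 MiB memtables, up to 64 of them, six merged per flush (so one flush writes roughly 96 MiB of batched metadata), LZ4 at every level, a 1 GiB L1 growing by 8x per level, and L0 back-pressure at 8/20/36 files. Presumably the large merge factor exists to turn bursty omap writes into fewer, bigger L0 files. A sketch mapping the logged values onto the real public rocksdb::ColumnFamilyOptions fields (the wrapper function name is ours):

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    rocksdb::ColumnFamilyOptions OsdShardLikeCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;          // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;  // ~96 MiB per flush
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;      // 67108864
      cf.max_bytes_for_level_base = 1 << 30;    // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                         // 30 days
      // Tombstone-driven compaction, as logged under
      // Options.table_properties_collectors:
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              32768 /* sliding window */, 16384 /* deletion trigger */,
              0.0 /* deletion ratio */));
      return cf;
    }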
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
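The dump keeps cycling because the OSD opens one RocksDB instance with many column families ([default] plus the m-* and p-* shards seen here, and more below); in BlueStore the shard layout comes from the bluestore_rocksdb_cfs option, and omap keys are hashed across the shards. The RocksDB-level mechanics, reduced to a minimal sketch (the path and the reuse of one ColumnFamilyOptions for every shard are illustrative; error handling is trimmed to a status check):

    #include <cassert>
    #include <vector>
    #include <rocksdb/db.h>

    int main() {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;

      rocksdb::ColumnFamilyOptions cf_opts;  // e.g. OsdShardLikeCfOptions()
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
      cfs.emplace_back(rocksdb::kDefaultColumnFamilyName, cf_opts);
      for (const char* name : {"m-0", "m-1", "m-2", "p-0"})
        cfs.emplace_back(name, cf_opts);

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, "/tmp/osd-like-db", cfs, &handles, &db);
      assert(s.ok());
      // Reads and writes then target a shard handle, e.g. handles[1] == m-0.
      for (auto* h : handles) delete h;
      delete db;
      return 0;
    }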
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
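On the Ceph side these shard options originate as a single key=value option string (the bluestore_rocksdb_options config, adjustable with ceph config set osd bluestore_rocksdb_options "...") that BlueStore hands to RocksDB's options-from-string parser. A sketch of that parsing step, assuming the ConfigOptions-based overload of GetColumnFamilyOptionsFromString from rocksdb/convenience.h; the string below copies a few values from this log for illustration and is not any release's actual default:

    #include <cassert>
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::ColumnFamilyOptions base, parsed;
      rocksdb::ConfigOptions cfg;
      // Same grammar BlueStore feeds through bluestore_rocksdb_options.
      rocksdb::Status s = rocksdb::GetColumnFamilyOptionsFromString(
          cfg, base,
          "compression=kLZ4Compression,write_buffer_size=16777216,"
          "max_write_buffer_number=64,min_write_buffer_number_to_merge=6",
          &parsed);
      assert(s.ok());
      assert(parsed.write_buffer_size == 16777216);
      return 0;
    }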
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 podman[89233]: 2025-11-29 07:13:05.036487805 +0000 UTC m=+0.148878995 container init e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:13:05 compute-0 podman[89233]: 2025-11-29 07:13:05.047520882 +0000 UTC m=+0.159912052 container start e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b27090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b27090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
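[Annotation] Worth noting in the dump above: write_buffer_size (16 MiB) together with max_write_buffer_number (64) bounds how much memtable memory this column family may pin, and min_write_buffer_number_to_merge (6) sets roughly how much data each flush writes out. The arithmetic, as a quick check:

MiB = 1 << 20
write_buffer_size = 16 * MiB     # Options.write_buffer_size = 16777216
max_write_buffer_number = 64     # Options.max_write_buffer_number
min_merge = 6                    # Options.min_write_buffer_number_to_merge

# Upper bound on memtable memory this column family may pin:
print(write_buffer_size * max_write_buffer_number / MiB, "MiB")   # 1024.0 MiB
# Approximate data merged per flush once enough immutable memtables accumulate:
print(write_buffer_size * min_merge / MiB, "MiB")                 # 96.0 MiB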
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558198b3a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x558198b27090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
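[Annotation] The [O-2] family points at its own cache instance (block_cache: 0x558198b27090). Its capacity of 536870912 bytes is 512 MiB, split across 2**num_shard_bits shards, assuming the usual RocksDB LRU-cache sharding scheme:

capacity = 536870912         # block_cache_options.capacity
num_shard_bits = 4           # block_cache_options.num_shard_bits
shards = 1 << num_shard_bits
print(shards, capacity / shards / (1 << 20))   # 16 shards of 32.0 MiB each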
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:05 compute-0 podman[89233]: 2025-11-29 07:13:05.05160102 +0000 UTC m=+0.163992220 container attach e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
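[Annotation] With level_compaction_dynamic_level_bytes disabled (0 above), the per-level capacity targets follow directly from max_bytes_for_level_base (1 GiB) and max_bytes_for_level_multiplier (8). L0 is governed by file-count triggers instead (compaction at 8 files, write slowdown at 20, write stop at 36). A quick check of the static targets:

GiB = 1 << 30
base = 1 * GiB             # Options.max_bytes_for_level_base = 1073741824
mult = 8.0                 # Options.max_bytes_for_level_multiplier
for level in range(1, 7):  # Options.num_levels = 7 -> levels L1..L6
    target = base * mult ** (level - 1)
    print(f"L{level}: {target / GiB:.0f} GiB")
# L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 GiB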
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
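[Annotation] The manifest recovery enumerates twelve column families. The names match BlueStore's sharded-RocksDB layout (the bluestore_rocksdb_cfs setting shards metadata into the m-*, p-*, O-*, L and P families; the exact meaning of each prefix is a BlueStore implementation detail, so treat any mapping as an assumption). A sketch that recovers the ID-to-name table from these lines, again against a hypothetical osd.log:

import re

cf_re = re.compile(r'Column family \[([^\]]+)\] \(ID (\d+)\)')
families = {}
with open('osd.log') as f:          # hypothetical file name
    for line in f:
        m = cf_re.search(line)
        if m:
            families[int(m.group(2))] = m.group(1)
print(families)
# {0: 'default', 1: 'm-0', ..., 10: 'L', 11: 'P'} for the twelve families above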
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fbad0b6-437e-4fca-b214-3adad3d96e3b
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400385027051, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400385036041, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400385, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fbad0b6-437e-4fca-b214-3adad3d96e3b", "db_session_id": "B22EKDZWRF4GKB1S5WUW", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400385039714, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400385, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fbad0b6-437e-4fca-b214-3adad3d96e3b", "db_session_id": "B22EKDZWRF4GKB1S5WUW", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400385043439, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400385, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fbad0b6-437e-4fca-b214-3adad3d96e3b", "db_session_id": "B22EKDZWRF4GKB1S5WUW", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400385044763, "job": 1, "event": "recovery_finished"}
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558198c94000
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: DB pointer 0x558199a39a00
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
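[Annotation] The _open_db line echoes the option string BlueStore hands to RocksDB (presumably consumed by RocksDB's option-string parsing, which accepts unit suffixes such as 2MB). Splitting it into a dict shows why a naive int() conversion would trip on compaction_readahead_size:

opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

opts = dict(kv.split('=', 1) for kv in opts_str.split(','))
print(opts['compaction_readahead_size'])   # '2MB' -- note the unit suffix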
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 29 07:13:05 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
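[Annotation] Two small sanity checks on the stats block above: the block-cache "occupancy" prints as 18446744073709551615, which is 2**64 - 1, i.e. an unsigned rendering of -1 (most likely a "not tracked" sentinel from this cache implementation rather than a real entry count), and the entry-stats percentages are consistent with the reported 460.80 MB capacity up to the rounding of "0.33 KB":

# occupancy is uint64(-1), i.e. most likely a sentinel, not a real entry count
print(18446744073709551615 == 2**64 - 1)          # True

MB, KB = 1 << 20, 1 << 10
capacity = 460.80 * MB
filter_block = 0.33 * KB                          # FilterBlock(3, 0.33 KB, ...)
print(f"{filter_block / capacity * 100:.3e} %")   # ~6.99e-05 %, close to the logged 6.95388e-05%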
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:13:05 compute-0 ceph-osd[88831]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 07:13:05 compute-0 ceph-osd[88831]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 07:13:05 compute-0 ceph-osd[88831]: _get_class not permitted to load lua
Nov 29 07:13:05 compute-0 ceph-osd[88831]: _get_class not permitted to load sdk
Nov 29 07:13:05 compute-0 ceph-osd[88831]: _get_class not permitted to load test_remote_reads
Nov 29 07:13:05 compute-0 ceph-osd[88831]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 07:13:05 compute-0 ceph-osd[88831]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 07:13:05 compute-0 ceph-osd[88831]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 07:13:05 compute-0 ceph-osd[88831]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 07:13:05 compute-0 ceph-osd[88831]: osd.0 0 load_pgs
Nov 29 07:13:05 compute-0 ceph-osd[88831]: osd.0 0 load_pgs opened 0 pgs
Nov 29 07:13:05 compute-0 ceph-osd[88831]: osd.0 0 log_to_monitors true
Nov 29 07:13:05 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0[88827]: 2025-11-29T07:13:05.076+0000 7f5118993740 -1 osd.0 0 log_to_monitors true
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 07:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:13:05
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] No pools available
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 29 07:13:05 compute-0 ceph-mon[75050]: Deploying daemon osd.1 on compute-0
Nov 29 07:13:05 compute-0 ceph-mon[75050]: from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 07:13:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 07:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:05 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:05 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test[89250]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 07:13:05 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test[89250]:                             [--no-systemd] [--no-tmpfs]
Nov 29 07:13:05 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test[89250]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 07:13:05 compute-0 systemd[1]: libpod-e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8.scope: Deactivated successfully.
Nov 29 07:13:05 compute-0 podman[89233]: 2025-11-29 07:13:05.682594645 +0000 UTC m=+0.794985825 container died e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dde8c755c8ac57088c1519c0adbba0599b651fa9357d8c8e40751975e566c671-merged.mount: Deactivated successfully.
Nov 29 07:13:05 compute-0 podman[89233]: 2025-11-29 07:13:05.772011585 +0000 UTC m=+0.884402795 container remove e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate-test, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:13:05 compute-0 systemd[1]: libpod-conmon-e2aa7678467f283d5c199f5f43cf0c382396cea9d3783ad563491ce5bee152b8.scope: Deactivated successfully.
Nov 29 07:13:06 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 07:13:06 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 07:13:06 compute-0 systemd[1]: Reloading.
Nov 29 07:13:06 compute-0 systemd-rc-local-generator[89527]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:06 compute-0 systemd-sysv-generator[89532]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:06 compute-0 systemd[1]: Reloading.
Nov 29 07:13:06 compute-0 systemd-rc-local-generator[89562]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:06 compute-0 systemd-sysv-generator[89565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 07:13:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:13:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:13:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 29 07:13:06 compute-0 ceph-osd[88831]: osd.0 0 done with init, starting boot process
Nov 29 07:13:06 compute-0 ceph-osd[88831]: osd.0 0 start_boot
Nov 29 07:13:06 compute-0 ceph-osd[88831]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 07:13:06 compute-0 ceph-osd[88831]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 07:13:06 compute-0 ceph-osd[88831]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 07:13:06 compute-0 ceph-osd[88831]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 07:13:06 compute-0 ceph-osd[88831]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 29 07:13:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 29 07:13:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:06 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:13:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:06 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:06 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2183834510; not ready for session (expect reconnect)
Nov 29 07:13:06 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:06 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:13:06 compute-0 ceph-mon[75050]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:06 compute-0 ceph-mon[75050]: from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 07:13:06 compute-0 ceph-mon[75050]: osdmap e7: 3 total, 0 up, 3 in
Nov 29 07:13:06 compute-0 ceph-mon[75050]: from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:13:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:06 compute-0 systemd[1]: Starting Ceph osd.1 for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:13:06 compute-0 podman[89625]: 2025-11-29 07:13:06.998223521 +0000 UTC m=+0.061203354 container create c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:07 compute-0 podman[89625]: 2025-11-29 07:13:06.968090426 +0000 UTC m=+0.031070269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aad94a7fea034ced971aa18a5229a9fde7369b4821a00b5e351b330d5b391bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aad94a7fea034ced971aa18a5229a9fde7369b4821a00b5e351b330d5b391bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aad94a7fea034ced971aa18a5229a9fde7369b4821a00b5e351b330d5b391bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aad94a7fea034ced971aa18a5229a9fde7369b4821a00b5e351b330d5b391bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aad94a7fea034ced971aa18a5229a9fde7369b4821a00b5e351b330d5b391bb/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:07 compute-0 podman[89625]: 2025-11-29 07:13:07.11545129 +0000 UTC m=+0.178431113 container init c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:07 compute-0 podman[89625]: 2025-11-29 07:13:07.12110553 +0000 UTC m=+0.184085393 container start c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:13:07 compute-0 podman[89625]: 2025-11-29 07:13:07.140516692 +0000 UTC m=+0.203496515 container attach c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:07 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2183834510; not ready for session (expect reconnect)
Nov 29 07:13:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:07 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:13:07 compute-0 ceph-mon[75050]: from='osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:13:07 compute-0 ceph-mon[75050]: osdmap e8: 3 total, 0 up, 3 in
Nov 29 07:13:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:07 compute-0 ceph-mon[75050]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:08 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate[89641]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:13:08 compute-0 bash[89625]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:13:08 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate[89641]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:13:08 compute-0 bash[89625]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:13:08 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate[89641]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:13:08 compute-0 bash[89625]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 07:13:08 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate[89641]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:13:08 compute-0 bash[89625]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 07:13:08 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate[89641]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:08 compute-0 bash[89625]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:08 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate[89641]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:13:08 compute-0 bash[89625]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 07:13:08 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate[89641]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 07:13:08 compute-0 bash[89625]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 07:13:08 compute-0 systemd[1]: libpod-c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667.scope: Deactivated successfully.
Nov 29 07:13:08 compute-0 podman[89625]: 2025-11-29 07:13:08.304063947 +0000 UTC m=+1.367043820 container died c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:13:08 compute-0 systemd[1]: libpod-c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667.scope: Consumed 1.198s CPU time.
Nov 29 07:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aad94a7fea034ced971aa18a5229a9fde7369b4821a00b5e351b330d5b391bb-merged.mount: Deactivated successfully.
Nov 29 07:13:08 compute-0 podman[89625]: 2025-11-29 07:13:08.484232299 +0000 UTC m=+1.547212152 container remove c5908899b315b2864cde6ce4009af30c2e333609ab3fd00a7c3f0ba4a6187667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1-activate, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:08 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2183834510; not ready for session (expect reconnect)
Nov 29 07:13:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:08 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:13:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:08 compute-0 ceph-mon[75050]: purged_snaps scrub starts
Nov 29 07:13:08 compute-0 ceph-mon[75050]: purged_snaps scrub ok
Nov 29 07:13:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:08 compute-0 podman[89820]: 2025-11-29 07:13:08.809660888 +0000 UTC m=+0.079294016 container create 6a5fc11573d1a39f1563ae47c276c7e603d4b25acd108fa32dcccaad74ad1d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:13:08 compute-0 podman[89820]: 2025-11-29 07:13:08.756162929 +0000 UTC m=+0.025796077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2489226d9368031a7b3bda647aa318b561217e216d66893e4c29221678893d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2489226d9368031a7b3bda647aa318b561217e216d66893e4c29221678893d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2489226d9368031a7b3bda647aa318b561217e216d66893e4c29221678893d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2489226d9368031a7b3bda647aa318b561217e216d66893e4c29221678893d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2489226d9368031a7b3bda647aa318b561217e216d66893e4c29221678893d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:08 compute-0 podman[89820]: 2025-11-29 07:13:08.919873465 +0000 UTC m=+0.189506653 container init 6a5fc11573d1a39f1563ae47c276c7e603d4b25acd108fa32dcccaad74ad1d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:13:08 compute-0 podman[89820]: 2025-11-29 07:13:08.936034747 +0000 UTC m=+0.205667915 container start 6a5fc11573d1a39f1563ae47c276c7e603d4b25acd108fa32dcccaad74ad1d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:08 compute-0 bash[89820]: 6a5fc11573d1a39f1563ae47c276c7e603d4b25acd108fa32dcccaad74ad1d11
Nov 29 07:13:08 compute-0 systemd[1]: Started Ceph osd.1 for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:13:09 compute-0 ceph-osd[89840]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:13:09 compute-0 ceph-osd[89840]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: pidfile_write: ignore empty --pid-file
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f949591800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f949591800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f949591800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f949591800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a3c9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a3c9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a3c9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a3c9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a3c9800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:13:09 compute-0 sudo[88921]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 07:13:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 07:13:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:09 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 29 07:13:09 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 29 07:13:09 compute-0 sudo[89853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:09 compute-0 sudo[89853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:09 compute-0 sudo[89853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:09 compute-0 sudo[89878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:09 compute-0 sudo[89878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:09 compute-0 sudo[89878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f949591800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:13:09 compute-0 sudo[89903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:09 compute-0 sudo[89903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:09 compute-0 sudo[89903]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:09 compute-0 sudo[89930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:13:09 compute-0 sudo[89930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:09 compute-0 ceph-osd[89840]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 07:13:09 compute-0 ceph-osd[89840]: load: jerasure load: lrc 
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:13:09 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2183834510; not ready for session (expect reconnect)
Nov 29 07:13:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:09 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:13:09 compute-0 podman[90003]: 2025-11-29 07:13:09.827412802 +0000 UTC m=+0.091784500 container create 10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:09 compute-0 ceph-osd[89840]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 07:13:09 compute-0 ceph-osd[89840]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44ac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluefs mount
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluefs mount shared_bdev_used = 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Git sha 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: DB SUMMARY
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: DB Session ID:  9ZKT6N2GG7VFAQLY477O
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                                     Options.env: 0x55f94a41bc70
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                                Options.info_log: 0x55f9496188a0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.write_buffer_manager: 0x55f94a524460
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.row_cache: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                              Options.wal_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.wal_compression: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Compression algorithms supported:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kZSTD supported: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9496182c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9496182c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 podman[90003]: 2025-11-29 07:13:09.780870612 +0000 UTC m=+0.045242360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9496182c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9496182c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9496182c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9496182c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 systemd[1]: Started libpod-conmon-10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e.scope.
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9496182c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f949605090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f949605090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f949605090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
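RocksDB prints one options block per BlueStore column family; the dumps for [O-1] and [O-2] above are identical to the first, and the two lines above note that options printing is skipped for the remaining families. For orientation, the compaction sizing these options imply can be worked out directly. A back-of-the-envelope sketch, with every constant copied from the dumps above (plain static level-style sizing applies here, since level_compaction_dynamic_level_bytes is 0 and all the addtl multipliers are 1):

```python
# Back-of-the-envelope compaction sizing for the column-family options
# dumped above; every constant is copied from the log. This is a sketch,
# not BlueStore/RocksDB code.
GiB = 1024 ** 3
MiB = 1024 ** 2

write_buffer_size = 16 * MiB   # Options.write_buffer_size: 16777216
min_merge = 6                  # Options.min_write_buffer_number_to_merge
level_base = 1 * GiB           # Options.max_bytes_for_level_base
multiplier = 8.0               # Options.max_bytes_for_level_multiplier
num_levels = 7                 # Options.num_levels

# A flush merges up to 6 memtables, so L0 files land around 96 MiB
# of uncompressed data each.
print("approx L0 file size:", write_buffer_size * min_merge // MiB, "MiB")

# L0 housekeeping from the dump: compaction triggers at 8 files,
# writes slow down at 20 and stop at 36.
# Static target size for each lower level (L1..L6):
for level in range(1, num_levels):
    print(f"L{level} target: {level_base * multiplier ** (level - 1) / GiB:g} GiB")
```

With these values the targets come out to 1 GiB for L1 and grow by 8x per level up to 32768 GiB at L6, far beyond the 20 GiB device, so in practice only the first few levels will ever hold data.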
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3184c890-c616-4639-9d84-03c65dde9212
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400389898873, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400389899241, "job": 1, "event": "recovery_finished"}
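The EVENT_LOG_v1 lines above carry machine-readable JSON after a fixed marker, which makes recovery events easy to pull out of a saved journal excerpt. A minimal sketch (the input file name is a hypothetical placeholder; the marker string is what RocksDB actually prints):

```python
# Extract RocksDB EVENT_LOG_v1 records (e.g. recovery_started /
# recovery_finished above) from a saved journal excerpt.
import json

with open("ceph-osd-journal.txt") as f:        # hypothetical excerpt file
    for line in f:
        _, marker, payload = line.partition("EVENT_LOG_v1 ")
        if marker:                             # marker found on this line
            ev = json.loads(payload)
            print(ev["time_micros"], ev["job"], ev["event"])
```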
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
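The option string logged by _open_db is a flat key=value list, the same shape as the bluestore_rocksdb_options setting that feeds it. Splitting it into a dict makes cross-checking against the per-column-family dumps above straightforward; a small sketch with the string copied verbatim from the line above:

```python
# Parse the RocksDB option string logged by BlueStore _open_db above.
opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
assert opts["write_buffer_size"] == "16777216"  # matches Options.write_buffer_size
print(opts["compaction_readahead_size"])        # '2MB'; human-readable sizes stay strings
```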
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 07:13:09 compute-0 ceph-osd[89840]: freelist init
Nov 29 07:13:09 compute-0 ceph-osd[89840]: freelist _read_cfg
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
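The _init_alloc line packs its figures as hex byte counts against a 4 KiB block size; quick arithmetic confirms they are self-consistent and match the bdev open size reported later in the log:

```python
# Sanity-check the _init_alloc line above; all values copied from the log.
capacity = 0x4ffc00000
free = 0x4ffbfd000
block = 0x1000

print(capacity)                    # 21470642176, the bdev "open size" below
print(capacity / 1024**3)          # ~19.996 GiB, reported as "20 GiB"
print((capacity - free) // block)  # 3 blocks (12 KiB) already allocated
```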
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:13:09 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bluefs umount
Nov 29 07:13:09 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 07:13:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:09 compute-0 podman[90003]: 2025-11-29 07:13:09.947421989 +0000 UTC m=+0.211793677 container init 10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sinoussi, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:13:09 compute-0 podman[90003]: 2025-11-29 07:13:09.959937914 +0000 UTC m=+0.224309572 container start 10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:13:09 compute-0 quirky_sinoussi[90215]: 167 167
Nov 29 07:13:09 compute-0 systemd[1]: libpod-10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e.scope: Deactivated successfully.
Nov 29 07:13:09 compute-0 podman[90003]: 2025-11-29 07:13:09.985227836 +0000 UTC m=+0.249599494 container attach 10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sinoussi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:13:09 compute-0 podman[90003]: 2025-11-29 07:13:09.986146558 +0000 UTC m=+0.250518256 container died 10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bcb29b9a3b4bd39ccf70f51e9149174ff822bc83edf2e76f68d03b43a1ebefd-merged.mount: Deactivated successfully.
Nov 29 07:13:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 07:13:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:10 compute-0 ceph-mon[75050]: Deploying daemon osd.2 on compute-0
Nov 29 07:13:10 compute-0 ceph-mon[75050]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bdev(0x55f94a44b400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluefs mount
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluefs mount shared_bdev_used = 4718592
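BlueFS mounts on the same shared device with a 64 KiB allocation unit (the block size 0x10000 two lines up), and the shared_bdev_used figure is an exact multiple of it. Values from the log:

```python
# BlueFS allocation arithmetic for the mount lines above.
bluefs_block = 0x10000              # 64 KiB alloc unit from _init_alloc
shared_used = 4718592               # bluefs mount shared_bdev_used

print(bluefs_block // 1024)         # 64 (KiB)
print(shared_used // bluefs_block)  # 72 allocation units in use by BlueFS
print(shared_used / 2**20)          # 4.5 MiB
```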
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Git sha 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: DB SUMMARY
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: DB Session ID:  9ZKT6N2GG7VFAQLY477P
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                                     Options.env: 0x55f94a5cc460
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                                Options.info_log: 0x55f949618600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.write_buffer_manager: 0x55f94a524460
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.row_cache: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                              Options.wal_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.wal_compression: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Compression algorithms supported:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kZSTD supported: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: DMutex implementation: pthread_mutex_t
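This build's compression support, transcribed from the list above into a lookup table; a quick way to confirm that the LZ4 requested by the options string is actually compiled in (kZSTD, for instance, is not in this build):

```python
# Compression support flags as printed by this ceph-osd build above.
supported = {
    "kZSTD": False,
    "kXpressCompression": False,
    "kBZip2Compression": False,
    "kZSTDNotFinalCompression": False,
    "kLZ4Compression": True,
    "kZlibCompression": True,
    "kLZ4HCCompression": True,
    "kSnappyCompression": True,
}
assert supported["kLZ4Compression"]  # matches Options.compression: LZ4
```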
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 podman[90003]: 2025-11-29 07:13:10.139188624 +0000 UTC m=+0.403560282 container remove 10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 systemd[1]: libpod-conmon-10029440ad98f6188291a385481bd75dd8240ae03c75c82bdecff3663b95473e.scope: Deactivated successfully.
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
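[Note on the dumps above: the four column-family blocks so far record identical RocksDB tuning for this ceph-osd; only the first column family's merge_operator (.T:int64_array.b:bitwise_xor) differs. As a hedged sketch only, not part of this log, comparable settings could be assembled against the stock RocksDB C++ API as below. The helper name make_osd_like_options is hypothetical; Ceph itself derives these values from its bluestore_rocksdb_options string and supplies its own BinnedLRUCache (visible above as block_cache_name), for which plain NewLRUCache stands in here.

    #include <rocksdb/options.h>
    #include <rocksdb/table.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/cache.h>

    // Hypothetical helper: approximates the per-CF settings dumped above
    // using stock RocksDB calls. Values mirror the log verbatim.
    rocksdb::Options make_osd_like_options() {
      rocksdb::Options opt;
      opt.write_buffer_size = 16 * 1024 * 1024;        // 16777216
      opt.max_write_buffer_number = 64;
      opt.min_write_buffer_number_to_merge = 6;
      opt.compression = rocksdb::kLZ4Compression;
      opt.num_levels = 7;
      opt.level0_file_num_compaction_trigger = 8;
      opt.level0_slowdown_writes_trigger = 20;
      opt.level0_stop_writes_trigger = 36;
      opt.target_file_size_base = 64ULL << 20;         // 67108864
      opt.max_bytes_for_level_base = 1ULL << 30;       // 1073741824
      opt.max_bytes_for_level_multiplier = 8.0;
      opt.compaction_pri = rocksdb::kMinOverlappingRatio;
      opt.force_consistency_checks = true;
      opt.ttl = 2592000;                               // 30 days

      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      // bits_per_key is not recorded in the dump ("filter_policy: bloomfilter");
      // 10 is a common default and an assumption here.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // Stand-in for Ceph's BinnedLRUCache: same capacity and shard bits.
      t.block_cache = rocksdb::NewLRUCache(483183820, 4);
      opt.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opt;
    }

The sketch covers only the fields the dump shows as non-default; everything else in these blocks is RocksDB's stock value for this version. The p-0 dump that follows repeats the same configuration.]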
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
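[annotation] The level-sizing lines in the dump above (max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8, num_levels = 7, every addtl[] factor 1, level_compaction_dynamic_level_bytes = 0) pin down the per-level capacity targets this column family will compact toward. A minimal illustrative sketch of the implied sizing, not code from the OSD:

    def level_targets(base=1073741824, multiplier=8.0, num_levels=7):
        # level_compaction_dynamic_level_bytes is 0 in this log, so levels are
        # sized statically: L1 = base, L(n+1) = L(n) * multiplier (addtl[] = 1).
        cur, targets = float(base), []
        for lvl in range(1, num_levels):
            targets.append((lvl, int(cur)))
            cur *= multiplier
        return targets

    for lvl, cap in level_targets():
        # L1=1, L2=8, L3=64, L4=512, L5=4096, L6=32768 (GiB)
        print(f"L{lvl}: {cap / 2**30:.0f} GiB")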
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
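[annotation] Every column family here registers the same table-properties collector: CompactOnDeletionCollector with a sliding window of 32768 entries, a deletion trigger of 16384, and a deletion ratio of 0 (i.e. the ratio trigger is unused). The effect is that an SST file gets marked for compaction as soon as any window of 32768 consecutive entries contains 16384 tombstones. A rough model of that policy, as an assumption-level sketch rather than the RocksDB implementation:

    from collections import deque

    def needs_compaction(entries, window=32768, trigger=16384):
        # entries: iterable of bools, True for a deletion tombstone.
        win, deletes = deque(), 0
        for is_delete in entries:
            win.append(is_delete)
            deletes += is_delete
            if len(win) > window:
                deletes -= win.popleft()   # slide the window forward
            if deletes >= trigger:
                return True                # file would be marked for compaction
        return False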
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9496051f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
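[annotation] The memtable numbers repeat across all of these dumps: write_buffer_size = 16777216 (16 MiB), max_write_buffer_number = 64, min_write_buffer_number_to_merge = 6. So each column family flushes in roughly 96 MiB batches but may in the worst case hold about 1 GiB of memtables before write stalls, with level-0 pressure handled by the 8/20/36 triggers (compaction at 8 files, slowdown at 20, hard stop at 36). The arithmetic, as a quick sketch:

    write_buffer_size = 16777216                 # 16 MiB per memtable
    max_write_buffer_number = 64
    min_write_buffer_number_to_merge = 6

    # 96.0 MiB per flush batch
    print(write_buffer_size * min_write_buffer_number_to_merge / 2**20, "MiB per flush batch")
    # 1.0 GiB worst-case memtable footprint per column family
    print(write_buffer_size * max_write_buffer_number / 2**30, "GiB memtable ceiling per CF")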
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f949605090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
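[annotation] Options.compression is LZ4 with bottommost compression disabled, yet compression_opts.level reads 32767. To the best of my knowledge that value is RocksDB's kDefaultCompressionLevel sentinel (int16 max), meaning "use the codec's own default level", not a literal LZ4 level. A tiny sketch of the sentinel handling, using a hypothetical helper rather than a real RocksDB API:

    K_DEFAULT_COMPRESSION_LEVEL = 32767          # assumed sentinel: "codec default"

    def effective_level(logged_level, codec_default_level):
        # Hypothetical illustration: translate the logged sentinel into the
        # level the codec would actually run with.
        if logged_level == K_DEFAULT_COMPRESSION_LEVEL:
            return codec_default_level
        return logged_level

    print(effective_level(32767, codec_default_level=1))   # -> 1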
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f949605090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
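[annotation] The BinnedLRUCache pointers make the cache topology visible: every p-* family logs block_cache 0x55f9496051f0 (capacity 483183820 bytes, which is 0.45 x 2^30, i.e. ~45% of 1 GiB) while every O-* family logs 0x55f949605090 (capacity 536870912, exactly 512 MiB), so the column families within each group share one cache instance rather than holding private ones. With num_shard_bits = 4 each cache is split into 16 internal shards; the per-shard arithmetic, as an illustrative sketch:

    def shard_capacity(capacity_bytes, num_shard_bits=4):
        shards = 1 << num_shard_bits             # 2**4 = 16 internal shards
        return shards, capacity_bytes // shards

    print(shard_capacity(536870912))   # O-* cache: (16, 33554432)  -> 32 MiB per shard
    print(shard_capacity(483183820))   # p-* cache: (16, 30198988)  -> ~28.8 MiB per shard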
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f949618380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f949605090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3184c890-c616-4639-9d84-03c65dde9212
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400390161007, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400390188643, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400390, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3184c890-c616-4639-9d84-03c65dde9212", "db_session_id": "9ZKT6N2GG7VFAQLY477P", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:10 compute-0 sudo[90441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkdmdlmfbreykcgptvehwfonyxjdkibk ; /usr/bin/python3'
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400390207340, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400390, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3184c890-c616-4639-9d84-03c65dde9212", "db_session_id": "9ZKT6N2GG7VFAQLY477P", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:10 compute-0 sudo[90441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400390227794, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400390, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3184c890-c616-4639-9d84-03c65dde9212", "db_session_id": "9ZKT6N2GG7VFAQLY477P", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400390251036, "job": 1, "event": "recovery_finished"}
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 07:13:10 compute-0 python3[90444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f949772000
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: DB pointer 0x55f94a50da00
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 07:13:10 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:13:10 compute-0 ceph-osd[89840]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 07:13:10 compute-0 ceph-osd[89840]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 07:13:10 compute-0 ceph-osd[89840]: _get_class not permitted to load lua
Nov 29 07:13:10 compute-0 ceph-osd[89840]: _get_class not permitted to load sdk
Nov 29 07:13:10 compute-0 ceph-osd[89840]: _get_class not permitted to load test_remote_reads
Nov 29 07:13:10 compute-0 ceph-osd[89840]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 07:13:10 compute-0 ceph-osd[89840]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 07:13:10 compute-0 ceph-osd[89840]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 07:13:10 compute-0 ceph-osd[89840]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 07:13:10 compute-0 ceph-osd[89840]: osd.1 0 load_pgs
Nov 29 07:13:10 compute-0 ceph-osd[89840]: osd.1 0 load_pgs opened 0 pgs
Nov 29 07:13:10 compute-0 ceph-osd[89840]: osd.1 0 log_to_monitors true
Nov 29 07:13:10 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1[89836]: 2025-11-29T07:13:10.375+0000 7f693644f740 -1 osd.1 0 log_to_monitors true
Nov 29 07:13:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 07:13:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 07:13:10 compute-0 podman[90453]: 2025-11-29 07:13:10.428852388 +0000 UTC m=+0.052822008 container create 01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70 (image=quay.io/ceph/ceph:v18, name=brave_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:13:10 compute-0 systemd[1]: Started libpod-conmon-01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70.scope.
Nov 29 07:13:10 compute-0 podman[90504]: 2025-11-29 07:13:10.492781627 +0000 UTC m=+0.070013169 container create 40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:13:10 compute-0 podman[90453]: 2025-11-29 07:13:10.39800611 +0000 UTC m=+0.021975730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac55dcb1e74a27de33c28c8d263a60ab1407ac8e50b4b8afc2ed6273c374bac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac55dcb1e74a27de33c28c8d263a60ab1407ac8e50b4b8afc2ed6273c374bac/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac55dcb1e74a27de33c28c8d263a60ab1407ac8e50b4b8afc2ed6273c374bac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 systemd[1]: Started libpod-conmon-40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114.scope.
Nov 29 07:13:10 compute-0 podman[90453]: 2025-11-29 07:13:10.534227073 +0000 UTC m=+0.158196713 container init 01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70 (image=quay.io/ceph/ceph:v18, name=brave_bhaskara, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:10 compute-0 podman[90453]: 2025-11-29 07:13:10.543434205 +0000 UTC m=+0.167403825 container start 01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70 (image=quay.io/ceph/ceph:v18, name=brave_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26bbd6c7374b1656a6138cfdaf5a3fd038fafda806276323ff95ef90d6d79fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26bbd6c7374b1656a6138cfdaf5a3fd038fafda806276323ff95ef90d6d79fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26bbd6c7374b1656a6138cfdaf5a3fd038fafda806276323ff95ef90d6d79fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26bbd6c7374b1656a6138cfdaf5a3fd038fafda806276323ff95ef90d6d79fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26bbd6c7374b1656a6138cfdaf5a3fd038fafda806276323ff95ef90d6d79fd/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:10 compute-0 podman[90453]: 2025-11-29 07:13:10.550133894 +0000 UTC m=+0.174103524 container attach 01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70 (image=quay.io/ceph/ceph:v18, name=brave_bhaskara, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 07:13:10 compute-0 podman[90504]: 2025-11-29 07:13:10.468167305 +0000 UTC m=+0.045398867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:10 compute-0 podman[90504]: 2025-11-29 07:13:10.569283023 +0000 UTC m=+0.146514595 container init 40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:13:10 compute-0 podman[90504]: 2025-11-29 07:13:10.576452203 +0000 UTC m=+0.153683745 container start 40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:10 compute-0 podman[90504]: 2025-11-29 07:13:10.585494669 +0000 UTC m=+0.162726231 container attach 40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:13:10 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2183834510; not ready for session (expect reconnect)
Nov 29 07:13:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:10 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.767 iops: 6340.292 elapsed_sec: 0.473
Nov 29 07:13:10 compute-0 ceph-osd[88831]: log_channel(cluster) log [WRN] : OSD bench result of 6340.291818 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 0 waiting for initial osdmap
Nov 29 07:13:10 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0[88827]: 2025-11-29T07:13:10.954+0000 7f5114913640 -1 osd.0 0 waiting for initial osdmap
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 29 07:13:10 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-0[88827]: 2025-11-29T07:13:10.979+0000 7f510ff3b640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:13:10 compute-0 ceph-osd[88831]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:13:11 compute-0 ceph-mon[75050]: from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 07:13:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:11 compute-0 ceph-mon[75050]: OSD bench result of 6340.291818 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510] boot
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:11 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:11 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:11 compute-0 ceph-osd[88831]: osd.0 9 state: booting -> active
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599455429' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:13:11 compute-0 brave_bhaskara[90517]: 
Nov 29 07:13:11 compute-0 brave_bhaskara[90517]: {"fsid":"14ff1f30-5059-58f1-9a23-69871bb275a1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":114,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":1,"osd_up_since":1764400391,"num_in_osds":3,"osd_in_since":1764400372,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:13:07.423594+0000","services":{}},"progress_events":{}}
Nov 29 07:13:11 compute-0 systemd[1]: libpod-01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70.scope: Deactivated successfully.
Nov 29 07:13:11 compute-0 podman[90453]: 2025-11-29 07:13:11.193687586 +0000 UTC m=+0.817657206 container died 01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70 (image=quay.io/ceph/ceph:v18, name=brave_bhaskara, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:13:11 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test[90525]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 07:13:11 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test[90525]:                             [--no-systemd] [--no-tmpfs]
Nov 29 07:13:11 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test[90525]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 07:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ac55dcb1e74a27de33c28c8d263a60ab1407ac8e50b4b8afc2ed6273c374bac-merged.mount: Deactivated successfully.
Nov 29 07:13:11 compute-0 podman[90453]: 2025-11-29 07:13:11.23206813 +0000 UTC m=+0.856037750 container remove 01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70 (image=quay.io/ceph/ceph:v18, name=brave_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:13:11 compute-0 systemd[1]: libpod-40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114.scope: Deactivated successfully.
Nov 29 07:13:11 compute-0 podman[90504]: 2025-11-29 07:13:11.240256827 +0000 UTC m=+0.817488369 container died 40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:11 compute-0 systemd[1]: libpod-conmon-01c86ec0af5c8b91befe8c46a2a6909a95ee98f7a74305b90a14de346c296d70.scope: Deactivated successfully.
Nov 29 07:13:11 compute-0 sudo[90441]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a26bbd6c7374b1656a6138cfdaf5a3fd038fafda806276323ff95ef90d6d79fd-merged.mount: Deactivated successfully.
Nov 29 07:13:11 compute-0 podman[90504]: 2025-11-29 07:13:11.298142908 +0000 UTC m=+0.875374460 container remove 40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate-test, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:11 compute-0 systemd[1]: libpod-conmon-40d37e03e0fbb7113fa9aacd489b18a59220132e77902fdd431c91e8c8f4a114.scope: Deactivated successfully.
Nov 29 07:13:11 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 07:13:11 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 07:13:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:11 compute-0 ceph-mgr[75345]: [devicehealth INFO root] creating mgr pool
Nov 29 07:13:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 07:13:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:13:11 compute-0 sudo[90609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyzttokvhzhlfswqeyqqjovgkyprhzeh ; /usr/bin/python3'
Nov 29 07:13:11 compute-0 sudo[90609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:11 compute-0 systemd[1]: Reloading.
Nov 29 07:13:11 compute-0 systemd-rc-local-generator[90651]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:11 compute-0 systemd-sysv-generator[90654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:11 compute-0 python3[90613]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:11 compute-0 podman[90660]: 2025-11-29 07:13:11.749721026 +0000 UTC m=+0.039345890 container create 8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09 (image=quay.io/ceph/ceph:v18, name=awesome_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:13:11 compute-0 podman[90660]: 2025-11-29 07:13:11.731430605 +0000 UTC m=+0.021055489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:11 compute-0 systemd[1]: Started libpod-conmon-8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09.scope.
Nov 29 07:13:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250e3e7ff45a55c17b924650131842a294bf4b177cebe6d7b7932a2524cf706c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/250e3e7ff45a55c17b924650131842a294bf4b177cebe6d7b7932a2524cf706c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:11 compute-0 systemd[1]: Reloading.
Nov 29 07:13:11 compute-0 podman[90660]: 2025-11-29 07:13:11.938934953 +0000 UTC m=+0.228559897 container init 8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09 (image=quay.io/ceph/ceph:v18, name=awesome_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:13:11 compute-0 podman[90660]: 2025-11-29 07:13:11.950109917 +0000 UTC m=+0.239734781 container start 8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09 (image=quay.io/ceph/ceph:v18, name=awesome_mirzakhani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:11 compute-0 podman[90660]: 2025-11-29 07:13:11.953293024 +0000 UTC m=+0.242917928 container attach 8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09 (image=quay.io/ceph/ceph:v18, name=awesome_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:12 compute-0 systemd-rc-local-generator[90711]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:13:12 compute-0 systemd-sysv-generator[90715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 29 07:13:12 compute-0 ceph-osd[89840]: osd.1 0 done with init, starting boot process
Nov 29 07:13:12 compute-0 ceph-osd[89840]: osd.1 0 start_boot
Nov 29 07:13:12 compute-0 ceph-osd[89840]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 07:13:12 compute-0 ceph-osd[89840]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 07:13:12 compute-0 ceph-osd[89840]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 07:13:12 compute-0 ceph-osd[89840]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 07:13:12 compute-0 ceph-osd[89840]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mon[75050]: from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 07:13:12 compute-0 ceph-mon[75050]: osd.0 [v2:192.168.122.100:6802/2183834510,v1:192.168.122.100:6803/2183834510] boot
Nov 29 07:13:12 compute-0 ceph-mon[75050]: osdmap e9: 3 total, 1 up, 3 in
Nov 29 07:13:12 compute-0 ceph-mon[75050]: from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2599455429' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mon[75050]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 07:13:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:12 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1848893869; not ready for session (expect reconnect)
Nov 29 07:13:12 compute-0 ceph-osd[88831]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 07:13:12 compute-0 ceph-osd[88831]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 07:13:12 compute-0 ceph-osd[88831]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:12 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:12 compute-0 systemd[1]: Starting Ceph osd.2 for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:12 compute-0 podman[90786]: 2025-11-29 07:13:12.497882697 +0000 UTC m=+0.064302797 container create f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:13:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:13:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3873123969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:12 compute-0 podman[90786]: 2025-11-29 07:13:12.458339169 +0000 UTC m=+0.024759259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14fa410fe074a49ba0dbc6ff9a426ab343ac9a331f9e25d46c8563e2b10cb3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14fa410fe074a49ba0dbc6ff9a426ab343ac9a331f9e25d46c8563e2b10cb3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14fa410fe074a49ba0dbc6ff9a426ab343ac9a331f9e25d46c8563e2b10cb3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14fa410fe074a49ba0dbc6ff9a426ab343ac9a331f9e25d46c8563e2b10cb3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14fa410fe074a49ba0dbc6ff9a426ab343ac9a331f9e25d46c8563e2b10cb3a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:12 compute-0 podman[90786]: 2025-11-29 07:13:12.648573644 +0000 UTC m=+0.214993744 container init f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:13:12 compute-0 podman[90786]: 2025-11-29 07:13:12.653657488 +0000 UTC m=+0.220077558 container start f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:13:12 compute-0 podman[90786]: 2025-11-29 07:13:12.682575407 +0000 UTC m=+0.248995497 container attach f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:13:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 07:13:13 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1848893869; not ready for session (expect reconnect)
Nov 29 07:13:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:13 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 07:13:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3873123969' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 29 07:13:13 compute-0 awesome_mirzakhani[90677]: pool 'vms' created
Nov 29 07:13:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 29 07:13:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:13 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:13 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
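[editor's note] The repeated "mgr finish mon failed to return metadata for osd.N: (2) No such file or directory" lines are the mgr polling the mon for metadata that osd.1 and osd.2 have not yet published: an OSD only registers its metadata when it boots, and both are still being activated at this point in the log. Once an OSD is up, the same query succeeds from the standard CLI (osd id taken from this log):

    ceph osd metadata 2    # returns the OSD's host, devices, version, etc. as JSON once osd.2 has booted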
Nov 29 07:13:13 compute-0 systemd[1]: libpod-8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09.scope: Deactivated successfully.
Nov 29 07:13:13 compute-0 conmon[90677]: conmon 8fb39ab6b2a3be48b48b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09.scope/container/memory.events
Nov 29 07:13:13 compute-0 podman[90660]: 2025-11-29 07:13:13.215978366 +0000 UTC m=+1.505603240 container died 8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09 (image=quay.io/ceph/ceph:v18, name=awesome_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:13:13 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 11 pg[2.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [0] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:13 compute-0 ceph-mon[75050]: from='osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:13:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 07:13:13 compute-0 ceph-mon[75050]: osdmap e10: 3 total, 1 up, 3 in
Nov 29 07:13:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 07:13:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3873123969' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-250e3e7ff45a55c17b924650131842a294bf4b177cebe6d7b7932a2524cf706c-merged.mount: Deactivated successfully.
Nov 29 07:13:13 compute-0 podman[90660]: 2025-11-29 07:13:13.379145317 +0000 UTC m=+1.668770211 container remove 8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09 (image=quay.io/ceph/ceph:v18, name=awesome_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:13 compute-0 systemd[1]: libpod-conmon-8fb39ab6b2a3be48b48b100f9737468e11a40fb2363958ad1249fe190f0bbc09.scope: Deactivated successfully.
Nov 29 07:13:13 compute-0 sudo[90609]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v36: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:13:13 compute-0 sudo[90860]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awgrufpyupcdgznylmucyfntxlniczbj ; /usr/bin/python3'
Nov 29 07:13:13 compute-0 sudo[90860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:13 compute-0 python3[90865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
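[editor's note] The ansible _raw_params above is a single flattened shell command; re-wrapped for readability (every token below is copied from the log line itself, only line breaks added):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create volumes replicated_rule --autoscale-mode on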
Nov 29 07:13:13 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate[90804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:13:13 compute-0 bash[90786]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:13:13 compute-0 podman[90875]: 2025-11-29 07:13:13.756373027 +0000 UTC m=+0.059773829 container create 0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5 (image=quay.io/ceph/ceph:v18, name=nifty_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:13 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate[90804]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:13:13 compute-0 bash[90786]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:13:13 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate[90804]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:13:13 compute-0 bash[90786]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 07:13:13 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate[90804]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:13:13 compute-0 bash[90786]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 07:13:13 compute-0 podman[90875]: 2025-11-29 07:13:13.726836499 +0000 UTC m=+0.030237351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:13 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate[90804]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:13 compute-0 bash[90786]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:13 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate[90804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:13:13 compute-0 bash[90786]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 07:13:13 compute-0 systemd[1]: Started libpod-conmon-0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5.scope.
Nov 29 07:13:13 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate[90804]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 07:13:13 compute-0 bash[90786]: --> ceph-volume raw activate successful for osd ID: 2
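[editor's note] The osd-2 activation that just succeeded is the sequence of interleaved "Running command:" lines above; collected in order, the ceph-volume raw activate steps were (commands verbatim from this log):

    chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
    ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
    chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
    chown -R ceph:ceph /dev/dm-2
    ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-2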
Nov 29 07:13:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:13 compute-0 systemd[1]: libpod-f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec.scope: Deactivated successfully.
Nov 29 07:13:13 compute-0 systemd[1]: libpod-f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec.scope: Consumed 1.210s CPU time.
Nov 29 07:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5cd1745bd9314f55c476ba824dbb560d1b19b5e5acc86bf90fe79dd7c6ff56/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5cd1745bd9314f55c476ba824dbb560d1b19b5e5acc86bf90fe79dd7c6ff56/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
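[editor's note] The "supports timestamps until 2038" kernel lines are informational: the XFS filesystem backing these container bind mounts was created without the bigtime feature, so its inode timestamps cap at 0x7fffffff seconds (2038-01-19). Nothing here needs fixing; a filesystem's state can be checked with (assuming a reasonably recent xfsprogs, which reports the flag):

    xfs_info /var | grep -o 'bigtime=[01]'    # bigtime=1 means 64-bit timestamps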
Nov 29 07:13:13 compute-0 podman[90875]: 2025-11-29 07:13:13.932754794 +0000 UTC m=+0.236155646 container init 0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5 (image=quay.io/ceph/ceph:v18, name=nifty_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:13 compute-0 podman[90875]: 2025-11-29 07:13:13.939453963 +0000 UTC m=+0.242854765 container start 0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5 (image=quay.io/ceph/ceph:v18, name=nifty_mirzakhani, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:13:13 compute-0 podman[90875]: 2025-11-29 07:13:13.963533069 +0000 UTC m=+0.266933911 container attach 0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5 (image=quay.io/ceph/ceph:v18, name=nifty_mirzakhani, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:13 compute-0 podman[90986]: 2025-11-29 07:13:13.981511656 +0000 UTC m=+0.109298095 container died f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a14fa410fe074a49ba0dbc6ff9a426ab343ac9a331f9e25d46c8563e2b10cb3a-merged.mount: Deactivated successfully.
Nov 29 07:13:14 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1848893869; not ready for session (expect reconnect)
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:14 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:14 compute-0 podman[90986]: 2025-11-29 07:13:14.252176987 +0000 UTC m=+0.379963386 container remove f8c70c930470d94f032cf94fdb5d3bc768549dac06b100fcba347d563593c0ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
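[editor's note] POOL_APP_NOT_ENABLED is raised because the freshly created 'vms' pool carries no application tag yet (the .mgr pool was tagged a few lines up). The log does not show the follow-up command, but for an OpenStack Nova/Cinder pool the conventional fix would be (the 'rbd' tag is an assumption based on the pool's role, not confirmed by this log):

    ceph osd pool application enable vms rbd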
Nov 29 07:13:14 compute-0 ceph-mon[75050]: purged_snaps scrub starts
Nov 29 07:13:14 compute-0 ceph-mon[75050]: purged_snaps scrub ok
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3873123969' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:14 compute-0 ceph-mon[75050]: osdmap e11: 3 total, 1 up, 3 in
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mon[75050]: pgmap v36: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mon[75050]: osdmap e12: 3 total, 1 up, 3 in
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [0] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:14 compute-0 podman[91060]: 2025-11-29 07:13:14.475240572 +0000 UTC m=+0.055648390 container create 2e6c1ee4769aea5e614a2b0f0b65dd1997bbdea3f16ea1e0ab78d05fbef53f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:13:14 compute-0 podman[91060]: 2025-11-29 07:13:14.439456836 +0000 UTC m=+0.019864684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1712709528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c805e15d91b111068e49a7bce3fc58c3aae073126c7e54dd0878b194a7bd75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c805e15d91b111068e49a7bce3fc58c3aae073126c7e54dd0878b194a7bd75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c805e15d91b111068e49a7bce3fc58c3aae073126c7e54dd0878b194a7bd75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c805e15d91b111068e49a7bce3fc58c3aae073126c7e54dd0878b194a7bd75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c805e15d91b111068e49a7bce3fc58c3aae073126c7e54dd0878b194a7bd75/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:14 compute-0 podman[91060]: 2025-11-29 07:13:14.579252143 +0000 UTC m=+0.159659981 container init 2e6c1ee4769aea5e614a2b0f0b65dd1997bbdea3f16ea1e0ab78d05fbef53f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:14 compute-0 podman[91060]: 2025-11-29 07:13:14.585861156 +0000 UTC m=+0.166268974 container start 2e6c1ee4769aea5e614a2b0f0b65dd1997bbdea3f16ea1e0ab78d05fbef53f23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:13:14 compute-0 bash[91060]: 2e6c1ee4769aea5e614a2b0f0b65dd1997bbdea3f16ea1e0ab78d05fbef53f23
Nov 29 07:13:14 compute-0 systemd[1]: Started Ceph osd.2 for 14ff1f30-5059-58f1-9a23-69871bb275a1.
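[editor's note] "Started Ceph osd.2 for 14ff1f30-..." comes from the cephadm-generated systemd template unit, which is named after the cluster fsid; assuming the standard cephadm naming scheme, the daemon can be inspected with:

    systemctl status ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@osd.2.service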
Nov 29 07:13:14 compute-0 ceph-osd[91083]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:13:14 compute-0 ceph-osd[91083]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 07:13:14 compute-0 ceph-osd[91083]: pidfile_write: ignore empty --pid-file
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3dbd5800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3dbd5800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3dbd5800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3dbd5800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3ea0d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3ea0d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3ea0d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3ea0d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3ea0d800 /var/lib/ceph/osd/ceph-2/block) close
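[editor's note] Two recurring bdev messages here are benign: F_SET_FILE_RW_HINT returning EINVAL just means the device-mapper target does not accept write-lifetime hints, and the st_blksize 512 line records that the LV advertises 512-byte logical sectors while BlueStore, as the message itself says, keeps its own 4 KiB block size anyway. The advertised sector sizes can be confirmed with util-linux:

    blockdev --getss --getpbsz /dev/mapper/ceph_vg2-ceph_lv2    # logical and physical sector size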
Nov 29 07:13:14 compute-0 sudo[89930]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:14 compute-0 sudo[91096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:14 compute-0 sudo[91096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-0 sudo[91096]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-0 sudo[91121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:14 compute-0 sudo[91121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-0 sudo[91121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-0 sudo[91146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:14 compute-0 ceph-osd[91083]: bdev(0x560f3dbd5800 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:13:14 compute-0 sudo[91146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:14 compute-0 sudo[91146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:14 compute-0 sudo[91171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:13:14 compute-0 sudo[91171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
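[editor's note] This cephadm call ("ceph-volume ... raw list --format json") is the orchestrator's periodic device refresh; it emits a JSON map describing each raw-mode OSD on the host (per-OSD device path, osd id/uuid and cluster fsid, per my reading of reef's ceph-volume; treat the exact field names as assumptions). Run by hand it would look roughly like:

    sudo cephadm ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json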
Nov 29 07:13:15 compute-0 ceph-osd[91083]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 29 07:13:15 compute-0 ceph-osd[91083]: load: jerasure load: lrc 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:13:15 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1848893869; not ready for session (expect reconnect)
Nov 29 07:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:15 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 07:13:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1712709528' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 29 07:13:15 compute-0 nifty_mirzakhani[90983]: pool 'volumes' created
Nov 29 07:13:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 29 07:13:15 compute-0 systemd[1]: libpod-0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5.scope: Deactivated successfully.
Nov 29 07:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:15 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:15 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:15 compute-0 ceph-mon[75050]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:13:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1712709528' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:15 compute-0 podman[91242]: 2025-11-29 07:13:15.361551673 +0000 UTC m=+0.082497493 container create b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:15 compute-0 podman[91242]: 2025-11-29 07:13:15.301377117 +0000 UTC m=+0.022322937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:15 compute-0 systemd[1]: Started libpod-conmon-b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994.scope.
Nov 29 07:13:15 compute-0 podman[91253]: 2025-11-29 07:13:15.4156581 +0000 UTC m=+0.088818623 container died 0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5 (image=quay.io/ceph/ceph:v18, name=nifty_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:13:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v39: 3 pgs: 3 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:13:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:15 compute-0 podman[91242]: 2025-11-29 07:13:15.506615551 +0000 UTC m=+0.227561371 container init b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:13:15 compute-0 podman[91242]: 2025-11-29 07:13:15.514116286 +0000 UTC m=+0.235062086 container start b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:13:15 compute-0 youthful_turing[91271]: 167 167
Nov 29 07:13:15 compute-0 systemd[1]: libpod-b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994.scope: Deactivated successfully.
Nov 29 07:13:15 compute-0 podman[91242]: 2025-11-29 07:13:15.533672745 +0000 UTC m=+0.254618535 container attach b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:13:15 compute-0 podman[91242]: 2025-11-29 07:13:15.534489933 +0000 UTC m=+0.255435733 container died b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfcb85a849e424fd87cb5b00b67ce7005033b541375768247599fce1cd4d4e9-merged.mount: Deactivated successfully.
Nov 29 07:13:15 compute-0 podman[91242]: 2025-11-29 07:13:15.635687714 +0000 UTC m=+0.356633514 container remove b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:13:15 compute-0 systemd[1]: libpod-conmon-b207775b6bf31ccef746bcc95dfce6f411e011e250866374677c55c3d2b9d994.scope: Deactivated successfully.
Nov 29 07:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc5cd1745bd9314f55c476ba824dbb560d1b19b5e5acc86bf90fe79dd7c6ff56-merged.mount: Deactivated successfully.
Nov 29 07:13:15 compute-0 ceph-osd[91083]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 07:13:15 compute-0 ceph-osd[91083]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
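[editor's note] The mClock figures are self-consistent with what appear to be the reef HDD defaults (the named option values are from memory, so treat them as assumptions): osd_bandwidth_capacity_per_shard = 157286400 bytes/s is 150 MiB/s of sequential bandwidth (osd_mclock_max_sequential_bandwidth_hdd), and dividing by an IOPS capacity of 315 (osd_mclock_max_capacity_iops_hdd) reproduces the per-IO cost the scheduler logged:

    157286400 bytes/s / 315 IOPS ~= 499321.9 bytes/io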
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea92c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs mount
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs mount shared_bdev_used = 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Git sha 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: DB SUMMARY
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: DB Session ID:  UEQKGASXMI7OCNEC065C
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
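[editor's note] These RocksDB files (MANIFEST-000032, 000030.sst, WAL 000031.log) live inside BlueFS on the OSD's block device, not in the host filesystem. With the OSD stopped, they can be examined offline with the stock tools, e.g. (paths taken from this log):

    ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-2 --out-dir /tmp/bluefs-2
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-2 list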
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                     Options.env: 0x560f3ea5fc70
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                Options.info_log: 0x560f3dc5c8a0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.write_buffer_manager: 0x560f3eb6c460
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.row_cache: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                              Options.wal_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.wal_compression: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Compression algorithms supported:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kZSTD supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc49090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc49090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc49090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 88c34d5c-8e75-4b0a-9136-a4d94053a2cf
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400395727833, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400395728178, "job": 1, "event": "recovery_finished"}
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: freelist init
Nov 29 07:13:15 compute-0 ceph-osd[91083]: freelist _read_cfg
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs umount
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 07:13:15 compute-0 podman[91253]: 2025-11-29 07:13:15.75217716 +0000 UTC m=+0.425337633 container remove 0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5 (image=quay.io/ceph/ceph:v18, name=nifty_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:15 compute-0 systemd[1]: libpod-conmon-0ec484f056ad757ed51ecb059d9a848094f8e8cfa8fdbefd47a6de0e6ec84ff5.scope: Deactivated successfully.
Nov 29 07:13:15 compute-0 sudo[90860]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:15 compute-0 podman[91498]: 2025-11-29 07:13:15.829888581 +0000 UTC m=+0.075840927 container create 35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_clarke, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:13:15 compute-0 podman[91498]: 2025-11-29 07:13:15.788122811 +0000 UTC m=+0.034075207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:15 compute-0 systemd[1]: Started libpod-conmon-35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6.scope.
Nov 29 07:13:15 compute-0 sudo[91535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxtfwnbseettfpssopnxhgdruoppqrk ; /usr/bin/python3'
Nov 29 07:13:15 compute-0 sudo[91535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5d9543c6913f7e2b20aa6ce36c58a73d142559b6359e50388563cbc846331d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5d9543c6913f7e2b20aa6ce36c58a73d142559b6359e50388563cbc846331d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5d9543c6913f7e2b20aa6ce36c58a73d142559b6359e50388563cbc846331d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5d9543c6913f7e2b20aa6ce36c58a73d142559b6359e50388563cbc846331d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bdev(0x560f3ea93400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs mount
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluefs mount shared_bdev_used = 4718592
Nov 29 07:13:15 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: RocksDB version: 7.9.2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Git sha 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: DB SUMMARY
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: DB Session ID:  UEQKGASXMI7OCNEC065D
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: CURRENT file:  CURRENT
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.error_if_exists: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.create_if_missing: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                     Options.env: 0x560f3ec1ab60
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                Options.info_log: 0x560f3dc5c600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                              Options.statistics: (nil)
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.use_fsync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                              Options.db_log_dir: 
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.write_buffer_manager: 0x560f3eb6c460
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.unordered_write: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.row_cache: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                              Options.wal_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.two_write_queues: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.wal_compression: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.atomic_flush: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_background_jobs: 4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_background_compactions: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_subcompactions: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.max_open_files: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Compression algorithms supported:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kZSTD supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kXpressCompression supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kBZip2Compression supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kLZ4Compression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kZlibCompression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         kSnappyCompression supported: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
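All sizes in these dumps are raw byte counts. A quick converter (a standalone sketch, not part of any Ceph or RocksDB tooling) makes the values above easier to sanity-check: write_buffer_size 16777216 is 16 MiB, max_bytes_for_level_base 1073741824 is 1 GiB, and the BinnedLRUCache capacity 483183820 works out to roughly 460.8 MiB.

def human_bytes(n: float) -> str:
    """Render a raw byte count from the options dump as B/KiB/MiB/GiB/TiB."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024 or unit == "TiB":
            return f"{n:g} {unit}"
        n /= 1024

# Values taken verbatim from the ceph-osd rocksdb dump above.
for raw in (16777216, 67108864, 1073741824, 483183820, 274877906944):
    print(raw, "=", human_bytes(raw))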
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
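The [m-*] and [p-*] names here are BlueStore's sharded RocksDB column families (configured via the bluestore_rocksdb_cfs sharding option), and each family echoes its full option set at startup, which is why the block above repeats almost verbatim per family. A minimal parsing sketch follows, assuming the journal capture is fed on stdin; the regexes and the 'default' family label are illustrative heuristics, not any official tool.

import re
import sys
from collections import defaultdict

# "Options.foo.bar: value" lines; bracketed names such as
# max_bytes_for_level_multiplier_addtl[0] are matched too.  The
# unprefixed table_factory continuation lines are skipped on purpose,
# since their keys do not carry the "Options." prefix.
OPT_RE = re.compile(r'Options\.([\w.\[\]]+):\s*(.+?)\s*$')
CF_RE = re.compile(r'Options for column family \[([^\]]+)\]')

def parse(lines):
    """Map each column family to an {option: value} dict."""
    cfs = defaultdict(dict)
    current = 'default'
    for line in lines:
        header = CF_RE.search(line)
        if header:
            current = header.group(1)
            continue
        opt = OPT_RE.search(line)
        if opt:
            cfs[current][opt.group(1)] = opt.group(2)
    return cfs

if __name__ == '__main__':
    for name, opts in parse(sys.stdin).items():
        print(name, opts.get('write_buffer_size'), opts.get('compression'))

Fed with something like journalctl -u ceph-osd@1 | python3 rocksdb_opts.py (unit and file names hypothetical), it prints one line per column family.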
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:15 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
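Four full dumps in (m-1, m-2, p-0, p-1), every option so far is identical across families; only the family name in the header changes. A follow-on sketch to the parser above, inlined so it runs standalone on a journal capture from stdin, flags any option whose value actually diverges between families:

import re
import sys
from collections import defaultdict

OPT_RE = re.compile(r'Options\.([\w.\[\]]+):\s*(.+?)\s*$')
CF_RE = re.compile(r'Options for column family \[([^\]]+)\]')

cfs = defaultdict(dict)
family = 'default'
for line in sys.stdin:
    header = CF_RE.search(line)
    if header:
        family = header.group(1)
        continue
    opt = OPT_RE.search(line)
    if opt:
        cfs[family][opt.group(1)] = opt.group(2)

# Flag any option whose value is not uniform across the families that
# reported it; on the dump above this should print nothing.
for key in sorted(set().union(*cfs.values())):
    seen = {name: opts.get(key) for name, opts in cfs.items()}
    if len(set(seen.values())) > 1:
        print(key, seen)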
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5ca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc491f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc49090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc49090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:           Options.merge_operator: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560f3dc5c380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560f3dc49090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.compression: LZ4
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.num_levels: 7
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.bloom_locality: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                               Options.ttl: 2592000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                       Options.enable_blob_files: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                           Options.min_blob_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 88c34d5c-8e75-4b0a-9136-a4d94053a2cf
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400396004943, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 07:13:16 compute-0 python3[91541]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:16 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1848893869; not ready for session (expect reconnect)
Nov 29 07:13:16 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400396396550, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400396, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "88c34d5c-8e75-4b0a-9136-a4d94053a2cf", "db_session_id": "UEQKGASXMI7OCNEC065D", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:16 compute-0 podman[91498]: 2025-11-29 07:13:16.397687842 +0000 UTC m=+0.643640298 container init 35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_clarke, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:13:16 compute-0 podman[91498]: 2025-11-29 07:13:16.409092916 +0000 UTC m=+0.655045302 container start 35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400396411571, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400396, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "88c34d5c-8e75-4b0a-9136-a4d94053a2cf", "db_session_id": "UEQKGASXMI7OCNEC065D", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1712709528' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:16 compute-0 ceph-mon[75050]: osdmap e13: 3 total, 1 up, 3 in
Nov 29 07:13:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:16 compute-0 ceph-mon[75050]: pgmap v39: 3 pgs: 3 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:13:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:16 compute-0 podman[91498]: 2025-11-29 07:13:16.417643869 +0000 UTC m=+0.663596315 container attach 35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_clarke, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400396419759, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400396, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "88c34d5c-8e75-4b0a-9136-a4d94053a2cf", "db_session_id": "UEQKGASXMI7OCNEC065D", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400396423912, "job": 1, "event": "recovery_finished"}
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560f3ddb6000
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: DB pointer 0x560f3eb55a00
Nov 29 07:13:16 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
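[editor's note] The _open_db line above records the RocksDB options BlueStore applied as one comma-separated key=value string. A minimal sketch that splits it into a dict for inspection, assuming no embedded commas (true for the string logged here):

```python
# Option string copied verbatim from the _open_db line above.
opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

# Each entry has exactly one '=' separating key from value.
opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["write_buffer_size"])   # 16777216 bytes, i.e. 16 MiB memtables
```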
Nov 29 07:13:16 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 29 07:13:16 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.5 total, 0.5 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
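[editor's note] The stats dump above repeats one "Compaction Stats" section per RocksDB column family (default, m-*, p-*, O-*, L, P — the shards BlueStore splits its keyspace into). A small sketch for listing the column families present in a captured dump; the file name is hypothetical and stands in for wherever the dump text was saved.

```python
import re

# Hypothetical capture of the DUMPING STATS output shown above.
dump = open("osd2_stats_dump.txt").read()

# Each section is headed "** Compaction Stats [<cf>] **".
cfs = sorted(set(re.findall(r"\*\* Compaction Stats \[([^\]]+)\] \*\*", dump)))
print(cfs)   # e.g. ['L', 'O-0', 'O-1', 'O-2', 'P', 'default', 'm-0', 'm-1', ...]
```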
Nov 29 07:13:16 compute-0 ceph-osd[91083]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 07:13:16 compute-0 ceph-osd[91083]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 07:13:16 compute-0 ceph-osd[91083]: _get_class not permitted to load lua
Nov 29 07:13:16 compute-0 ceph-osd[91083]: _get_class not permitted to load sdk
Nov 29 07:13:16 compute-0 ceph-osd[91083]: _get_class not permitted to load test_remote_reads
Nov 29 07:13:16 compute-0 ceph-osd[91083]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 07:13:16 compute-0 ceph-osd[91083]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 07:13:16 compute-0 ceph-osd[91083]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 07:13:16 compute-0 ceph-osd[91083]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 07:13:16 compute-0 ceph-osd[91083]: osd.2 0 load_pgs
Nov 29 07:13:16 compute-0 ceph-osd[91083]: osd.2 0 load_pgs opened 0 pgs
Nov 29 07:13:16 compute-0 ceph-osd[91083]: osd.2 0 log_to_monitors true
Nov 29 07:13:16 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:13:16.479+0000 7fe15ddb0740 -1 osd.2 0 log_to_monitors true
Nov 29 07:13:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 07:13:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 07:13:16 compute-0 podman[91725]: 2025-11-29 07:13:16.518868152 +0000 UTC m=+0.380078132 container create 13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2 (image=quay.io/ceph/ceph:v18, name=gifted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:16 compute-0 systemd[1]: Started libpod-conmon-13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2.scope.
Nov 29 07:13:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88697580ec39a98a90367071ea963558df8bbffaaf3fcee01155e6b2cc615834/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88697580ec39a98a90367071ea963558df8bbffaaf3fcee01155e6b2cc615834/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:16 compute-0 podman[91725]: 2025-11-29 07:13:16.495302389 +0000 UTC m=+0.356512389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:16 compute-0 podman[91725]: 2025-11-29 07:13:16.610513315 +0000 UTC m=+0.471723325 container init 13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2 (image=quay.io/ceph/ceph:v18, name=gifted_hodgkin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:13:16 compute-0 podman[91725]: 2025-11-29 07:13:16.616120102 +0000 UTC m=+0.477330082 container start 13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2 (image=quay.io/ceph/ceph:v18, name=gifted_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:16 compute-0 podman[91725]: 2025-11-29 07:13:16.626160374 +0000 UTC m=+0.487370374 container attach 13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2 (image=quay.io/ceph/ceph:v18, name=gifted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 11.856 iops: 3035.181 elapsed_sec: 0.988
Nov 29 07:13:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [WRN] : OSD bench result of 3035.181041 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
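[editor's note] The warning above means the built-in bench measured ~3035 IOPS, outside the 50-500 IOPS plausibility window for an HDD-class device, so the mClock scheduler kept the 315 IOPS default. Following the log's own recommendation, a hedged sketch of the override via the standard `ceph config set` CLI is below; the value 320 is a placeholder for a measurement you would obtain yourself (e.g. with fio), and the command assumes an admin keyring is available on the host.

```python
import subprocess

# Placeholder: substitute the IOPS figure measured with an external tool (e.g. fio).
measured_iops = 320

# Pin the mClock IOPS capacity for osd.1, as the warning suggests.
# osd_mclock_max_capacity_iops_hdd is the HDD variant named in the log;
# use the _ssd variant for flash devices.
subprocess.run(
    ["ceph", "config", "set", "osd.1",
     "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
    check=True,
)
```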
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 0 waiting for initial osdmap
Nov 29 07:13:16 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1[89836]: 2025-11-29T07:13:16.879+0000 7f69323cf640 -1 osd.1 0 waiting for initial osdmap
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 13 set_numa_affinity not setting numa affinity
Nov 29 07:13:16 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-1[89836]: 2025-11-29T07:13:16.902+0000 7f692d9f7640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:13:16 compute-0 ceph-osd[89840]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2853161994' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1848893869; not ready for session (expect reconnect)
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:17 compute-0 practical_clarke[91539]: {
Nov 29 07:13:17 compute-0 practical_clarke[91539]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "osd_id": 2,
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "type": "bluestore"
Nov 29 07:13:17 compute-0 practical_clarke[91539]:     },
Nov 29 07:13:17 compute-0 practical_clarke[91539]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "osd_id": 1,
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "type": "bluestore"
Nov 29 07:13:17 compute-0 practical_clarke[91539]:     },
Nov 29 07:13:17 compute-0 practical_clarke[91539]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "osd_id": 0,
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:13:17 compute-0 practical_clarke[91539]:         "type": "bluestore"
Nov 29 07:13:17 compute-0 practical_clarke[91539]:     }
Nov 29 07:13:17 compute-0 practical_clarke[91539]: }
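
    The JSON block above is the per-OSD map that "ceph-volume raw list" emits: keyed
    by osd_uuid, with the cluster fsid, backing device, osd_id, and store type. A
    hedged way to reproduce it on this host, mirroring the cephadm-wrapped
    ceph-volume invocations seen elsewhere in this log (read-only):

        sudo cephadm ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list
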
Nov 29 07:13:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v40: 3 pgs: 1 active+clean, 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:13:17 compute-0 systemd[1]: libpod-35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6.scope: Deactivated successfully.
Nov 29 07:13:17 compute-0 systemd[1]: libpod-35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6.scope: Consumed 1.024s CPU time.
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 07:13:17 compute-0 podman[91831]: 2025-11-29 07:13:17.481117665 +0000 UTC m=+0.029745849 container died 35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:13:17 compute-0 ceph-mon[75050]: from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mon[75050]: OSD bench result of 3035.181041 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:13:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2853161994' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2853161994' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 29 07:13:17 compute-0 gifted_hodgkin[91776]: pool 'backups' created
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869] boot
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:17 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:17 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 07:13:17 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 07:13:17 compute-0 ceph-osd[89840]: osd.1 14 state: booting -> active
Nov 29 07:13:17 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[10,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:17 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad5d9543c6913f7e2b20aa6ce36c58a73d142559b6359e50388563cbc846331d-merged.mount: Deactivated successfully.
Nov 29 07:13:17 compute-0 systemd[1]: libpod-13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2.scope: Deactivated successfully.
Nov 29 07:13:17 compute-0 podman[91725]: 2025-11-29 07:13:17.527297647 +0000 UTC m=+1.388507627 container died 13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2 (image=quay.io/ceph/ceph:v18, name=gifted_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-88697580ec39a98a90367071ea963558df8bbffaaf3fcee01155e6b2cc615834-merged.mount: Deactivated successfully.
Nov 29 07:13:18 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 14 pg[4.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:18 compute-0 podman[91725]: 2025-11-29 07:13:18.450422711 +0000 UTC m=+2.311632691 container remove 13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2 (image=quay.io/ceph/ceph:v18, name=gifted_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:13:18 compute-0 systemd[1]: libpod-conmon-13c22c5c588b96a394ef646c6f962f3e433566970974568085e1dc34697e47f2.scope: Deactivated successfully.
Nov 29 07:13:18 compute-0 sudo[91535]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 07:13:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 29 07:13:18 compute-0 ceph-osd[91083]: osd.2 0 done with init, starting boot process
Nov 29 07:13:18 compute-0 ceph-osd[91083]: osd.2 0 start_boot
Nov 29 07:13:18 compute-0 ceph-osd[91083]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 07:13:18 compute-0 ceph-osd[91083]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 07:13:18 compute-0 ceph-osd[91083]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 07:13:18 compute-0 ceph-osd[91083]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 07:13:18 compute-0 ceph-osd[91083]: osd.2 0  bench count 12288000 bsize 4 KiB
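
    (For scale: the bench writes 12288000 B in 4 KiB blocks, i.e. exactly 3000 IOs.
    At the ~0.988 s elapsed reported for osd.1's run above, that works out to
    3000 / 0.988 ≈ 3036 IOPS, consistent with the 3035.181 IOPS figure that
    triggered the mClock threshold warning.)
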
Nov 29 07:13:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 29 07:13:18 compute-0 podman[91831]: 2025-11-29 07:13:18.524605221 +0000 UTC m=+1.073233395 container remove 35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:13:18 compute-0 systemd[1]: libpod-conmon-35442e3320d5d35e4fd6bb8f57e6703f066ed6a4092b3ca30a9693c901911ad6.scope: Deactivated successfully.
Nov 29 07:13:18 compute-0 sudo[91171]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:18 compute-0 sudo[91884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyurcsyoszxfukcpfldakpooquwncahn ; /usr/bin/python3'
Nov 29 07:13:18 compute-0 sudo[91884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:18 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:18 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/972723503; not ready for session (expect reconnect)
Nov 29 07:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:18 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:18 compute-0 ceph-mon[75050]: pgmap v40: 3 pgs: 1 active+clean, 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 29 07:13:18 compute-0 ceph-mon[75050]: from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 07:13:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2853161994' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:18 compute-0 ceph-mon[75050]: osd.1 [v2:192.168.122.100:6806/1848893869,v1:192.168.122.100:6807/1848893869] boot
Nov 29 07:13:18 compute-0 ceph-mon[75050]: osdmap e14: 3 total, 2 up, 3 in
Nov 29 07:13:18 compute-0 ceph-mon[75050]: from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 07:13:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 07:13:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:18 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=15 pruub=11.622415543s) [] r=-1 lpr=15 pi=[11,15)/1 crt=0'0 mlcod 0'0 active pruub 25.220598221s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:18 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=15 pruub=11.622415543s) [] r=-1 lpr=15 pi=[11,15)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.220598221s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:18 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[13,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:18 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[10,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:18 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:18 compute-0 python3[91886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
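
    Unwrapped from the flattened _raw_params above, the Ansible task runs a one-shot
    ceph client container; the same command reflowed for readability, with arguments
    exactly as logged:

        podman run --rm --net=host --ipc=host \
            --volume /etc/ceph:/etc/ceph:z \
            --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
            --entrypoint ceph quay.io/ceph/ceph:v18 \
            --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 \
            -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
            osd pool create images replicated_rule --autoscale-mode on
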
Nov 29 07:13:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:18 compute-0 podman[91888]: 2025-11-29 07:13:18.838811614 +0000 UTC m=+0.060369906 container create 635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2 (image=quay.io/ceph/ceph:v18, name=adoring_jennings, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:18 compute-0 sudo[91901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:18 compute-0 sudo[91901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:18 compute-0 podman[91888]: 2025-11-29 07:13:18.806820644 +0000 UTC m=+0.028378916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:18 compute-0 sudo[91901]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:18 compute-0 systemd[1]: Started libpod-conmon-635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2.scope.
Nov 29 07:13:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16892addfba90e4b137621abd5889858052eff5acfebf5b7e636558aff72d1d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16892addfba90e4b137621abd5889858052eff5acfebf5b7e636558aff72d1d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:18 compute-0 sudo[91928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:13:18 compute-0 sudo[91928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:18 compute-0 sudo[91928]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:19 compute-0 podman[91888]: 2025-11-29 07:13:19.013573608 +0000 UTC m=+0.235131880 container init 635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2 (image=quay.io/ceph/ceph:v18, name=adoring_jennings, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:19 compute-0 podman[91888]: 2025-11-29 07:13:19.023126957 +0000 UTC m=+0.244685209 container start 635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2 (image=quay.io/ceph/ceph:v18, name=adoring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:13:19 compute-0 sudo[91956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:19 compute-0 sudo[91956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:19 compute-0 sudo[91956]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:19 compute-0 podman[91888]: 2025-11-29 07:13:19.067922836 +0000 UTC m=+0.289481088 container attach 635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2 (image=quay.io/ceph/ceph:v18, name=adoring_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:13:19 compute-0 sudo[91982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:19 compute-0 sudo[91982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:19 compute-0 sudo[91982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:19 compute-0 sudo[92007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:19 compute-0 sudo[92007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:19 compute-0 sudo[92007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:19 compute-0 sudo[92032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:13:19 compute-0 sudo[92032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v43: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:13:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/596388415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:19 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/972723503; not ready for session (expect reconnect)
Nov 29 07:13:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:19 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 07:13:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/596388415' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 29 07:13:19 compute-0 adoring_jennings[91932]: pool 'images' created
Nov 29 07:13:19 compute-0 systemd[1]: libpod-635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2.scope: Deactivated successfully.
Nov 29 07:13:19 compute-0 podman[91888]: 2025-11-29 07:13:19.658007061 +0000 UTC m=+0.879565313 container died 635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2 (image=quay.io/ceph/ceph:v18, name=adoring_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:13:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 29 07:13:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:19 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:19 compute-0 ceph-mon[75050]: purged_snaps scrub starts
Nov 29 07:13:19 compute-0 ceph-mon[75050]: purged_snaps scrub ok
Nov 29 07:13:19 compute-0 ceph-mon[75050]: from='osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 07:13:19 compute-0 ceph-mon[75050]: osdmap e15: 3 total, 2 up, 3 in
Nov 29 07:13:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:19 compute-0 ceph-mon[75050]: pgmap v43: 4 pgs: 2 creating+peering, 1 active+clean, 1 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/596388415' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-16892addfba90e4b137621abd5889858052eff5acfebf5b7e636558aff72d1d6-merged.mount: Deactivated successfully.
Nov 29 07:13:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
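
    The POOL_APP_NOT_ENABLED health warning persists until each new pool is tagged
    with an application. A minimal sketch, assuming the usual OpenStack mapping
    (backups and images served over RBD, cephfs.cephfs.meta belonging to CephFS);
    the log itself does not confirm these assignments:

        ceph osd pool application enable backups rbd
        ceph osd pool application enable images rbd
        ceph osd pool application enable cephfs.cephfs.meta cephfs
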
Nov 29 07:13:19 compute-0 podman[91888]: 2025-11-29 07:13:19.862103143 +0000 UTC m=+1.083661395 container remove 635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2 (image=quay.io/ceph/ceph:v18, name=adoring_jennings, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:19 compute-0 systemd[1]: libpod-conmon-635155310c1eb9d225cfe3d1c760b711906dca8a88b70f332c8d5b13bc3033a2.scope: Deactivated successfully.
Nov 29 07:13:19 compute-0 sudo[91884]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:19 compute-0 podman[92151]: 2025-11-29 07:13:19.929158715 +0000 UTC m=+0.288702382 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:13:20 compute-0 sudo[92209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eujhkljuhnnvakpvabpsvqlhjltzgzje ; /usr/bin/python3'
Nov 29 07:13:20 compute-0 sudo[92209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:20 compute-0 podman[92151]: 2025-11-29 07:13:20.037444202 +0000 UTC m=+0.396987889 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:20 compute-0 python3[92211]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:20 compute-0 podman[92240]: 2025-11-29 07:13:20.204652789 +0000 UTC m=+0.023298532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:20 compute-0 podman[92240]: 2025-11-29 07:13:20.303162547 +0000 UTC m=+0.121808310 container create e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a (image=quay.io/ceph/ceph:v18, name=distracted_brattain, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:20 compute-0 systemd[1]: Started libpod-conmon-e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a.scope.
Nov 29 07:13:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755250f5e79c96d43f0714f9e0791acfaa8fdcfca7e69885f7626f78705642b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e755250f5e79c96d43f0714f9e0791acfaa8fdcfca7e69885f7626f78705642b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:20 compute-0 podman[92240]: 2025-11-29 07:13:20.481599339 +0000 UTC m=+0.300245082 container init e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a (image=quay.io/ceph/ceph:v18, name=distracted_brattain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:20 compute-0 podman[92240]: 2025-11-29 07:13:20.487839756 +0000 UTC m=+0.306485499 container start e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a (image=quay.io/ceph/ceph:v18, name=distracted_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:13:20 compute-0 podman[92240]: 2025-11-29 07:13:20.522248727 +0000 UTC m=+0.340894440 container attach e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a (image=quay.io/ceph/ceph:v18, name=distracted_brattain, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:13:20 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/972723503; not ready for session (expect reconnect)
Nov 29 07:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:20 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/596388415' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:20 compute-0 ceph-mon[75050]: osdmap e16: 3 total, 2 up, 3 in
Nov 29 07:13:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:20 compute-0 ceph-mon[75050]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:13:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:20 compute-0 ceph-mgr[75345]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 07:13:20 compute-0 sudo[92032]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:20 compute-0 sudo[92352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:20 compute-0 sudo[92352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:20 compute-0 sudo[92352]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:20 compute-0 sudo[92377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:20 compute-0 sudo[92377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:20 compute-0 sudo[92377]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:21 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Check health
Nov 29 07:13:21 compute-0 ceph-mgr[75345]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 29 07:13:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 07:13:21 compute-0 sudo[92412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:21 compute-0 sudo[92434]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 29 07:13:21 compute-0 sudo[92434]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:13:21 compute-0 sudo[92434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 29 07:13:21 compute-0 sudo[92412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:21 compute-0 sudo[92412]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:21 compute-0 sudo[92434]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 07:13:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 07:13:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:13:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:13:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2022992207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:21 compute-0 sudo[92441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:13:21 compute-0 sudo[92441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
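
    The sudo command above runs the cluster's hash-suffixed cephadm copy to probe
    local disks for OSD-batch eligibility. Assuming a packaged cephadm on PATH, the
    equivalent hand-run probe (read-only; flags exactly as logged) would be:

        sudo cephadm ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 \
            -- inventory --format=json-pretty --filter-for-batch
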
Nov 29 07:13:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v45: 5 pgs: 2 creating+peering, 1 active+clean, 2 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:21 compute-0 podman[92508]: 2025-11-29 07:13:21.437027415 +0000 UTC m=+0.085664749 container create d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 29 07:13:21 compute-0 podman[92508]: 2025-11-29 07:13:21.376914651 +0000 UTC m=+0.025552005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:21 compute-0 systemd[1]: Started libpod-conmon-d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a.scope.
Nov 29 07:13:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:21 compute-0 podman[92508]: 2025-11-29 07:13:21.585668193 +0000 UTC m=+0.234305547 container init d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:21 compute-0 podman[92508]: 2025-11-29 07:13:21.591940944 +0000 UTC m=+0.240578278 container start d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:13:21 compute-0 silly_meninsky[92525]: 167 167
Nov 29 07:13:21 compute-0 systemd[1]: libpod-d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a.scope: Deactivated successfully.
Nov 29 07:13:21 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/972723503; not ready for session (expect reconnect)
Nov 29 07:13:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:21 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:21 compute-0 podman[92508]: 2025-11-29 07:13:21.639303151 +0000 UTC m=+0.287940515 container attach d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:13:21 compute-0 podman[92508]: 2025-11-29 07:13:21.640648938 +0000 UTC m=+0.289286282 container died d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-956493d33fbba5660f755a4b60c3807b46b2f3e374ba85209346456e0067ee9d-merged.mount: Deactivated successfully.
Nov 29 07:13:21 compute-0 podman[92508]: 2025-11-29 07:13:21.794232929 +0000 UTC m=+0.442870263 container remove d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_meninsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:21 compute-0 systemd[1]: libpod-conmon-d2d3edf5c7375a43b62a9548cc997f44eb66f408c8bba73a2328a591d480985a.scope: Deactivated successfully.
Nov 29 07:13:21 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:21 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:21 compute-0 ceph-mon[75050]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 07:13:21 compute-0 ceph-mon[75050]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 07:13:21 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 07:13:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2022992207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:21 compute-0 ceph-mon[75050]: pgmap v45: 5 pgs: 2 creating+peering, 1 active+clean, 2 unknown; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:21 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 07:13:21 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.kzdpag(active, since 76s)
Nov 29 07:13:22 compute-0 podman[92549]: 2025-11-29 07:13:21.936380512 +0000 UTC m=+0.021561767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2022992207' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 29 07:13:22 compute-0 distracted_brattain[92275]: pool 'cephfs.cephfs.meta' created
Nov 29 07:13:22 compute-0 podman[92549]: 2025-11-29 07:13:22.059358474 +0000 UTC m=+0.144539739 container create 132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 07:13:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 29 07:13:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:22 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:22 compute-0 systemd[1]: libpod-e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a.scope: Deactivated successfully.
Nov 29 07:13:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:22 compute-0 podman[92240]: 2025-11-29 07:13:22.109736612 +0000 UTC m=+1.928382335 container died e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a (image=quay.io/ceph/ceph:v18, name=distracted_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:13:22 compute-0 systemd[1]: Started libpod-conmon-132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4.scope.
Nov 29 07:13:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a89503a2d18830a85d5809ab3ac519f1505bd1ea40be30846532003429d8469/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a89503a2d18830a85d5809ab3ac519f1505bd1ea40be30846532003429d8469/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a89503a2d18830a85d5809ab3ac519f1505bd1ea40be30846532003429d8469/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a89503a2d18830a85d5809ab3ac519f1505bd1ea40be30846532003429d8469/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:22 compute-0 podman[92549]: 2025-11-29 07:13:22.26349421 +0000 UTC m=+0.348675455 container init 132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:22 compute-0 podman[92549]: 2025-11-29 07:13:22.269507973 +0000 UTC m=+0.354689198 container start 132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:13:22 compute-0 podman[92549]: 2025-11-29 07:13:22.308490892 +0000 UTC m=+0.393672127 container attach 132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e755250f5e79c96d43f0714f9e0791acfaa8fdcfca7e69885f7626f78705642b-merged.mount: Deactivated successfully.
Nov 29 07:13:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:22 compute-0 podman[92240]: 2025-11-29 07:13:22.45708762 +0000 UTC m=+2.275733343 container remove e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a (image=quay.io/ceph/ceph:v18, name=distracted_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:22 compute-0 sudo[92209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:22 compute-0 systemd[1]: libpod-conmon-e1ea54fe63c219feb3a9f3b3dacdc559cb33d706b21b1e3279cc06adfd1f6c4a.scope: Deactivated successfully.
Nov 29 07:13:22 compute-0 sudo[92607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ossfiaxrcshdpvoafderwqyzfacbwjex ; /usr/bin/python3'
Nov 29 07:13:22 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/972723503; not ready for session (expect reconnect)
Nov 29 07:13:22 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:22 compute-0 sudo[92607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:22 compute-0 python3[92609]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:22 compute-0 podman[92610]: 2025-11-29 07:13:22.812185168 +0000 UTC m=+0.023540861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:22 compute-0 podman[92610]: 2025-11-29 07:13:22.927387168 +0000 UTC m=+0.138742841 container create 27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:13:22 compute-0 ceph-mon[75050]: mgrmap e9: compute-0.kzdpag(active, since 76s)
Nov 29 07:13:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2022992207' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:22 compute-0 ceph-mon[75050]: osdmap e17: 3 total, 2 up, 3 in
Nov 29 07:13:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:22 compute-0 systemd[1]: Started libpod-conmon-27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc.scope.
Nov 29 07:13:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3a951eee11c90b4fa81c4ae07d122c3b72c70962d095dc6c29fff48d2bc27e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3a951eee11c90b4fa81c4ae07d122c3b72c70962d095dc6c29fff48d2bc27e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:23 compute-0 podman[92610]: 2025-11-29 07:13:23.046238447 +0000 UTC m=+0.257594180 container init 27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:23 compute-0 podman[92610]: 2025-11-29 07:13:23.055426787 +0000 UTC m=+0.266782460 container start 27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 07:13:23 compute-0 podman[92610]: 2025-11-29 07:13:23.07283563 +0000 UTC m=+0.284191313 container attach 27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:23 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v48: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/972723503; not ready for session (expect reconnect)
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1740675901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]: [
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:     {
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "available": false,
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "ceph_device": false,
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "lsm_data": {},
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "lvs": [],
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "path": "/dev/sr0",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "rejected_reasons": [
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "Insufficient space (<5GB)",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "Has a FileSystem"
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         ],
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         "sys_api": {
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "actuators": null,
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "device_nodes": "sr0",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "devname": "sr0",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "human_readable_size": "482.00 KB",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "id_bus": "ata",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "model": "QEMU DVD-ROM",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "nr_requests": "2",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "parent": "/dev/sr0",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "partitions": {},
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "path": "/dev/sr0",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "removable": "1",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "rev": "2.5+",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "ro": "0",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "rotational": "1",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "sas_address": "",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "sas_device_handle": "",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "scheduler_mode": "mq-deadline",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "sectors": 0,
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "sectorsize": "2048",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "size": 493568.0,
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "support_discard": "2048",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "type": "disk",
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:             "vendor": "QEMU"
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:         }
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]:     }
Nov 29 07:13:23 compute-0 quizzical_feynman[92578]: ]
Nov 29 07:13:23 compute-0 systemd[1]: libpod-132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4.scope: Deactivated successfully.
Nov 29 07:13:23 compute-0 podman[92549]: 2025-11-29 07:13:23.775912812 +0000 UTC m=+1.861094037 container died 132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:23 compute-0 systemd[1]: libpod-132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4.scope: Consumed 1.514s CPU time.
Nov 29 07:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a89503a2d18830a85d5809ab3ac519f1505bd1ea40be30846532003429d8469-merged.mount: Deactivated successfully.
Nov 29 07:13:23 compute-0 podman[92549]: 2025-11-29 07:13:23.873160023 +0000 UTC m=+1.958341258 container remove 132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:23 compute-0 systemd[1]: libpod-conmon-132ef15fa916499e709b0ce1f8985c7a5dd602e62ed5e8137f17a1c627f47cf4.scope: Deactivated successfully.
Nov 29 07:13:23 compute-0 sudo[92441]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43692k
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43692k
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44741154: error parsing value: Value '44741154' is below minimum 939524096
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44741154: error parsing value: Value '44741154' is below minimum 939524096
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3117233e-8a2c-4c93-b438-ee58a454aa65 does not exist
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a3b1946a-f382-4de4-aa1d-1d1dcbe517ea does not exist
Nov 29 07:13:23 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b7533cb6-e51a-4dde-b93e-3034c79c19a1 does not exist
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:24 compute-0 sudo[94629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:24 compute-0 sudo[94629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:24 compute-0 sudo[94629]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 07:13:24 compute-0 ceph-mon[75050]: osdmap e18: 3 total, 2 up, 3 in
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: pgmap v48: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1740675901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:24 compute-0 sudo[94654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:24 compute-0 sudo[94654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:24 compute-0 sudo[94654]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:24 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1740675901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Nov 29 07:13:24 compute-0 frosty_mccarthy[92628]: pool 'cephfs.cephfs.data' created
Nov 29 07:13:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Nov 29 07:13:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:24 compute-0 systemd[1]: libpod-27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc.scope: Deactivated successfully.
Nov 29 07:13:24 compute-0 podman[92610]: 2025-11-29 07:13:24.146732367 +0000 UTC m=+1.358088040 container died 27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:13:24 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 19 pg[7.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:24 compute-0 sudo[94680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:24 compute-0 sudo[94680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:24 compute-0 sudo[94680]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd3a951eee11c90b4fa81c4ae07d122c3b72c70962d095dc6c29fff48d2bc27e-merged.mount: Deactivated successfully.
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.463 iops: 4982.465 elapsed_sec: 0.602
Nov 29 07:13:24 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : OSD bench result of 4982.465458 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:13:24 compute-0 podman[92610]: 2025-11-29 07:13:24.238413778 +0000 UTC m=+1.449769451 container remove 27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 0 waiting for initial osdmap
Nov 29 07:13:24 compute-0 sudo[94718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:13:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:13:24.237+0000 7fe159d30640 -1 osd.2 0 waiting for initial osdmap
Nov 29 07:13:24 compute-0 sudo[94718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:24 compute-0 systemd[1]: libpod-conmon-27d2e1bd27c5d8be6a9629c824c2a38d50dbaea60b90f37f1736020a12c517fc.scope: Deactivated successfully.
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 19 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 19 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 19 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 19 check_osdmap_features require_osd_release unknown -> reef
Nov 29 07:13:24 compute-0 sudo[92607]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 19 set_numa_affinity not setting numa affinity
Nov 29 07:13:24 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:13:24.269+0000 7fe155358640 -1 osd.2 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 07:13:24 compute-0 ceph-osd[91083]: osd.2 19 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 29 07:13:24 compute-0 sudo[94780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwvkpnsrjmdodlslxfvtahsayaliytwu ; /usr/bin/python3'
Nov 29 07:13:24 compute-0 sudo[94780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:24 compute-0 podman[94809]: 2025-11-29 07:13:24.568320781 +0000 UTC m=+0.042919158 container create 3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:13:24 compute-0 python3[94789]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:24 compute-0 systemd[1]: Started libpod-conmon-3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942.scope.
Nov 29 07:13:24 compute-0 ceph-mgr[75345]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/972723503; not ready for session (expect reconnect)
Nov 29 07:13:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:24 compute-0 ceph-mgr[75345]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 07:13:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:24 compute-0 podman[94809]: 2025-11-29 07:13:24.549353816 +0000 UTC m=+0.023952243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:24 compute-0 podman[94809]: 2025-11-29 07:13:24.648394776 +0000 UTC m=+0.122993173 container init 3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:24 compute-0 podman[94809]: 2025-11-29 07:13:24.656147837 +0000 UTC m=+0.130746214 container start 3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:24 compute-0 competent_black[94826]: 167 167
Nov 29 07:13:24 compute-0 systemd[1]: libpod-3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942.scope: Deactivated successfully.
Nov 29 07:13:24 compute-0 podman[94825]: 2025-11-29 07:13:24.660842185 +0000 UTC m=+0.045778415 container create d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a (image=quay.io/ceph/ceph:v18, name=focused_hawking, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:24 compute-0 podman[94809]: 2025-11-29 07:13:24.668862403 +0000 UTC m=+0.143460870 container attach 3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:13:24 compute-0 podman[94809]: 2025-11-29 07:13:24.670550598 +0000 UTC m=+0.145149005 container died 3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:24 compute-0 systemd[1]: Started libpod-conmon-d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a.scope.
Nov 29 07:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d59f8fbb38fd4dbd96899fe5b7bcd2a29662c067f0f4db7b55f5ef806925f4cd-merged.mount: Deactivated successfully.
Nov 29 07:13:24 compute-0 podman[94809]: 2025-11-29 07:13:24.724337419 +0000 UTC m=+0.198935796 container remove 3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855d1a8dfb09422ba0e6e29e29eae3f8352d0cc1c1eac9d539937dd8dce43554/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:24 compute-0 systemd[1]: libpod-conmon-3f829fce641873371db21b33fe574d3e2bd21cbb81bb179eae3edad35f5f9942.scope: Deactivated successfully.
Nov 29 07:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855d1a8dfb09422ba0e6e29e29eae3f8352d0cc1c1eac9d539937dd8dce43554/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:24 compute-0 podman[94825]: 2025-11-29 07:13:24.643152084 +0000 UTC m=+0.028088344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:24 compute-0 podman[94825]: 2025-11-29 07:13:24.749308218 +0000 UTC m=+0.134244478 container init d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a (image=quay.io/ceph/ceph:v18, name=focused_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 29 07:13:24 compute-0 podman[94825]: 2025-11-29 07:13:24.755036504 +0000 UTC m=+0.139972734 container start d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a (image=quay.io/ceph/ceph:v18, name=focused_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:13:24 compute-0 podman[94825]: 2025-11-29 07:13:24.759898046 +0000 UTC m=+0.144834286 container attach d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a (image=quay.io/ceph/ceph:v18, name=focused_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:13:24 compute-0 podman[94866]: 2025-11-29 07:13:24.894351289 +0000 UTC m=+0.044972163 container create baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:24 compute-0 systemd[1]: Started libpod-conmon-baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c.scope.
Nov 29 07:13:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93eac979405acdc5c0cb0f1b428297f793bff333c32adf49ce28e6c5f8a429a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93eac979405acdc5c0cb0f1b428297f793bff333c32adf49ce28e6c5f8a429a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93eac979405acdc5c0cb0f1b428297f793bff333c32adf49ce28e6c5f8a429a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93eac979405acdc5c0cb0f1b428297f793bff333c32adf49ce28e6c5f8a429a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93eac979405acdc5c0cb0f1b428297f793bff333c32adf49ce28e6c5f8a429a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:24 compute-0 podman[94866]: 2025-11-29 07:13:24.974053424 +0000 UTC m=+0.124674348 container init baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:24 compute-0 podman[94866]: 2025-11-29 07:13:24.874952402 +0000 UTC m=+0.025573306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:24 compute-0 podman[94866]: 2025-11-29 07:13:24.98088593 +0000 UTC m=+0.131506804 container start baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:13:24 compute-0 podman[94866]: 2025-11-29 07:13:24.984750765 +0000 UTC m=+0.135371669 container attach baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:13:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 07:13:25 compute-0 ceph-mon[75050]: Adjusting osd_memory_target on compute-0 to 43692k
Nov 29 07:13:25 compute-0 ceph-mon[75050]: Unable to set osd_memory_target on compute-0 to 44741154: error parsing value: Value '44741154' is below minimum 939524096
Nov 29 07:13:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1740675901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 07:13:25 compute-0 ceph-mon[75050]: osdmap e19: 3 total, 2 up, 3 in
Nov 29 07:13:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:25 compute-0 ceph-mon[75050]: OSD bench result of 4982.465458 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 07:13:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 29 07:13:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503] boot
Nov 29 07:13:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 29 07:13:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 07:13:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:25 compute-0 ceph-osd[91083]: osd.2 20 state: booting -> active
Nov 29 07:13:25 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 pi=[16,20)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:25 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 20 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 07:13:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/608329175' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 07:13:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v51: 7 pgs: 4 active+clean, 3 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:26 compute-0 happy_booth[94882]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:13:26 compute-0 happy_booth[94882]: --> relative data size: 1.0
Nov 29 07:13:26 compute-0 happy_booth[94882]: --> All data devices are unavailable
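This ceph-volume probe (the happy_booth container) sees 0 physical and 3 LVM data devices and declares them all unavailable, which is expected: the three LVs already carry OSDs 0-2, as the lvm list output further down confirms. To see rejection reasons directly, a sketch using the same cephadm pass-through form the playbook uses (fsid as in the log):

    # inventory with per-device availability and reject reasons
    cephadm ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- inventory
    # or the orchestrator's view of the same devices
    ceph orch device ls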
Nov 29 07:13:26 compute-0 systemd[1]: libpod-baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c.scope: Deactivated successfully.
Nov 29 07:13:26 compute-0 podman[94866]: 2025-11-29 07:13:26.031909096 +0000 UTC m=+1.182529970 container died baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f93eac979405acdc5c0cb0f1b428297f793bff333c32adf49ce28e6c5f8a429a-merged.mount: Deactivated successfully.
Nov 29 07:13:26 compute-0 podman[94866]: 2025-11-29 07:13:26.098951558 +0000 UTC m=+1.249572432 container remove baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_booth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:13:26 compute-0 systemd[1]: libpod-conmon-baa601252006ac1c8a85ec823e39c2fa60976055b49efec71f0dcb5684422a1c.scope: Deactivated successfully.
Nov 29 07:13:26 compute-0 sudo[94718]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 07:13:26 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
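POOL_APP_NOT_ENABLED is the health check this part of the run is burning down, one pool at a time (vms here, volumes and backups just below). A short sketch grounded in the audit lines around this point (ceph health detail is a standard companion command, not itself shown in this log):

    # list the offending pools
    ceph health detail
    # tag an RBD pool, as this run does for vms/volumes/backups
    ceph osd pool application enable vms rbd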
Nov 29 07:13:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/608329175' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 07:13:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 29 07:13:26 compute-0 focused_hawking[94854]: enabled application 'rbd' on pool 'vms'
Nov 29 07:13:26 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 29 07:13:26 compute-0 ceph-mon[75050]: osd.2 [v2:192.168.122.100:6810/972723503,v1:192.168.122.100:6811/972723503] boot
Nov 29 07:13:26 compute-0 ceph-mon[75050]: osdmap e20: 3 total, 3 up, 3 in
Nov 29 07:13:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 07:13:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/608329175' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 07:13:26 compute-0 ceph-mon[75050]: pgmap v51: 7 pgs: 4 active+clean, 3 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 29 07:13:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 pi=[16,20)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-0 systemd[1]: libpod-d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a.scope: Deactivated successfully.
Nov 29 07:13:26 compute-0 podman[94825]: 2025-11-29 07:13:26.180515264 +0000 UTC m=+1.565451494 container died d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a (image=quay.io/ceph/ceph:v18, name=focused_hawking, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:13:26 compute-0 sudo[94944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:26 compute-0 sudo[94944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:26 compute-0 sudo[94944]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-855d1a8dfb09422ba0e6e29e29eae3f8352d0cc1c1eac9d539937dd8dce43554-merged.mount: Deactivated successfully.
Nov 29 07:13:26 compute-0 podman[94825]: 2025-11-29 07:13:26.235619751 +0000 UTC m=+1.620555981 container remove d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a (image=quay.io/ceph/ceph:v18, name=focused_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:26 compute-0 systemd[1]: libpod-conmon-d581e009190fe105888869c032cb12b0d12f18d39479854f8a200db90984f44a.scope: Deactivated successfully.
Nov 29 07:13:26 compute-0 sudo[94977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:26 compute-0 sudo[94977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:26 compute-0 sudo[94780]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:26 compute-0 sudo[94977]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:26 compute-0 sudo[95008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:26 compute-0 sudo[95008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:26 compute-0 sudo[95008]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:26 compute-0 sudo[95033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:13:26 compute-0 sudo[95033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:26 compute-0 sudo[95079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntbkquusyvhnvmrzxgtehumnrvdukvyv ; /usr/bin/python3'
Nov 29 07:13:26 compute-0 sudo[95079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:26 compute-0 python3[95083]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
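Note the pattern in this Ansible task: the admin CLI runs from a throwaway container instead of a host-installed ceph client. The same approach works interactively; a hedged sketch reusing the image and keyring paths from the log line above ("status" stands in for any ceph subcommand):

    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status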
Nov 29 07:13:26 compute-0 podman[95098]: 2025-11-29 07:13:26.669781657 +0000 UTC m=+0.099510106 container create 21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67 (image=quay.io/ceph/ceph:v18, name=gracious_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:26 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=20 pruub=3.596966505s) [2] r=-1 lpr=20 pi=[11,20)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.220598221s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:13:26 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=20 pruub=3.596898317s) [2] r=-1 lpr=20 pi=[11,20)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.220598221s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:13:26 compute-0 podman[95098]: 2025-11-29 07:13:26.597718089 +0000 UTC m=+0.027446557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=20) [2] r=0 lpr=20 pi=[11,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:13:26 compute-0 systemd[1]: Started libpod-conmon-21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67.scope.
Nov 29 07:13:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14ccc0067171bfffbe66a2c0b5308c40b2bf778dc6a28dfcfc91be77cfa369e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14ccc0067171bfffbe66a2c0b5308c40b2bf778dc6a28dfcfc91be77cfa369e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=20) [2] r=0 lpr=20 pi=[11,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:13:26 compute-0 podman[95098]: 2025-11-29 07:13:26.811852617 +0000 UTC m=+0.241581165 container init 21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67 (image=quay.io/ceph/ceph:v18, name=gracious_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:13:26 compute-0 podman[95098]: 2025-11-29 07:13:26.824680976 +0000 UTC m=+0.254409454 container start 21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67 (image=quay.io/ceph/ceph:v18, name=gracious_lichterman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:13:26 compute-0 podman[95098]: 2025-11-29 07:13:26.832018555 +0000 UTC m=+0.261747053 container attach 21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67 (image=quay.io/ceph/ceph:v18, name=gracious_lichterman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:13:26 compute-0 podman[95141]: 2025-11-29 07:13:26.973760286 +0000 UTC m=+0.110108292 container create fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:26 compute-0 podman[95141]: 2025-11-29 07:13:26.891158322 +0000 UTC m=+0.027506368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:27 compute-0 systemd[1]: Started libpod-conmon-fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa.scope.
Nov 29 07:13:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 07:13:27 compute-0 podman[95141]: 2025-11-29 07:13:27.288775925 +0000 UTC m=+0.425123971 container init fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 29 07:13:27 compute-0 podman[95141]: 2025-11-29 07:13:27.294026918 +0000 UTC m=+0.430374924 container start fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatterjee, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:13:27 compute-0 youthful_chatterjee[95157]: 167 167
Nov 29 07:13:27 compute-0 systemd[1]: libpod-fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa.scope: Deactivated successfully.
Nov 29 07:13:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 29 07:13:27 compute-0 podman[95141]: 2025-11-29 07:13:27.327906329 +0000 UTC m=+0.464254315 container attach fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:13:27 compute-0 podman[95141]: 2025-11-29 07:13:27.32832801 +0000 UTC m=+0.464675976 container died fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:13:27 compute-0 ceph-mon[75050]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:13:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/608329175' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 07:13:27 compute-0 ceph-mon[75050]: osdmap e21: 3 total, 3 up, 3 in
Nov 29 07:13:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 07:13:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1459004168' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 07:13:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-78471c1731985aba190d5f8efa6853aa694a02e627e02264fb9b7379bc6f36e6-merged.mount: Deactivated successfully.
Nov 29 07:13:27 compute-0 podman[95141]: 2025-11-29 07:13:27.858057832 +0000 UTC m=+0.994405808 container remove fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatterjee, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:27 compute-0 systemd[1]: libpod-conmon-fc375908ab280445783ef62616d8d4d01b86b1e9ff54f942f707bed84347d3fa.scope: Deactivated successfully.
Nov 29 07:13:28 compute-0 podman[95203]: 2025-11-29 07:13:27.996483373 +0000 UTC m=+0.027933160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:28 compute-0 podman[95203]: 2025-11-29 07:13:28.297903673 +0000 UTC m=+0.329353460 container create f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhaskara, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:13:28 compute-0 systemd[1]: Started libpod-conmon-f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61.scope.
Nov 29 07:13:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939d22c6da19f99c467d01f32d6df0c88182e05756d8b6d7cfcc03d5321a0721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939d22c6da19f99c467d01f32d6df0c88182e05756d8b6d7cfcc03d5321a0721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939d22c6da19f99c467d01f32d6df0c88182e05756d8b6d7cfcc03d5321a0721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939d22c6da19f99c467d01f32d6df0c88182e05756d8b6d7cfcc03d5321a0721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 07:13:29 compute-0 ceph-mon[75050]: osdmap e22: 3 total, 3 up, 3 in
Nov 29 07:13:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1459004168' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 07:13:29 compute-0 ceph-mon[75050]: pgmap v54: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:29 compute-0 podman[95203]: 2025-11-29 07:13:29.821239322 +0000 UTC m=+1.852689129 container init f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhaskara, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:29 compute-0 podman[95203]: 2025-11-29 07:13:29.828654663 +0000 UTC m=+1.860104420 container start f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhaskara, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:29 compute-0 podman[95203]: 2025-11-29 07:13:29.849850649 +0000 UTC m=+1.881300506 container attach f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:13:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1459004168' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 07:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 29 07:13:29 compute-0 gracious_lichterman[95137]: enabled application 'rbd' on pool 'volumes'
Nov 29 07:13:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 29 07:13:29 compute-0 systemd[1]: libpod-21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67.scope: Deactivated successfully.
Nov 29 07:13:29 compute-0 podman[95098]: 2025-11-29 07:13:29.883521194 +0000 UTC m=+3.313249642 container died 21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67 (image=quay.io/ceph/ceph:v18, name=gracious_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f14ccc0067171bfffbe66a2c0b5308c40b2bf778dc6a28dfcfc91be77cfa369e-merged.mount: Deactivated successfully.
Nov 29 07:13:30 compute-0 podman[95098]: 2025-11-29 07:13:30.243780453 +0000 UTC m=+3.673508901 container remove 21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67 (image=quay.io/ceph/ceph:v18, name=gracious_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:13:30 compute-0 systemd[1]: libpod-conmon-21b8dd8e43360ac26423e5a3b2a27080b108ecb991af6d2abe8e12e034cf3a67.scope: Deactivated successfully.
Nov 29 07:13:30 compute-0 sudo[95079]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:30 compute-0 sudo[95262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqbchptjuziwooahxglljcokmvysydwj ; /usr/bin/python3'
Nov 29 07:13:30 compute-0 sudo[95262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:30 compute-0 python3[95264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]: {
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:     "0": [
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:         {
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "devices": [
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "/dev/loop3"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             ],
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_name": "ceph_lv0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_size": "21470642176",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "name": "ceph_lv0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "tags": {
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cluster_name": "ceph",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.crush_device_class": "",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.encrypted": "0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osd_id": "0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.type": "block",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.vdo": "0"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             },
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "type": "block",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "vg_name": "ceph_vg0"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:         }
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:     ],
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:     "1": [
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:         {
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "devices": [
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "/dev/loop4"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             ],
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_name": "ceph_lv1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_size": "21470642176",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "name": "ceph_lv1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "tags": {
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cluster_name": "ceph",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.crush_device_class": "",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.encrypted": "0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osd_id": "1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.type": "block",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.vdo": "0"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             },
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "type": "block",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "vg_name": "ceph_vg1"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:         }
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:     ],
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:     "2": [
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:         {
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "devices": [
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "/dev/loop5"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             ],
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_name": "ceph_lv2",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_size": "21470642176",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "name": "ceph_lv2",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "tags": {
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.cluster_name": "ceph",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.crush_device_class": "",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.encrypted": "0",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osd_id": "2",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.type": "block",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:                 "ceph.vdo": "0"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             },
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "type": "block",
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:             "vg_name": "ceph_vg2"
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:         }
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]:     ]
Nov 29 07:13:30 compute-0 sweet_bhaskara[95219]: }
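The JSON block above is the output of the "lvm list --format json" call issued at 07:13:26: a map from OSD id to its logical volume, with the cluster fsid, OSD fsid, and backing loop device recorded as LV tags. A hedged one-liner to reduce it to an OSD-to-device table, assuming jq is available on the host (jq appears nowhere in this log):

    # map OSD id -> backing device -> LV path
    cephadm ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 \
      -- lvm list --format json \
      | jq -r 'to_entries[] | "\(.key) \(.value[0].devices[0]) \(.value[0].lv_path)"'
    # e.g. prints: 0 /dev/loop3 /dev/ceph_vg0/ceph_lv0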
Nov 29 07:13:30 compute-0 podman[95265]: 2025-11-29 07:13:30.687287773 +0000 UTC m=+0.097739797 container create 8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250 (image=quay.io/ceph/ceph:v18, name=festive_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:13:30 compute-0 systemd[1]: libpod-f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61.scope: Deactivated successfully.
Nov 29 07:13:30 compute-0 podman[95203]: 2025-11-29 07:13:30.720713651 +0000 UTC m=+2.752163448 container died f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:30 compute-0 podman[95265]: 2025-11-29 07:13:30.633114011 +0000 UTC m=+0.043566055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:30 compute-0 systemd[1]: Started libpod-conmon-8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250.scope.
Nov 29 07:13:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088f2908ed088b972ffa3cb9a755e05c7aaeed7d4c0c8aadf8cedb7b0659bd1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4088f2908ed088b972ffa3cb9a755e05c7aaeed7d4c0c8aadf8cedb7b0659bd1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:30 compute-0 ceph-mon[75050]: pgmap v55: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1459004168' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 07:13:30 compute-0 ceph-mon[75050]: osdmap e23: 3 total, 3 up, 3 in
Nov 29 07:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-939d22c6da19f99c467d01f32d6df0c88182e05756d8b6d7cfcc03d5321a0721-merged.mount: Deactivated successfully.
Nov 29 07:13:31 compute-0 podman[95203]: 2025-11-29 07:13:31.086472378 +0000 UTC m=+3.117922135 container remove f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhaskara, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:31 compute-0 systemd[1]: libpod-conmon-f947e2e2aff229d676cc7a51e2f356eb62459afd4c87216b8a8a87dfdbf80a61.scope: Deactivated successfully.
Nov 29 07:13:31 compute-0 podman[95265]: 2025-11-29 07:13:31.118100278 +0000 UTC m=+0.528552322 container init 8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250 (image=quay.io/ceph/ceph:v18, name=festive_lichterman, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:13:31 compute-0 sudo[95033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:31 compute-0 podman[95265]: 2025-11-29 07:13:31.124148992 +0000 UTC m=+0.534601016 container start 8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250 (image=quay.io/ceph/ceph:v18, name=festive_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:31 compute-0 podman[95265]: 2025-11-29 07:13:31.127451362 +0000 UTC m=+0.537903386 container attach 8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250 (image=quay.io/ceph/ceph:v18, name=festive_lichterman, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:31 compute-0 sudo[95300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:31 compute-0 sudo[95300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:31 compute-0 sudo[95300]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:31 compute-0 sudo[95325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:31 compute-0 sudo[95325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:31 compute-0 sudo[95325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:31 compute-0 sudo[95350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:31 compute-0 sudo[95350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:31 compute-0 sudo[95350]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:31 compute-0 sudo[95375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:13:31 compute-0 sudo[95375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:31 compute-0 podman[95457]: 2025-11-29 07:13:31.658076258 +0000 UTC m=+0.042954707 container create b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:13:31 compute-0 systemd[1]: Started libpod-conmon-b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c.scope.
Nov 29 07:13:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:31 compute-0 podman[95457]: 2025-11-29 07:13:31.721582724 +0000 UTC m=+0.106461213 container init b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:13:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 07:13:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715394026' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 07:13:31 compute-0 podman[95457]: 2025-11-29 07:13:31.727742551 +0000 UTC m=+0.112621010 container start b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:31 compute-0 objective_colden[95473]: 167 167
Nov 29 07:13:31 compute-0 systemd[1]: libpod-b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c.scope: Deactivated successfully.
Nov 29 07:13:31 compute-0 podman[95457]: 2025-11-29 07:13:31.732702836 +0000 UTC m=+0.117581325 container attach b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:13:31 compute-0 podman[95457]: 2025-11-29 07:13:31.733304722 +0000 UTC m=+0.118183191 container died b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:13:31 compute-0 podman[95457]: 2025-11-29 07:13:31.638560748 +0000 UTC m=+0.023439227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-19eb595b3d60bce825eb7f4e0e2cb4216a478ad87e17ee80e25b4e0b68aa7f12-merged.mount: Deactivated successfully.
Nov 29 07:13:31 compute-0 podman[95457]: 2025-11-29 07:13:31.771794359 +0000 UTC m=+0.156672818 container remove b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:31 compute-0 systemd[1]: libpod-conmon-b60c8ca39c7c2057605aa4631f6e80a1de85b1b2f183b7938df8de8707dd8a8c.scope: Deactivated successfully.
Nov 29 07:13:31 compute-0 podman[95497]: 2025-11-29 07:13:31.929955845 +0000 UTC m=+0.055695413 container create 24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:13:31 compute-0 systemd[1]: Started libpod-conmon-24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0.scope.
Nov 29 07:13:31 compute-0 podman[95497]: 2025-11-29 07:13:31.896819936 +0000 UTC m=+0.022559534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138b61974226f71be74acc42fd68477df2157482dedf7c565c1df92250239773/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138b61974226f71be74acc42fd68477df2157482dedf7c565c1df92250239773/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138b61974226f71be74acc42fd68477df2157482dedf7c565c1df92250239773/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/138b61974226f71be74acc42fd68477df2157482dedf7c565c1df92250239773/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:13:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 07:13:32 compute-0 podman[95497]: 2025-11-29 07:13:32.01957733 +0000 UTC m=+0.145316928 container init 24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:13:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715394026' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 07:13:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 29 07:13:32 compute-0 festive_lichterman[95290]: enabled application 'rbd' on pool 'backups'
Nov 29 07:13:32 compute-0 ceph-mon[75050]: pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/715394026' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 07:13:32 compute-0 podman[95497]: 2025-11-29 07:13:32.031506035 +0000 UTC m=+0.157245603 container start 24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 29 07:13:32 compute-0 podman[95497]: 2025-11-29 07:13:32.035092572 +0000 UTC m=+0.160832150 container attach 24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:32 compute-0 systemd[1]: libpod-8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250.scope: Deactivated successfully.
Nov 29 07:13:32 compute-0 podman[95265]: 2025-11-29 07:13:32.042860304 +0000 UTC m=+1.453312328 container died 8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250 (image=quay.io/ceph/ceph:v18, name=festive_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4088f2908ed088b972ffa3cb9a755e05c7aaeed7d4c0c8aadf8cedb7b0659bd1-merged.mount: Deactivated successfully.
Nov 29 07:13:32 compute-0 podman[95265]: 2025-11-29 07:13:32.233091062 +0000 UTC m=+1.643543096 container remove 8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250 (image=quay.io/ceph/ceph:v18, name=festive_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:32 compute-0 systemd[1]: libpod-conmon-8142ddfbe82d1f89089eae41ba84d0275ea61c0977e5518d5b285571b5b9c250.scope: Deactivated successfully.
Nov 29 07:13:32 compute-0 sudo[95262]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:32 compute-0 sudo[95553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hopgbzneagvwhrgieqoxgsxolieawyeo ; /usr/bin/python3'
Nov 29 07:13:32 compute-0 sudo[95553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:32 compute-0 python3[95555]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:32 compute-0 podman[95556]: 2025-11-29 07:13:32.731487544 +0000 UTC m=+0.081862576 container create 00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848 (image=quay.io/ceph/ceph:v18, name=peaceful_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:13:32 compute-0 podman[95556]: 2025-11-29 07:13:32.68386068 +0000 UTC m=+0.034235752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:32 compute-0 systemd[1]: Started libpod-conmon-00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848.scope.
Nov 29 07:13:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba42fe3d9cd64ff892669b63b4078369a1b9b6471c364a2bdef1c852f1dd57d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba42fe3d9cd64ff892669b63b4078369a1b9b6471c364a2bdef1c852f1dd57d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:32 compute-0 podman[95556]: 2025-11-29 07:13:32.861931587 +0000 UTC m=+0.212306659 container init 00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848 (image=quay.io/ceph/ceph:v18, name=peaceful_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:13:32 compute-0 podman[95556]: 2025-11-29 07:13:32.870575192 +0000 UTC m=+0.220950224 container start 00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848 (image=quay.io/ceph/ceph:v18, name=peaceful_jepsen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:13:32 compute-0 podman[95556]: 2025-11-29 07:13:32.879265988 +0000 UTC m=+0.229641070 container attach 00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848 (image=quay.io/ceph/ceph:v18, name=peaceful_jepsen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:33 compute-0 ceph-mon[75050]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:13:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/715394026' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 07:13:33 compute-0 ceph-mon[75050]: osdmap e24: 3 total, 3 up, 3 in
Nov 29 07:13:33 compute-0 funny_edison[95513]: {
Nov 29 07:13:33 compute-0 funny_edison[95513]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "osd_id": 2,
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "type": "bluestore"
Nov 29 07:13:33 compute-0 funny_edison[95513]:     },
Nov 29 07:13:33 compute-0 funny_edison[95513]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "osd_id": 1,
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "type": "bluestore"
Nov 29 07:13:33 compute-0 funny_edison[95513]:     },
Nov 29 07:13:33 compute-0 funny_edison[95513]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "osd_id": 0,
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:13:33 compute-0 funny_edison[95513]:         "type": "bluestore"
Nov 29 07:13:33 compute-0 funny_edison[95513]:     }
Nov 29 07:13:33 compute-0 funny_edison[95513]: }
Nov 29 07:13:33 compute-0 systemd[1]: libpod-24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0.scope: Deactivated successfully.
Nov 29 07:13:33 compute-0 podman[95497]: 2025-11-29 07:13:33.162613446 +0000 UTC m=+1.288353024 container died 24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:33 compute-0 systemd[1]: libpod-24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0.scope: Consumed 1.051s CPU time.
Nov 29 07:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-138b61974226f71be74acc42fd68477df2157482dedf7c565c1df92250239773-merged.mount: Deactivated successfully.
Nov 29 07:13:33 compute-0 podman[95497]: 2025-11-29 07:13:33.241140651 +0000 UTC m=+1.366880219 container remove 24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:33 compute-0 systemd[1]: libpod-conmon-24bcace459d82a268451707458551e4f35e9ad7ae09142eeebf6ce2ec6e62df0.scope: Deactivated successfully.
Nov 29 07:13:33 compute-0 sudo[95375]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:33 compute-0 sudo[95635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:33 compute-0 sudo[95635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-0 sudo[95635]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-0 sudo[95660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:13:33 compute-0 sudo[95660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-0 sudo[95660]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:33 compute-0 sudo[95685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:33 compute-0 sudo[95685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-0 sudo[95685]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 07:13:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2976737433' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 07:13:33 compute-0 sudo[95711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:33 compute-0 sudo[95711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-0 sudo[95711]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-0 sudo[95736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:33 compute-0 sudo[95736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:33 compute-0 sudo[95736]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:33 compute-0 sudo[95761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:13:33 compute-0 sudo[95761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-0 podman[95858]: 2025-11-29 07:13:34.151136315 +0000 UTC m=+0.104498870 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:34 compute-0 podman[95858]: 2025-11-29 07:13:34.252400657 +0000 UTC m=+0.205763162 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:34 compute-0 ceph-mon[75050]: pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2976737433' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 07:13:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 07:13:34 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2976737433' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 07:13:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 29 07:13:34 compute-0 peaceful_jepsen[95571]: enabled application 'rbd' on pool 'images'
Nov 29 07:13:34 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 29 07:13:34 compute-0 systemd[1]: libpod-00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848.scope: Deactivated successfully.
Nov 29 07:13:34 compute-0 podman[95556]: 2025-11-29 07:13:34.347994994 +0000 UTC m=+1.698370026 container died 00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848 (image=quay.io/ceph/ceph:v18, name=peaceful_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ba42fe3d9cd64ff892669b63b4078369a1b9b6471c364a2bdef1c852f1dd57d-merged.mount: Deactivated successfully.
Nov 29 07:13:34 compute-0 podman[95556]: 2025-11-29 07:13:34.410592985 +0000 UTC m=+1.760968017 container remove 00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848 (image=quay.io/ceph/ceph:v18, name=peaceful_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:34 compute-0 systemd[1]: libpod-conmon-00c7ab572d23730bc549d968b3fed332539d4a913ecf56de717eeb40ec681848.scope: Deactivated successfully.
Nov 29 07:13:34 compute-0 sudo[95553]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-0 sudo[95967]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bktnwoqjlvqwqpsjlagbvvffveozvdna ; /usr/bin/python3'
Nov 29 07:13:34 compute-0 sudo[95967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:34 compute-0 python3[95981]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:34 compute-0 sudo[95761]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-0 podman[96009]: 2025-11-29 07:13:34.752522485 +0000 UTC m=+0.045654371 container create 9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8 (image=quay.io/ceph/ceph:v18, name=pedantic_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:13:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:34 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:34 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:34 compute-0 systemd[1]: Started libpod-conmon-9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8.scope.
Nov 29 07:13:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c31da7b847066ef526b9041e4879927c1a2e992f4b46eb4ed74936ab7b11dd6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c31da7b847066ef526b9041e4879927c1a2e992f4b46eb4ed74936ab7b11dd6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:34 compute-0 podman[96009]: 2025-11-29 07:13:34.734822994 +0000 UTC m=+0.027954910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:34 compute-0 podman[96009]: 2025-11-29 07:13:34.831060389 +0000 UTC m=+0.124192305 container init 9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8 (image=quay.io/ceph/ceph:v18, name=pedantic_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 07:13:34 compute-0 sudo[96036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:34 compute-0 sudo[96036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-0 podman[96009]: 2025-11-29 07:13:34.83920927 +0000 UTC m=+0.132341156 container start 9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8 (image=quay.io/ceph/ceph:v18, name=pedantic_galois, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:13:34 compute-0 sudo[96036]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-0 podman[96009]: 2025-11-29 07:13:34.84325909 +0000 UTC m=+0.136390976 container attach 9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8 (image=quay.io/ceph/ceph:v18, name=pedantic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:13:34 compute-0 sudo[96065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:34 compute-0 sudo[96065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-0 sudo[96065]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:34 compute-0 sudo[96090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:34 compute-0 sudo[96090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:34 compute-0 sudo[96090]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-0 sudo[96115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:13:35 compute-0 sudo[96115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2976737433' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 07:13:35 compute-0 ceph-mon[75050]: osdmap e25: 3 total, 3 up, 3 in
Nov 29 07:13:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 07:13:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1075578666' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:35 compute-0 sudo[96115]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:35 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:13:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:13:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1335166a-4a52-4a45-9917-f8e38adadd1e does not exist
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 936b0796-62e3-4140-b5ec-8946be5bae43 does not exist
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev bb046bf4-f4b3-4546-abf4-e18cfb249588 does not exist
Nov 29 07:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:13:35 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:13:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:35 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:13:35 compute-0 sudo[96190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:35 compute-0 sudo[96190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-0 sudo[96190]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-0 sudo[96215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:35 compute-0 sudo[96215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-0 sudo[96215]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-0 sudo[96240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:35 compute-0 sudo[96240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:35 compute-0 sudo[96240]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:35 compute-0 sudo[96265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:13:35 compute-0 sudo[96265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:36 compute-0 podman[96328]: 2025-11-29 07:13:36.081122663 +0000 UTC m=+0.066389464 container create 992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:13:36 compute-0 systemd[1]: Started libpod-conmon-992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e.scope.
Nov 29 07:13:36 compute-0 podman[96328]: 2025-11-29 07:13:36.03759448 +0000 UTC m=+0.022861301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:36 compute-0 podman[96328]: 2025-11-29 07:13:36.171788837 +0000 UTC m=+0.157055648 container init 992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:13:36 compute-0 podman[96328]: 2025-11-29 07:13:36.177197824 +0000 UTC m=+0.162464625 container start 992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:36 compute-0 funny_dijkstra[96344]: 167 167
Nov 29 07:13:36 compute-0 systemd[1]: libpod-992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e.scope: Deactivated successfully.
Nov 29 07:13:36 compute-0 podman[96328]: 2025-11-29 07:13:36.183250648 +0000 UTC m=+0.168517449 container attach 992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:36 compute-0 podman[96328]: 2025-11-29 07:13:36.183651659 +0000 UTC m=+0.168918460 container died 992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-98bf1e3bc098b9abb9347ed412fcb9fea4a4897cd07e0ab61787fad8483ad002-merged.mount: Deactivated successfully.
Nov 29 07:13:36 compute-0 podman[96328]: 2025-11-29 07:13:36.284045516 +0000 UTC m=+0.269312307 container remove 992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:13:36 compute-0 systemd[1]: libpod-conmon-992004b072626258e13d1206397f57ba6e936ebc6e5555a27bef719fbda2e08e.scope: Deactivated successfully.
Nov 29 07:13:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 07:13:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1075578666' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 07:13:36 compute-0 ceph-mon[75050]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1075578666' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 07:13:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 29 07:13:36 compute-0 pedantic_galois[96040]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 07:13:36 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 29 07:13:36 compute-0 systemd[1]: libpod-9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8.scope: Deactivated successfully.
Nov 29 07:13:36 compute-0 podman[96009]: 2025-11-29 07:13:36.401336213 +0000 UTC m=+1.694468129 container died 9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8 (image=quay.io/ceph/ceph:v18, name=pedantic_galois, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c31da7b847066ef526b9041e4879927c1a2e992f4b46eb4ed74936ab7b11dd6-merged.mount: Deactivated successfully.
Nov 29 07:13:36 compute-0 podman[96369]: 2025-11-29 07:13:36.430631009 +0000 UTC m=+0.040955094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:36 compute-0 podman[96009]: 2025-11-29 07:13:36.530152743 +0000 UTC m=+1.823284629 container remove 9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8 (image=quay.io/ceph/ceph:v18, name=pedantic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:13:36 compute-0 podman[96369]: 2025-11-29 07:13:36.539129097 +0000 UTC m=+0.149453162 container create 1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brahmagupta, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:36 compute-0 sudo[95967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:36 compute-0 systemd[1]: libpod-conmon-9b226fba91cf1dfb9f45a671062036e9e448c130afb026c854d52549797519b8.scope: Deactivated successfully.
Nov 29 07:13:36 compute-0 systemd[1]: Started libpod-conmon-1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e.scope.
Nov 29 07:13:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032978b57ead9c8fbef4fa9a571dd7bd4184d479c36edbebf9a616f3dbcf658b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032978b57ead9c8fbef4fa9a571dd7bd4184d479c36edbebf9a616f3dbcf658b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032978b57ead9c8fbef4fa9a571dd7bd4184d479c36edbebf9a616f3dbcf658b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032978b57ead9c8fbef4fa9a571dd7bd4184d479c36edbebf9a616f3dbcf658b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032978b57ead9c8fbef4fa9a571dd7bd4184d479c36edbebf9a616f3dbcf658b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:36 compute-0 podman[96369]: 2025-11-29 07:13:36.645293522 +0000 UTC m=+0.255617617 container init 1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brahmagupta, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:13:36 compute-0 podman[96369]: 2025-11-29 07:13:36.652683322 +0000 UTC m=+0.263007407 container start 1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:13:36 compute-0 podman[96369]: 2025-11-29 07:13:36.675261995 +0000 UTC m=+0.285586090 container attach 1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brahmagupta, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:36 compute-0 sudo[96424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwhuioaxyshftnfjajosplvwzrijuhyu ; /usr/bin/python3'
Nov 29 07:13:36 compute-0 sudo[96424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:36 compute-0 python3[96426]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:36 compute-0 podman[96427]: 2025-11-29 07:13:36.873558153 +0000 UTC m=+0.039911965 container create 2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf (image=quay.io/ceph/ceph:v18, name=youthful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:13:36 compute-0 systemd[1]: Started libpod-conmon-2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf.scope.
Nov 29 07:13:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6e5a9163aab5c5b89ba570b81921bdc7bdf265d360b9bdbf1761b5f19141d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6e5a9163aab5c5b89ba570b81921bdc7bdf265d360b9bdbf1761b5f19141d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:36 compute-0 podman[96427]: 2025-11-29 07:13:36.947623606 +0000 UTC m=+0.113977418 container init 2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf (image=quay.io/ceph/ceph:v18, name=youthful_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:36 compute-0 podman[96427]: 2025-11-29 07:13:36.855076922 +0000 UTC m=+0.021430744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:36 compute-0 podman[96427]: 2025-11-29 07:13:36.953774223 +0000 UTC m=+0.120128035 container start 2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf (image=quay.io/ceph/ceph:v18, name=youthful_lederberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:36 compute-0 podman[96427]: 2025-11-29 07:13:36.959230401 +0000 UTC m=+0.125584223 container attach 2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf (image=quay.io/ceph/ceph:v18, name=youthful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:13:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1075578666' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 07:13:37 compute-0 ceph-mon[75050]: osdmap e26: 3 total, 3 up, 3 in
Nov 29 07:13:37 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:13:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 07:13:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2010212766' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 07:13:37 compute-0 trusting_brahmagupta[96396]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:13:37 compute-0 trusting_brahmagupta[96396]: --> relative data size: 1.0
Nov 29 07:13:37 compute-0 trusting_brahmagupta[96396]: --> All data devices are unavailable
Nov 29 07:13:37 compute-0 systemd[1]: libpod-1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e.scope: Deactivated successfully.
Nov 29 07:13:37 compute-0 podman[96369]: 2025-11-29 07:13:37.682181744 +0000 UTC m=+1.292505819 container died 1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brahmagupta, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-032978b57ead9c8fbef4fa9a571dd7bd4184d479c36edbebf9a616f3dbcf658b-merged.mount: Deactivated successfully.
Nov 29 07:13:37 compute-0 podman[96369]: 2025-11-29 07:13:37.88228783 +0000 UTC m=+1.492611895 container remove 1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:13:37 compute-0 systemd[1]: libpod-conmon-1ad6015d8c31456be78f0eaa22e22ae863348caac2b500d6ae4d17064286b51e.scope: Deactivated successfully.
Nov 29 07:13:37 compute-0 sudo[96265]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:37 compute-0 sudo[96502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:37 compute-0 sudo[96502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:37 compute-0 sudo[96502]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:38 compute-0 sudo[96527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:38 compute-0 sudo[96527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:38 compute-0 sudo[96527]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:38 compute-0 sudo[96552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:38 compute-0 sudo[96552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:38 compute-0 sudo[96552]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:38 compute-0 sudo[96577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:13:38 compute-0 sudo[96577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 07:13:38 compute-0 podman[96641]: 2025-11-29 07:13:38.463952895 +0000 UTC m=+0.033556663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:38 compute-0 ceph-mon[75050]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:13:38 compute-0 ceph-mon[75050]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:38 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2010212766' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 07:13:38 compute-0 podman[96641]: 2025-11-29 07:13:38.683070798 +0000 UTC m=+0.252674486 container create 81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2010212766' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 07:13:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 29 07:13:38 compute-0 youthful_lederberg[96443]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 07:13:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 29 07:13:38 compute-0 podman[96427]: 2025-11-29 07:13:38.732120401 +0000 UTC m=+1.898474213 container died 2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf (image=quay.io/ceph/ceph:v18, name=youthful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:38 compute-0 systemd[1]: Started libpod-conmon-81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f.scope.
Nov 29 07:13:38 compute-0 systemd[1]: libpod-2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf.scope: Deactivated successfully.
Nov 29 07:13:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d6e5a9163aab5c5b89ba570b81921bdc7bdf265d360b9bdbf1761b5f19141d2-merged.mount: Deactivated successfully.
Nov 29 07:13:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:13:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:13:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2010212766' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 07:13:39 compute-0 ceph-mon[75050]: osdmap e27: 3 total, 3 up, 3 in
Nov 29 07:13:39 compute-0 ceph-mon[75050]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:39 compute-0 podman[96641]: 2025-11-29 07:13:39.854735591 +0000 UTC m=+1.424339279 container init 81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:13:39 compute-0 podman[96641]: 2025-11-29 07:13:39.861202307 +0000 UTC m=+1.430805975 container start 81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:13:39 compute-0 clever_herschel[96658]: 167 167
Nov 29 07:13:39 compute-0 systemd[1]: libpod-81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f.scope: Deactivated successfully.
Nov 29 07:13:39 compute-0 podman[96427]: 2025-11-29 07:13:39.871820145 +0000 UTC m=+3.038173957 container remove 2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf (image=quay.io/ceph/ceph:v18, name=youthful_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:13:39 compute-0 podman[96641]: 2025-11-29 07:13:39.886580007 +0000 UTC m=+1.456183675 container attach 81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:13:39 compute-0 podman[96641]: 2025-11-29 07:13:39.887121311 +0000 UTC m=+1.456724999 container died 81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:13:39 compute-0 sudo[96424]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-429ddc409f7d22c9337e268b499691a0de6b59f98b6b2a1391cdf588801b99ff-merged.mount: Deactivated successfully.
Nov 29 07:13:39 compute-0 podman[96641]: 2025-11-29 07:13:39.957223656 +0000 UTC m=+1.526827334 container remove 81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:39 compute-0 systemd[1]: libpod-conmon-81621e44d87d64580894534d2acc691fe11e1087fa5a794422d3c4aae1c1ee0f.scope: Deactivated successfully.
Nov 29 07:13:40 compute-0 systemd[1]: libpod-conmon-2fc435fa13d6f25dfe1110e9bfa74134365b9122350ee6ca3181e3d5352a7caf.scope: Deactivated successfully.
Nov 29 07:13:40 compute-0 podman[96693]: 2025-11-29 07:13:40.108375803 +0000 UTC m=+0.038106377 container create ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:40 compute-0 systemd[1]: Started libpod-conmon-ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4.scope.
Nov 29 07:13:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a35c5cdf6665935f4faaa453d78c8ef6658183de14bdf5c81767f0a72570a07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a35c5cdf6665935f4faaa453d78c8ef6658183de14bdf5c81767f0a72570a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a35c5cdf6665935f4faaa453d78c8ef6658183de14bdf5c81767f0a72570a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a35c5cdf6665935f4faaa453d78c8ef6658183de14bdf5c81767f0a72570a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:40 compute-0 podman[96693]: 2025-11-29 07:13:40.091815613 +0000 UTC m=+0.021546217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:40 compute-0 podman[96693]: 2025-11-29 07:13:40.194239676 +0000 UTC m=+0.123970260 container init ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:40 compute-0 podman[96693]: 2025-11-29 07:13:40.200917557 +0000 UTC m=+0.130648131 container start ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:13:40 compute-0 podman[96693]: 2025-11-29 07:13:40.205665147 +0000 UTC m=+0.135395721 container attach ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:13:40 compute-0 ceph-mon[75050]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:13:40 compute-0 ceph-mon[75050]: Cluster is now healthy
Nov 29 07:13:40 compute-0 python3[96790]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]: {
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:     "0": [
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:         {
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "devices": [
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "/dev/loop3"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             ],
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_name": "ceph_lv0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_size": "21470642176",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "name": "ceph_lv0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "tags": {
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cluster_name": "ceph",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.crush_device_class": "",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.encrypted": "0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osd_id": "0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.type": "block",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.vdo": "0"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             },
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "type": "block",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "vg_name": "ceph_vg0"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:         }
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:     ],
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:     "1": [
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:         {
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "devices": [
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "/dev/loop4"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             ],
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_name": "ceph_lv1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_size": "21470642176",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "name": "ceph_lv1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "tags": {
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cluster_name": "ceph",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.crush_device_class": "",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.encrypted": "0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osd_id": "1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.type": "block",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.vdo": "0"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             },
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "type": "block",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "vg_name": "ceph_vg1"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:         }
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:     ],
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:     "2": [
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:         {
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "devices": [
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "/dev/loop5"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             ],
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_name": "ceph_lv2",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_size": "21470642176",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "name": "ceph_lv2",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "tags": {
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.cluster_name": "ceph",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.crush_device_class": "",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.encrypted": "0",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osd_id": "2",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.type": "block",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:                 "ceph.vdo": "0"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             },
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "type": "block",
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:             "vg_name": "ceph_vg2"
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:         }
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]:     ]
Nov 29 07:13:40 compute-0 heuristic_blackwell[96709]: }
Nov 29 07:13:40 compute-0 systemd[1]: libpod-ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4.scope: Deactivated successfully.
Nov 29 07:13:40 compute-0 podman[96693]: 2025-11-29 07:13:40.992219486 +0000 UTC m=+0.921950060 container died ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a35c5cdf6665935f4faaa453d78c8ef6658183de14bdf5c81767f0a72570a07-merged.mount: Deactivated successfully.
Nov 29 07:13:41 compute-0 podman[96693]: 2025-11-29 07:13:41.124711767 +0000 UTC m=+1.054442341 container remove ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:41 compute-0 systemd[1]: libpod-conmon-ddefc19b48fb8e2ee8f9941deeb531eacd5f5474cde807bc80033651f360e7b4.scope: Deactivated successfully.
Nov 29 07:13:41 compute-0 sudo[96577]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:41 compute-0 sudo[96878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:41 compute-0 sudo[96878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:41 compute-0 sudo[96878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:41 compute-0 python3[96877]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400420.637964-36599-10986674619142/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:13:41 compute-0 sudo[96903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:41 compute-0 sudo[96903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:41 compute-0 sudo[96903]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:41 compute-0 sudo[96928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:41 compute-0 sudo[96928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:41 compute-0 sudo[96928]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:41 compute-0 sudo[96976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:13:41 compute-0 sudo[96976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:41 compute-0 podman[97070]: 2025-11-29 07:13:41.684126166 +0000 UTC m=+0.019308186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:41 compute-0 sudo[97128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvlbyifhehquztpytqeudjyodxegllnm ; /usr/bin/python3'
Nov 29 07:13:41 compute-0 sudo[97128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:41 compute-0 python3[97130]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:13:41 compute-0 sudo[97128]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:42 compute-0 sudo[97203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sorgwpqgluvaylyibwfeibchepvqejos ; /usr/bin/python3'
Nov 29 07:13:42 compute-0 sudo[97203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:42 compute-0 python3[97205]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400421.6296253-36613-18407539942098/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a69fd6eace147817a3949b43f8ebf2868d4afaf3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:13:42 compute-0 sudo[97203]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:42 compute-0 sudo[97253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvxhbnxbiffkegjdnsatsxgklbizcyow ; /usr/bin/python3'
Nov 29 07:13:42 compute-0 sudo[97253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:42 compute-0 python3[97255]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:43 compute-0 podman[97070]: 2025-11-29 07:13:43.971901494 +0000 UTC m=+2.307083494 container create f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:44 compute-0 systemd[1]: Started libpod-conmon-f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc.scope.
Nov 29 07:13:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:44 compute-0 ceph-mon[75050]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:44 compute-0 podman[97256]: 2025-11-29 07:13:44.092165072 +0000 UTC m=+1.160848730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:44 compute-0 podman[97256]: 2025-11-29 07:13:44.260325901 +0000 UTC m=+1.329009519 container create 948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54 (image=quay.io/ceph/ceph:v18, name=vibrant_noyce, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 29 07:13:45 compute-0 systemd[1]: Started libpod-conmon-948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54.scope.
Nov 29 07:13:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/161cdf523299a99e686b441a9571a5526de1fffde513e5de0e3ff1074975c9c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/161cdf523299a99e686b441a9571a5526de1fffde513e5de0e3ff1074975c9c2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/161cdf523299a99e686b441a9571a5526de1fffde513e5de0e3ff1074975c9c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:46 compute-0 podman[97256]: 2025-11-29 07:13:46.283854829 +0000 UTC m=+3.352538477 container init 948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54 (image=quay.io/ceph/ceph:v18, name=vibrant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:13:46 compute-0 podman[97256]: 2025-11-29 07:13:46.292992798 +0000 UTC m=+3.361676416 container start 948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54 (image=quay.io/ceph/ceph:v18, name=vibrant_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:13:46 compute-0 podman[97256]: 2025-11-29 07:13:46.543888855 +0000 UTC m=+3.612572503 container attach 948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54 (image=quay.io/ceph/ceph:v18, name=vibrant_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:46 compute-0 ceph-mon[75050]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:46 compute-0 podman[97070]: 2025-11-29 07:13:46.906356424 +0000 UTC m=+5.241538454 container init f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:46 compute-0 podman[97070]: 2025-11-29 07:13:46.915605215 +0000 UTC m=+5.250787215 container start f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:46 compute-0 jolly_dewdney[97273]: 167 167
Nov 29 07:13:46 compute-0 systemd[1]: libpod-f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc.scope: Deactivated successfully.
Nov 29 07:13:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 07:13:47 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/551279366' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:13:47 compute-0 podman[97070]: 2025-11-29 07:13:47.114462568 +0000 UTC m=+5.449644598 container attach f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 29 07:13:47 compute-0 podman[97070]: 2025-11-29 07:13:47.115400123 +0000 UTC m=+5.450582133 container died f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:13:47 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/551279366' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:13:47 compute-0 vibrant_noyce[97279]: 
Nov 29 07:13:47 compute-0 vibrant_noyce[97279]: [global]
Nov 29 07:13:47 compute-0 vibrant_noyce[97279]:         fsid = 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:13:47 compute-0 vibrant_noyce[97279]:         mon_host = 192.168.122.100
Nov 29 07:13:47 compute-0 systemd[1]: libpod-948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54.scope: Deactivated successfully.
Nov 29 07:13:47 compute-0 podman[97256]: 2025-11-29 07:13:47.20802955 +0000 UTC m=+4.276713168 container died 948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54 (image=quay.io/ceph/ceph:v18, name=vibrant_noyce, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-04bedab03ae119f75981cde174a2ae60b37bb0f0398f86d9f289978c3100a195-merged.mount: Deactivated successfully.
Nov 29 07:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-161cdf523299a99e686b441a9571a5526de1fffde513e5de0e3ff1074975c9c2-merged.mount: Deactivated successfully.
Nov 29 07:13:47 compute-0 podman[97256]: 2025-11-29 07:13:47.293572365 +0000 UTC m=+4.362255983 container remove 948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54 (image=quay.io/ceph/ceph:v18, name=vibrant_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:13:47 compute-0 systemd[1]: libpod-conmon-948b18e32485f34ddecdefacd65f494a1e1fc0340af52d628a3ed28065b35a54.scope: Deactivated successfully.
Nov 29 07:13:47 compute-0 sudo[97253]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:47 compute-0 podman[97070]: 2025-11-29 07:13:47.33169628 +0000 UTC m=+5.666878280 container remove f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:13:47 compute-0 systemd[1]: libpod-conmon-f78ef7230f6e50a22bea96a8935aadb4cd78913f0ad69e0927df168beda740fc.scope: Deactivated successfully.
Nov 29 07:13:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:47 compute-0 sudo[97364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heguqjpcamjiqoxdopamsupujqkrhatb ; /usr/bin/python3'
Nov 29 07:13:47 compute-0 sudo[97364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:47 compute-0 podman[97362]: 2025-11-29 07:13:47.514018324 +0000 UTC m=+0.058824490 container create 7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:13:47 compute-0 systemd[1]: Started libpod-conmon-7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228.scope.
Nov 29 07:13:47 compute-0 podman[97362]: 2025-11-29 07:13:47.482766505 +0000 UTC m=+0.027572651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78237ff9d4135408068e11099e0bdb42636ba8d5db26ee818d5303856076ee43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:47 compute-0 python3[97371]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78237ff9d4135408068e11099e0bdb42636ba8d5db26ee818d5303856076ee43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78237ff9d4135408068e11099e0bdb42636ba8d5db26ee818d5303856076ee43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78237ff9d4135408068e11099e0bdb42636ba8d5db26ee818d5303856076ee43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:47 compute-0 podman[97362]: 2025-11-29 07:13:47.625331248 +0000 UTC m=+0.170137384 container init 7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:13:47 compute-0 podman[97362]: 2025-11-29 07:13:47.632795831 +0000 UTC m=+0.177601957 container start 7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:47 compute-0 podman[97362]: 2025-11-29 07:13:47.636109411 +0000 UTC m=+0.180915557 container attach 7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:47 compute-0 ceph-mon[75050]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/551279366' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 07:13:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/551279366' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 07:13:47 compute-0 ceph-mon[75050]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:47 compute-0 podman[97385]: 2025-11-29 07:13:47.671990376 +0000 UTC m=+0.036462811 container create 95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b (image=quay.io/ceph/ceph:v18, name=competent_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:13:47 compute-0 systemd[1]: Started libpod-conmon-95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b.scope.
Nov 29 07:13:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba797a659e34b9a77daf7b9750557ac7eecd306329519620ea66777f707e968e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba797a659e34b9a77daf7b9750557ac7eecd306329519620ea66777f707e968e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba797a659e34b9a77daf7b9750557ac7eecd306329519620ea66777f707e968e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:47 compute-0 podman[97385]: 2025-11-29 07:13:47.733569419 +0000 UTC m=+0.098041854 container init 95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b (image=quay.io/ceph/ceph:v18, name=competent_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:13:47 compute-0 podman[97385]: 2025-11-29 07:13:47.743752065 +0000 UTC m=+0.108224470 container start 95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b (image=quay.io/ceph/ceph:v18, name=competent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:13:47 compute-0 podman[97385]: 2025-11-29 07:13:47.74795533 +0000 UTC m=+0.112427755 container attach 95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b (image=quay.io/ceph/ceph:v18, name=competent_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:13:47 compute-0 podman[97385]: 2025-11-29 07:13:47.656992458 +0000 UTC m=+0.021464883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 07:13:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3429597650' entity='client.admin' 
Nov 29 07:13:48 compute-0 competent_euler[97400]: set ssl_option
Nov 29 07:13:48 compute-0 systemd[1]: libpod-95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b.scope: Deactivated successfully.
Nov 29 07:13:48 compute-0 conmon[97400]: conmon 95e5598575ef5ab7359f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b.scope/container/memory.events
Nov 29 07:13:48 compute-0 podman[97385]: 2025-11-29 07:13:48.482274731 +0000 UTC m=+0.846747136 container died 95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b (image=quay.io/ceph/ceph:v18, name=competent_euler, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba797a659e34b9a77daf7b9750557ac7eecd306329519620ea66777f707e968e-merged.mount: Deactivated successfully.
Nov 29 07:13:48 compute-0 podman[97385]: 2025-11-29 07:13:48.533308448 +0000 UTC m=+0.897780853 container remove 95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b (image=quay.io/ceph/ceph:v18, name=competent_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:13:48 compute-0 systemd[1]: libpod-conmon-95e5598575ef5ab7359fb9875863bfac35d50bbbbb026604071e8e5cffa91a0b.scope: Deactivated successfully.
Nov 29 07:13:48 compute-0 sudo[97364]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:48 compute-0 sudo[97482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owwcsnklyalqpxnsfvbdftfyifqtaxxz ; /usr/bin/python3'
Nov 29 07:13:48 compute-0 sudo[97482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]: {
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "osd_id": 2,
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "type": "bluestore"
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:     },
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "osd_id": 1,
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "type": "bluestore"
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:     },
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "osd_id": 0,
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:         "type": "bluestore"
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]:     }
Nov 29 07:13:48 compute-0 elastic_mclaren[97382]: }
Nov 29 07:13:48 compute-0 systemd[1]: libpod-7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228.scope: Deactivated successfully.
Nov 29 07:13:48 compute-0 systemd[1]: libpod-7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228.scope: Consumed 1.179s CPU time.
Nov 29 07:13:48 compute-0 podman[97362]: 2025-11-29 07:13:48.814520459 +0000 UTC m=+1.359326595 container died 7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-78237ff9d4135408068e11099e0bdb42636ba8d5db26ee818d5303856076ee43-merged.mount: Deactivated successfully.
Nov 29 07:13:48 compute-0 python3[97484]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:48 compute-0 podman[97362]: 2025-11-29 07:13:48.876184115 +0000 UTC m=+1.420990241 container remove 7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:13:48 compute-0 systemd[1]: libpod-conmon-7d50f090dc0918b192aa4c85cd67a605bfe7791fb61a6e2ce6721ce21bc72228.scope: Deactivated successfully.
Nov 29 07:13:48 compute-0 sudo[96976]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:48 compute-0 podman[97502]: 2025-11-29 07:13:48.925072082 +0000 UTC m=+0.044868960 container create 43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344 (image=quay.io/ceph/ceph:v18, name=festive_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:13:48 compute-0 systemd[1]: Started libpod-conmon-43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344.scope.
Nov 29 07:13:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:13:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:48 compute-0 sudo[97515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:48 compute-0 sudo[97515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a35275b6b2c5a7819e9061639c56d7e4698edc063d703b855264b1ff5089d7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a35275b6b2c5a7819e9061639c56d7e4698edc063d703b855264b1ff5089d7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a35275b6b2c5a7819e9061639c56d7e4698edc063d703b855264b1ff5089d7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:48 compute-0 sudo[97515]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:48 compute-0 podman[97502]: 2025-11-29 07:13:48.997347566 +0000 UTC m=+0.117144474 container init 43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344 (image=quay.io/ceph/ceph:v18, name=festive_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:48 compute-0 podman[97502]: 2025-11-29 07:13:48.904699149 +0000 UTC m=+0.024496027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:49 compute-0 podman[97502]: 2025-11-29 07:13:49.005920339 +0000 UTC m=+0.125717217 container start 43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344 (image=quay.io/ceph/ceph:v18, name=festive_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:49 compute-0 podman[97502]: 2025-11-29 07:13:49.009387713 +0000 UTC m=+0.129184601 container attach 43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344 (image=quay.io/ceph/ceph:v18, name=festive_pike, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:13:49 compute-0 sudo[97545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:13:49 compute-0 sudo[97545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:49 compute-0 sudo[97545]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:49 compute-0 sudo[97571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:49 compute-0 sudo[97571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:49 compute-0 sudo[97571]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:49 compute-0 sudo[97596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:49 compute-0 sudo[97596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:49 compute-0 sudo[97596]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:49 compute-0 sudo[97621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:49 compute-0 sudo[97621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:49 compute-0 sudo[97621]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:49 compute-0 sudo[97646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:13:49 compute-0 sudo[97646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3429597650' entity='client.admin' 
Nov 29 07:13:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:49 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:13:49 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 29 07:13:49 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 07:13:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:13:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:49 compute-0 festive_pike[97538]: Scheduled rgw.rgw update...
Nov 29 07:13:49 compute-0 systemd[1]: libpod-43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344.scope: Deactivated successfully.
Nov 29 07:13:49 compute-0 podman[97502]: 2025-11-29 07:13:49.700487471 +0000 UTC m=+0.820284349 container died 43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344 (image=quay.io/ceph/ceph:v18, name=festive_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a35275b6b2c5a7819e9061639c56d7e4698edc063d703b855264b1ff5089d7b-merged.mount: Deactivated successfully.
Nov 29 07:13:49 compute-0 podman[97502]: 2025-11-29 07:13:49.743348115 +0000 UTC m=+0.863144983 container remove 43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344 (image=quay.io/ceph/ceph:v18, name=festive_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:13:49 compute-0 systemd[1]: libpod-conmon-43bd3b7f8bcf1c99fc9b9332be0ed11e95cee5956c22d21fc8fefded93679344.scope: Deactivated successfully.
Nov 29 07:13:49 compute-0 sudo[97482]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:50 compute-0 podman[97771]: 2025-11-29 07:13:50.086286793 +0000 UTC m=+0.361123463 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:13:50 compute-0 podman[97771]: 2025-11-29 07:13:50.182873837 +0000 UTC m=+0.457710497 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:50 compute-0 ceph-mon[75050]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:50 compute-0 sudo[97646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0ada0fca-f5c3-446a-bc94-6d6935252f78 does not exist
Nov 29 07:13:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a00d73ff-56f2-4951-ae81-2dbf4c6dea49 does not exist
Nov 29 07:13:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 8c883768-22c0-4b0d-929d-30ba8f336c2b does not exist
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:50 compute-0 sudo[97923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:50 compute-0 sudo[97923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:50 compute-0 sudo[97923]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:50 compute-0 sudo[97976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:50 compute-0 sudo[97976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:50 compute-0 sudo[97976]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:50 compute-0 sudo[98025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:50 compute-0 sudo[98025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:50 compute-0 sudo[98025]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:50 compute-0 sudo[98050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:13:50 compute-0 sudo[98050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:50 compute-0 python3[98023]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:13:51 compute-0 podman[98186]: 2025-11-29 07:13:51.188793978 +0000 UTC m=+0.057173724 container create 7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:13:51 compute-0 systemd[1]: Started libpod-conmon-7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede.scope.
Nov 29 07:13:51 compute-0 python3[98178]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400430.6387837-36654-34734015849769/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:13:51 compute-0 podman[98186]: 2025-11-29 07:13:51.155591555 +0000 UTC m=+0.023971321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:51 compute-0 podman[98186]: 2025-11-29 07:13:51.294644904 +0000 UTC m=+0.163024660 container init 7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swartz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:13:51 compute-0 podman[98186]: 2025-11-29 07:13:51.301166921 +0000 UTC m=+0.169546667 container start 7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swartz, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:13:51 compute-0 gifted_swartz[98203]: 167 167
Nov 29 07:13:51 compute-0 systemd[1]: libpod-7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede.scope: Deactivated successfully.
Nov 29 07:13:51 compute-0 podman[98186]: 2025-11-29 07:13:51.307719309 +0000 UTC m=+0.176099055 container attach 7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:51 compute-0 conmon[98203]: conmon 7053f507b08d2e65972a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede.scope/container/memory.events
Nov 29 07:13:51 compute-0 podman[98186]: 2025-11-29 07:13:51.308927392 +0000 UTC m=+0.177307158 container died 7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c166a069886d65d9bcd1db460c0144c1ceeb743f9267b7e2ff98f7787789a9f1-merged.mount: Deactivated successfully.
Nov 29 07:13:51 compute-0 podman[98186]: 2025-11-29 07:13:51.356106494 +0000 UTC m=+0.224486240 container remove 7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:13:51 compute-0 systemd[1]: libpod-conmon-7053f507b08d2e65972a0951e57c5a4bcb63e41e1d7035c1e8bf7fd11168cede.scope: Deactivated successfully.
Nov 29 07:13:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:13:51 compute-0 ceph-mon[75050]: Saving service rgw.rgw spec with placement compute-0
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:13:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:13:51 compute-0 podman[98250]: 2025-11-29 07:13:51.508939727 +0000 UTC m=+0.044096710 container create 790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:13:51 compute-0 systemd[1]: Started libpod-conmon-790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4.scope.
Nov 29 07:13:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2b0881f7849c318f66e8521b09615ba737ceaad83236c678d5712c40df968b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2b0881f7849c318f66e8521b09615ba737ceaad83236c678d5712c40df968b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2b0881f7849c318f66e8521b09615ba737ceaad83236c678d5712c40df968b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2b0881f7849c318f66e8521b09615ba737ceaad83236c678d5712c40df968b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc2b0881f7849c318f66e8521b09615ba737ceaad83236c678d5712c40df968b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 podman[98250]: 2025-11-29 07:13:51.490387322 +0000 UTC m=+0.025544335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:13:51 compute-0 podman[98250]: 2025-11-29 07:13:51.595365184 +0000 UTC m=+0.130522187 container init 790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:13:51 compute-0 podman[98250]: 2025-11-29 07:13:51.602080177 +0000 UTC m=+0.137237160 container start 790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:13:51 compute-0 podman[98250]: 2025-11-29 07:13:51.610560408 +0000 UTC m=+0.145717391 container attach 790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:13:51 compute-0 sudo[98294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejpqxtjcbjoktijbhjgkrtnplzwqsgj ; /usr/bin/python3'
Nov 29 07:13:51 compute-0 sudo[98294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:13:51 compute-0 python3[98296]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:13:51 compute-0 podman[98297]: 2025-11-29 07:13:51.827628305 +0000 UTC m=+0.048588341 container create deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962 (image=quay.io/ceph/ceph:v18, name=admiring_ramanujan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:13:51 compute-0 systemd[1]: Started libpod-conmon-deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962.scope.
Nov 29 07:13:51 compute-0 podman[98297]: 2025-11-29 07:13:51.799940722 +0000 UTC m=+0.020900788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:13:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b6d8fd036c64eeecfb33d1e08af6537155555e7df31b842bfb239ea0ac6d2d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b6d8fd036c64eeecfb33d1e08af6537155555e7df31b842bfb239ea0ac6d2d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b6d8fd036c64eeecfb33d1e08af6537155555e7df31b842bfb239ea0ac6d2d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:13:51 compute-0 podman[98297]: 2025-11-29 07:13:51.925306719 +0000 UTC m=+0.146266775 container init deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962 (image=quay.io/ceph/ceph:v18, name=admiring_ramanujan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:13:51 compute-0 podman[98297]: 2025-11-29 07:13:51.931216569 +0000 UTC m=+0.152176605 container start deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962 (image=quay.io/ceph/ceph:v18, name=admiring_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:13:51 compute-0 podman[98297]: 2025-11-29 07:13:51.935355772 +0000 UTC m=+0.156315828 container attach deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962 (image=quay.io/ceph/ceph:v18, name=admiring_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:13:52 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:13:52 compute-0 ceph-mgr[75345]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 07:13:52 compute-0 stoic_montalcini[98266]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:13:52 compute-0 stoic_montalcini[98266]: --> relative data size: 1.0
Nov 29 07:13:52 compute-0 stoic_montalcini[98266]: --> All data devices are unavailable
Nov 29 07:13:52 compute-0 systemd[1]: libpod-790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4.scope: Deactivated successfully.
Nov 29 07:13:52 compute-0 podman[98250]: 2025-11-29 07:13:52.661549533 +0000 UTC m=+1.196706516 container died 790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:13:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 07:13:52 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 07:13:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 07:13:52 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:13:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 07:13:52 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:13:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 07:13:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 07:13:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 07:13:52 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0[75046]: 2025-11-29T07:13:52.989+0000 7f27a9d35640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 07:13:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:53 compute-0 ceph-mon[75050]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 07:13:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e2 new map
Nov 29 07:13:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:13:52.990561+0000
                                           modified        2025-11-29T07:13:52.990604+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 29 07:13:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 29 07:13:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 29 07:13:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 29 07:13:58 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 07:13:58 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 07:13:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc2b0881f7849c318f66e8521b09615ba737ceaad83236c678d5712c40df968b-merged.mount: Deactivated successfully.
Nov 29 07:13:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:13:58 compute-0 ceph-mgr[75345]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 07:13:58 compute-0 systemd[1]: libpod-deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962.scope: Deactivated successfully.
Nov 29 07:13:58 compute-0 ceph-mon[75050]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:13:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 07:13:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:13:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 07:13:58 compute-0 ceph-mon[75050]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 07:13:58 compute-0 ceph-mon[75050]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 07:13:58 compute-0 ceph-mon[75050]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:58 compute-0 ceph-mon[75050]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:58 compute-0 ceph-mon[75050]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:13:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 07:13:58 compute-0 ceph-mon[75050]: osdmap e28: 3 total, 3 up, 3 in
Nov 29 07:13:58 compute-0 ceph-mon[75050]: fsmap cephfs:0
Nov 29 07:13:59 compute-0 podman[98250]: 2025-11-29 07:13:59.047650095 +0000 UTC m=+7.582807088 container remove 790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:13:59 compute-0 systemd[1]: libpod-conmon-790eeb185f784bfdf6318c8ddb63d9ec94d2ab5b04e274396c7e32cae24cd0d4.scope: Deactivated successfully.
Nov 29 07:13:59 compute-0 sudo[98050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:59 compute-0 podman[98297]: 2025-11-29 07:13:59.109707551 +0000 UTC m=+7.330667587 container died deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962 (image=quay.io/ceph/ceph:v18, name=admiring_ramanujan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:13:59 compute-0 sudo[98386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:59 compute-0 sudo[98386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:59 compute-0 sudo[98386]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:59 compute-0 sudo[98411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:13:59 compute-0 sudo[98411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:59 compute-0 sudo[98411]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:59 compute-0 sudo[98436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:13:59 compute-0 sudo[98436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:59 compute-0 sudo[98436]: pam_unix(sudo:session): session closed for user root
Nov 29 07:13:59 compute-0 sudo[98461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:13:59 compute-0 sudo[98461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:13:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:01 compute-0 ceph-mon[75050]: Saving service mds.cephfs spec with placement compute-0
Nov 29 07:14:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-50b6d8fd036c64eeecfb33d1e08af6537155555e7df31b842bfb239ea0ac6d2d-merged.mount: Deactivated successfully.
Nov 29 07:14:03 compute-0 podman[98297]: 2025-11-29 07:14:03.02688475 +0000 UTC m=+11.247844786 container remove deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962 (image=quay.io/ceph/ceph:v18, name=admiring_ramanujan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:03 compute-0 sudo[98294]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:03 compute-0 systemd[1]: libpod-conmon-deccae8b30c1db40b60325613e89837b013ee8e26feec9976519dc7cb0e64962.scope: Deactivated successfully.
Nov 29 07:14:03 compute-0 sudo[98551]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdxdebihrkpdrqnykuyxclxkbohcqjvo ; /usr/bin/python3'
Nov 29 07:14:03 compute-0 sudo[98551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:03 compute-0 podman[98549]: 2025-11-29 07:14:03.256377676 +0000 UTC m=+0.025760771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:03 compute-0 python3[98562]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:03 compute-0 ceph-mon[75050]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:03 compute-0 ceph-mon[75050]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:03 compute-0 podman[98549]: 2025-11-29 07:14:03.919284457 +0000 UTC m=+0.688667552 container create b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:04 compute-0 systemd[1]: Started libpod-conmon-b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5.scope.
Nov 29 07:14:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:04 compute-0 podman[98566]: 2025-11-29 07:14:04.002484847 +0000 UTC m=+0.454312194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:04 compute-0 podman[98566]: 2025-11-29 07:14:04.378813453 +0000 UTC m=+0.830640800 container create c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8 (image=quay.io/ceph/ceph:v18, name=boring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:14:04 compute-0 systemd[1]: Started libpod-conmon-c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8.scope.
Nov 29 07:14:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e29ba6cddb773bc8b0fa1e2af3a2fcacf01bc22690d649c49d4f11edcbc912c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e29ba6cddb773bc8b0fa1e2af3a2fcacf01bc22690d649c49d4f11edcbc912c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e29ba6cddb773bc8b0fa1e2af3a2fcacf01bc22690d649c49d4f11edcbc912c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:04 compute-0 podman[98566]: 2025-11-29 07:14:04.610642381 +0000 UTC m=+1.062469708 container init c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8 (image=quay.io/ceph/ceph:v18, name=boring_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:14:04 compute-0 podman[98566]: 2025-11-29 07:14:04.622376631 +0000 UTC m=+1.074203958 container start c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8 (image=quay.io/ceph/ceph:v18, name=boring_perlman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:04 compute-0 podman[98566]: 2025-11-29 07:14:04.627573211 +0000 UTC m=+1.079400658 container attach c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8 (image=quay.io/ceph/ceph:v18, name=boring_perlman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:14:04 compute-0 podman[98549]: 2025-11-29 07:14:04.813863323 +0000 UTC m=+1.583246418 container init b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:14:04 compute-0 podman[98549]: 2025-11-29 07:14:04.819172307 +0000 UTC m=+1.588555402 container start b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:14:04 compute-0 cool_fermat[98581]: 167 167
Nov 29 07:14:04 compute-0 systemd[1]: libpod-b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5.scope: Deactivated successfully.
Nov 29 07:14:04 compute-0 podman[98549]: 2025-11-29 07:14:04.960187208 +0000 UTC m=+1.729570283 container attach b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:14:04 compute-0 podman[98549]: 2025-11-29 07:14:04.96132097 +0000 UTC m=+1.730704045 container died b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:05 compute-0 ceph-mon[75050]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-52df9a0678cc2ed4d10ab0fbf27f0435f60c0488bc469c113ce176ae21fb2275-merged.mount: Deactivated successfully.
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 07:14:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:14:05 compute-0 podman[98549]: 2025-11-29 07:14:05.270798638 +0000 UTC m=+2.040181713 container remove b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermat, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:05 compute-0 systemd[1]: libpod-conmon-b175c7a38f970ac10653665e572e7620f4145822782ea1fbdbc6164fb3e1ace5.scope: Deactivated successfully.
Nov 29 07:14:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:05 compute-0 boring_perlman[98586]: Scheduled mds.cephfs update...
Nov 29 07:14:05 compute-0 systemd[1]: libpod-c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8.scope: Deactivated successfully.
Nov 29 07:14:05 compute-0 podman[98566]: 2025-11-29 07:14:05.366731114 +0000 UTC m=+1.818558441 container died c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8 (image=quay.io/ceph/ceph:v18, name=boring_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:14:05
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['vms', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e29ba6cddb773bc8b0fa1e2af3a2fcacf01bc22690d649c49d4f11edcbc912c-merged.mount: Deactivated successfully.
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:14:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:14:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:14:05 compute-0 podman[98638]: 2025-11-29 07:14:05.449277607 +0000 UTC m=+0.056474095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:05 compute-0 podman[98566]: 2025-11-29 07:14:05.665625745 +0000 UTC m=+2.117453072 container remove c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8 (image=quay.io/ceph/ceph:v18, name=boring_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:05 compute-0 sudo[98551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:05 compute-0 podman[98638]: 2025-11-29 07:14:05.722309485 +0000 UTC m=+0.329506183 container create cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:05 compute-0 systemd[1]: libpod-conmon-c8720493bff1668bbf3defbe8d8f51846b049d57304d7564d9347c09c5843fa8.scope: Deactivated successfully.
Nov 29 07:14:05 compute-0 systemd[1]: Started libpod-conmon-cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d.scope.
Nov 29 07:14:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8467f9d73565c6b77ddf9df78c58d10b4d60f00f0fb75a39394fc7d454358c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8467f9d73565c6b77ddf9df78c58d10b4d60f00f0fb75a39394fc7d454358c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8467f9d73565c6b77ddf9df78c58d10b4d60f00f0fb75a39394fc7d454358c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8467f9d73565c6b77ddf9df78c58d10b4d60f00f0fb75a39394fc7d454358c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:05 compute-0 podman[98638]: 2025-11-29 07:14:05.891633656 +0000 UTC m=+0.498830144 container init cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:05 compute-0 podman[98638]: 2025-11-29 07:14:05.905275927 +0000 UTC m=+0.512472395 container start cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:14:05 compute-0 podman[98638]: 2025-11-29 07:14:05.909922082 +0000 UTC m=+0.517118570 container attach cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:14:06 compute-0 sudo[98740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqobzqgmxbnqgqkyawvuebitxbbkgbmp ; /usr/bin/python3'
Nov 29 07:14:06 compute-0 sudo[98740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 07:14:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 29 07:14:06 compute-0 ceph-mon[75050]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 07:14:06 compute-0 ceph-mon[75050]: Saving service mds.cephfs spec with placement compute-0
Nov 29 07:14:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:06 compute-0 ceph-mon[75050]: pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 29 07:14:06 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev a58cbc1f-cdba-4ab8-bdb1-c47dfd50ce00 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 07:14:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:14:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:06 compute-0 python3[98742]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 07:14:06 compute-0 sudo[98740]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:06 compute-0 sudo[98815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkijxztfhpycqbcwwcrdhjgmkalziutf ; /usr/bin/python3'
Nov 29 07:14:06 compute-0 sudo[98815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:06 compute-0 elastic_jemison[98660]: {
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:     "0": [
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:         {
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "devices": [
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "/dev/loop3"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             ],
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_name": "ceph_lv0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_size": "21470642176",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "name": "ceph_lv0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "tags": {
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.crush_device_class": "",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.encrypted": "0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osd_id": "0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.type": "block",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.vdo": "0"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             },
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "type": "block",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "vg_name": "ceph_vg0"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:         }
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:     ],
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:     "1": [
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:         {
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "devices": [
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "/dev/loop4"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             ],
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_name": "ceph_lv1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_size": "21470642176",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "name": "ceph_lv1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "tags": {
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.crush_device_class": "",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.encrypted": "0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osd_id": "1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.type": "block",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.vdo": "0"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             },
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "type": "block",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "vg_name": "ceph_vg1"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:         }
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:     ],
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:     "2": [
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:         {
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "devices": [
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "/dev/loop5"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             ],
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_name": "ceph_lv2",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_size": "21470642176",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "name": "ceph_lv2",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "tags": {
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.crush_device_class": "",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.encrypted": "0",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osd_id": "2",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.type": "block",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:                 "ceph.vdo": "0"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             },
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "type": "block",
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:             "vg_name": "ceph_vg2"
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:         }
Nov 29 07:14:06 compute-0 elastic_jemison[98660]:     ]
Nov 29 07:14:06 compute-0 elastic_jemison[98660]: }
Nov 29 07:14:06 compute-0 systemd[1]: libpod-cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d.scope: Deactivated successfully.
Nov 29 07:14:06 compute-0 podman[98638]: 2025-11-29 07:14:06.767591055 +0000 UTC m=+1.374787533 container died cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea8467f9d73565c6b77ddf9df78c58d10b4d60f00f0fb75a39394fc7d454358c-merged.mount: Deactivated successfully.
Nov 29 07:14:06 compute-0 python3[98817]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400446.1063204-36684-25388379150858/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=a78ba328f966a508b10a905b8c648b006cefb08a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:14:06 compute-0 podman[98638]: 2025-11-29 07:14:06.83031537 +0000 UTC m=+1.437511838 container remove cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:14:06 compute-0 sudo[98815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:06 compute-0 systemd[1]: libpod-conmon-cdd3436495199fc8f9cf9b59c06294eb5da7faae289ccbdabf823e71f7091f9d.scope: Deactivated successfully.
Nov 29 07:14:06 compute-0 sudo[98461]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:06 compute-0 sudo[98836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:06 compute-0 sudo[98836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:06 compute-0 sudo[98836]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:06 compute-0 sudo[98883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:06 compute-0 sudo[98883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:06 compute-0 sudo[98883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:07 compute-0 sudo[98908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:07 compute-0 sudo[98908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:07 compute-0 sudo[98908]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:07 compute-0 sudo[98933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:14:07 compute-0 sudo[98933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:07 compute-0 sudo[98981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfdelphxyumzqbtkukeytidgpxatneby ; /usr/bin/python3'
Nov 29 07:14:07 compute-0 sudo[98981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:07 compute-0 python3[98983]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 07:14:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 29 07:14:07 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 29 07:14:07 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 39fb0d9f-482d-4875-a97b-049c5684866f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 07:14:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:14:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:07 compute-0 ceph-mon[75050]: osdmap e29: 3 total, 3 up, 3 in
Nov 29 07:14:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:07 compute-0 podman[99022]: 2025-11-29 07:14:07.428307907 +0000 UTC m=+0.047411389 container create 174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a (image=quay.io/ceph/ceph:v18, name=distracted_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:14:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:07 compute-0 podman[99023]: 2025-11-29 07:14:07.450669065 +0000 UTC m=+0.059824996 container create 0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_napier, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:14:07 compute-0 systemd[1]: Started libpod-conmon-174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a.scope.
Nov 29 07:14:07 compute-0 systemd[1]: Started libpod-conmon-0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21.scope.
Nov 29 07:14:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e342733389f276f9544bf981c2ca45c6ee00e8aaf95f2e54b80db89908827e97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e342733389f276f9544bf981c2ca45c6ee00e8aaf95f2e54b80db89908827e97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:07 compute-0 podman[99022]: 2025-11-29 07:14:07.408879089 +0000 UTC m=+0.027982591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:07 compute-0 podman[99022]: 2025-11-29 07:14:07.508154857 +0000 UTC m=+0.127258439 container init 174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a (image=quay.io/ceph/ceph:v18, name=distracted_ramanujan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:07 compute-0 podman[99023]: 2025-11-29 07:14:07.418623004 +0000 UTC m=+0.027778955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:07 compute-0 podman[99023]: 2025-11-29 07:14:07.514296374 +0000 UTC m=+0.123452335 container init 0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:07 compute-0 podman[99022]: 2025-11-29 07:14:07.515894297 +0000 UTC m=+0.134997779 container start 174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a (image=quay.io/ceph/ceph:v18, name=distracted_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:14:07 compute-0 podman[99022]: 2025-11-29 07:14:07.520100841 +0000 UTC m=+0.139204323 container attach 174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a (image=quay.io/ceph/ceph:v18, name=distracted_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:07 compute-0 podman[99023]: 2025-11-29 07:14:07.520717748 +0000 UTC m=+0.129873679 container start 0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:07 compute-0 lucid_napier[99055]: 167 167
Nov 29 07:14:07 compute-0 podman[99023]: 2025-11-29 07:14:07.524348156 +0000 UTC m=+0.133504117 container attach 0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_napier, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:07 compute-0 systemd[1]: libpod-0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21.scope: Deactivated successfully.
Nov 29 07:14:07 compute-0 podman[99023]: 2025-11-29 07:14:07.525363094 +0000 UTC m=+0.134519035 container died 0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_napier, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-951d8aaf054fcc1fd72f5a41ca0fe6b0dd6e9f8bcc07e8dd9ad5ce0dd2cf8d58-merged.mount: Deactivated successfully.
Nov 29 07:14:07 compute-0 podman[99023]: 2025-11-29 07:14:07.568165617 +0000 UTC m=+0.177321548 container remove 0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:07 compute-0 systemd[1]: libpod-conmon-0395f49d912f28fcffbbe81767a4b566c782a7db11e70e5c97a6e1d611315b21.scope: Deactivated successfully.
Nov 29 07:14:07 compute-0 podman[99080]: 2025-11-29 07:14:07.725209504 +0000 UTC m=+0.048343004 container create 4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:14:07 compute-0 systemd[1]: Started libpod-conmon-4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40.scope.
Nov 29 07:14:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73dc5237fe9dd11a5f85e148cd4449552ab645611c84b5188997671167db0a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73dc5237fe9dd11a5f85e148cd4449552ab645611c84b5188997671167db0a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73dc5237fe9dd11a5f85e148cd4449552ab645611c84b5188997671167db0a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73dc5237fe9dd11a5f85e148cd4449552ab645611c84b5188997671167db0a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:07 compute-0 podman[99080]: 2025-11-29 07:14:07.703166325 +0000 UTC m=+0.026299775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:08 compute-0 podman[99080]: 2025-11-29 07:14:08.001058809 +0000 UTC m=+0.324192249 container init 4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elion, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:14:08 compute-0 podman[99080]: 2025-11-29 07:14:08.012328145 +0000 UTC m=+0.335461565 container start 4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:08 compute-0 podman[99080]: 2025-11-29 07:14:08.116985629 +0000 UTC m=+0.440119039 container attach 4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elion, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:14:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 07:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2407125646' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 07:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2407125646' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 07:14:08 compute-0 systemd[1]: libpod-174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a.scope: Deactivated successfully.
Nov 29 07:14:08 compute-0 podman[99022]: 2025-11-29 07:14:08.229217738 +0000 UTC m=+0.848321250 container died 174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a (image=quay.io/ceph/ceph:v18, name=distracted_ramanujan, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e342733389f276f9544bf981c2ca45c6ee00e8aaf95f2e54b80db89908827e97-merged.mount: Deactivated successfully.
Nov 29 07:14:08 compute-0 podman[99022]: 2025-11-29 07:14:08.286484534 +0000 UTC m=+0.905588016 container remove 174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a (image=quay.io/ceph/ceph:v18, name=distracted_ramanujan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:14:08 compute-0 systemd[1]: libpod-conmon-174d0826844930e9e3caf7ebcc77a86e5fda36c9f6f7a8272f9fb1c1ad0d7d0a.scope: Deactivated successfully.
Nov 29 07:14:08 compute-0 sudo[98981]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 07:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 29 07:14:08 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 29 07:14:08 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 52b9f2c5-884e-44b0-b086-4266d8ae85aa (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 07:14:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: osdmap e30: 3 total, 3 up, 3 in
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:08 compute-0 ceph-mon[75050]: pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2407125646' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2407125646' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:08 compute-0 ceph-mon[75050]: osdmap e31: 3 total, 3 up, 3 in
Nov 29 07:14:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:08 compute-0 sudo[99166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bntheifbkblyvectqmxhswziexwpqfzl ; /usr/bin/python3'
Nov 29 07:14:08 compute-0 sudo[99166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:08 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=14/15 n=0 ec=13/13 lis/c=14/14 les/c/f=15/15/0 sis=31 pruub=13.785488129s) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active pruub 72.314682007s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:08 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 31 pg[2.0( empty local-lis/les=20/21 n=0 ec=11/11 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=13.910026550s) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 66.336860657s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:08 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=14/15 n=0 ec=13/13 lis/c=14/14 les/c/f=15/15/0 sis=31 pruub=13.785488129s) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown pruub 72.314682007s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:08 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 31 pg[2.0( empty local-lis/les=20/21 n=0 ec=11/11 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=13.910026550s) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown pruub 66.336860657s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 python3[99171]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:09 compute-0 condescending_elion[99097]: {
Nov 29 07:14:09 compute-0 condescending_elion[99097]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "osd_id": 2,
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "type": "bluestore"
Nov 29 07:14:09 compute-0 condescending_elion[99097]:     },
Nov 29 07:14:09 compute-0 condescending_elion[99097]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "osd_id": 1,
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "type": "bluestore"
Nov 29 07:14:09 compute-0 condescending_elion[99097]:     },
Nov 29 07:14:09 compute-0 condescending_elion[99097]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "osd_id": 0,
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:09 compute-0 condescending_elion[99097]:         "type": "bluestore"
Nov 29 07:14:09 compute-0 condescending_elion[99097]:     }
Nov 29 07:14:09 compute-0 condescending_elion[99097]: }
Nov 29 07:14:09 compute-0 podman[99188]: 2025-11-29 07:14:09.100943983 +0000 UTC m=+0.049890547 container create 0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64 (image=quay.io/ceph/ceph:v18, name=upbeat_feynman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:14:09 compute-0 systemd[1]: libpod-4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40.scope: Deactivated successfully.
Nov 29 07:14:09 compute-0 podman[99080]: 2025-11-29 07:14:09.104215321 +0000 UTC m=+1.427348731 container died 4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 07:14:09 compute-0 systemd[1]: libpod-4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40.scope: Consumed 1.086s CPU time.
Nov 29 07:14:09 compute-0 systemd[1]: Started libpod-conmon-0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64.scope.
Nov 29 07:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b73dc5237fe9dd11a5f85e148cd4449552ab645611c84b5188997671167db0a7-merged.mount: Deactivated successfully.
Nov 29 07:14:09 compute-0 podman[99188]: 2025-11-29 07:14:09.07913392 +0000 UTC m=+0.028080504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e3feef02291127ec705276ba55eb7c484615846b4277dd80fcac1af6fb50e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e3feef02291127ec705276ba55eb7c484615846b4277dd80fcac1af6fb50e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:09 compute-0 podman[99188]: 2025-11-29 07:14:09.196818737 +0000 UTC m=+0.145765321 container init 0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64 (image=quay.io/ceph/ceph:v18, name=upbeat_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:09 compute-0 podman[99188]: 2025-11-29 07:14:09.205886344 +0000 UTC m=+0.154832908 container start 0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64 (image=quay.io/ceph/ceph:v18, name=upbeat_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:14:09 compute-0 podman[99080]: 2025-11-29 07:14:09.209086001 +0000 UTC m=+1.532219421 container remove 4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elion, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:09 compute-0 podman[99188]: 2025-11-29 07:14:09.218584099 +0000 UTC m=+0.167530673 container attach 0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64 (image=quay.io/ceph/ceph:v18, name=upbeat_feynman, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:14:09 compute-0 systemd[1]: libpod-conmon-4ddaca89962d61c9463a1d0d2752de7890663c5b8d56df942248b1e67ca83c40.scope: Deactivated successfully.
Nov 29 07:14:09 compute-0 sudo[98933]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:09 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev f87b8d8d-7e96-4ff7-9b4d-baefc7303ca8 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qxekyl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qxekyl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qxekyl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:09 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.qxekyl on compute-0
Nov 29 07:14:09 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.qxekyl on compute-0
Nov 29 07:14:09 compute-0 sudo[99222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:09 compute-0 sudo[99222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:09 compute-0 sudo[99222]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:09 compute-0 sudo[99247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 07:14:09 compute-0 sudo[99247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:09 compute-0 sudo[99247]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 29 07:14:09 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 38d1b8c5-5aa5-4607-bcda-d49764b363a6 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1f( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1c( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.b( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1e( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1d( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.a( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.9( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.8( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.6( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.5( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.4( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.3( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.2( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.7( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.c( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.e( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.f( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.d( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.10( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.11( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1f( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1e( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.13( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.12( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.7( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.14( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.6( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.4( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.15( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.2( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.b( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.16( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.12( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.17( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.18( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1b( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.17( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.18( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.19( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=14/15 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.19( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1a( empty local-lis/les=20/21 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1e( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.0( empty local-lis/les=31/32 n=0 ec=11/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.c( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.e( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.10( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.12( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.14( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=31/32 n=0 ec=13/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1a( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 32 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=20/20 les/c/f=21/21/0 sis=31) [2] r=0 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=14/14 les/c/f=15/15/0 sis=31) [1] r=0 lpr=31 pi=[14,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:09 compute-0 sudo[99272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:09 compute-0 sudo[99272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:09 compute-0 sudo[99272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:09 compute-0 sudo[99297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:14:09 compute-0 sudo[99297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:09 compute-0 podman[99382]: 2025-11-29 07:14:09.831329907 +0000 UTC m=+0.040713137 container create a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shtern, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:14:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:14:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812818647' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:14:09 compute-0 upbeat_feynman[99218]: 
Nov 29 07:14:09 compute-0 upbeat_feynman[99218]: {"fsid":"14ff1f30-5059-58f1-9a23-69871bb275a1","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":172,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1764400405,"num_in_osds":3,"osd_in_since":1764400372,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83750912,"bytes_avail":64328175616,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:13:07.423594+0000","services":{}},"progress_events":{"39fb0d9f-482d-4875-a97b-049c5684866f":{"message":"PG autoscaler increasing pool 3 PGs from 1 to 32 (0s)\n      [............................] ","progress":0,"add_to_ceph_s":false},"a58cbc1f-cdba-4ab8-bdb1-c47dfd50ce00":{"message":"PG autoscaler increasing pool 2 PGs from 1 to 32 (0s)\n      [............................] ","progress":0,"add_to_ceph_s":false}}}
Nov 29 07:14:09 compute-0 systemd[1]: Started libpod-conmon-a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e.scope.
Nov 29 07:14:09 compute-0 systemd[1]: libpod-0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64.scope: Deactivated successfully.
Nov 29 07:14:09 compute-0 podman[99188]: 2025-11-29 07:14:09.86969602 +0000 UTC m=+0.818642584 container died 0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64 (image=quay.io/ceph/ceph:v18, name=upbeat_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e3feef02291127ec705276ba55eb7c484615846b4277dd80fcac1af6fb50e5-merged.mount: Deactivated successfully.
Nov 29 07:14:09 compute-0 podman[99382]: 2025-11-29 07:14:09.813638857 +0000 UTC m=+0.023022117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:09 compute-0 podman[99382]: 2025-11-29 07:14:09.914750864 +0000 UTC m=+0.124134114 container init a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:09 compute-0 podman[99382]: 2025-11-29 07:14:09.920731347 +0000 UTC m=+0.130114577 container start a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shtern, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:14:09 compute-0 trusting_shtern[99400]: 167 167
Nov 29 07:14:09 compute-0 podman[99188]: 2025-11-29 07:14:09.92419817 +0000 UTC m=+0.873144734 container remove 0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64 (image=quay.io/ceph/ceph:v18, name=upbeat_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:09 compute-0 systemd[1]: libpod-a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e.scope: Deactivated successfully.
Nov 29 07:14:09 compute-0 systemd[1]: libpod-conmon-0ca7eac3f38f0683df2a6c9ad3f2e7ce732424026d5f0c9f3c9bd9a81b557c64.scope: Deactivated successfully.
Nov 29 07:14:09 compute-0 podman[99382]: 2025-11-29 07:14:09.93300192 +0000 UTC m=+0.142385170 container attach a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:14:09 compute-0 podman[99382]: 2025-11-29 07:14:09.933449522 +0000 UTC m=+0.142832752 container died a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shtern, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:09 compute-0 sudo[99166]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-731eba383a241a23448bbdf8b0d33c8963b1ceaa3e4cd12dfb52ba99318e7b0b-merged.mount: Deactivated successfully.
Nov 29 07:14:09 compute-0 podman[99382]: 2025-11-29 07:14:09.973792058 +0000 UTC m=+0.183175288 container remove a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shtern, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:09 compute-0 systemd[1]: libpod-conmon-a5263b3b9296c867debbb80bbb3a724f4f5a3a3b2b628ef451131d03d5f26e0e.scope: Deactivated successfully.
Nov 29 07:14:10 compute-0 systemd[1]: Reloading.
Nov 29 07:14:10 compute-0 systemd-rc-local-generator[99480]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:14:10 compute-0 systemd-sysv-generator[99486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qxekyl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.qxekyl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:10 compute-0 ceph-mon[75050]: Deploying daemon rgw.rgw.compute-0.qxekyl on compute-0
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:10 compute-0 ceph-mon[75050]: osdmap e32: 3 total, 3 up, 3 in
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 07:14:10 compute-0 ceph-mon[75050]: pgmap v85: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1812818647' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:14:10 compute-0 sudo[99460]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybgfuznmxflbxpaapwfugvudjhkcqzg ; /usr/bin/python3'
Nov 29 07:14:10 compute-0 sudo[99460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:10 compute-0 systemd[1]: Reloading.
Nov 29 07:14:10 compute-0 systemd-sysv-generator[99529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:14:10 compute-0 systemd-rc-local-generator[99525]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 29 07:14:10 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 48941c1a-5094-4f0b-8728-b9f9c1c58784 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 07:14:10 compute-0 python3[99495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:10 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=11.742144585s) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active pruub 65.696365356s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:10 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=20/21 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=11.742144585s) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown pruub 65.696365356s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:10 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 29 07:14:10 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 29 07:14:10 compute-0 podman[99535]: 2025-11-29 07:14:10.505577437 +0000 UTC m=+0.055784567 container create 3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:14:10 compute-0 ceph-mgr[75345]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 29 07:14:10 compute-0 podman[99535]: 2025-11-29 07:14:10.489445449 +0000 UTC m=+0.039652599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:10 compute-0 systemd[1]: Started libpod-conmon-3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7.scope.
Nov 29 07:14:10 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=33 pruub=12.136978149s) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active pruub 77.669227600s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:10 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 33 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=33 pruub=12.136978149s) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown pruub 77.669227600s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:10 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.qxekyl for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:14:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831463933fe06b2e23cae9e58b52f3da0c14f1461e673571ff5232a7b95d5c34/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/831463933fe06b2e23cae9e58b52f3da0c14f1461e673571ff5232a7b95d5c34/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:10 compute-0 podman[99535]: 2025-11-29 07:14:10.642841607 +0000 UTC m=+0.193048767 container init 3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:14:10 compute-0 podman[99535]: 2025-11-29 07:14:10.651308627 +0000 UTC m=+0.201515757 container start 3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7 (image=quay.io/ceph/ceph:v18, name=exciting_pike, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:10 compute-0 podman[99535]: 2025-11-29 07:14:10.656680133 +0000 UTC m=+0.206887363 container attach 3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:14:10 compute-0 podman[99603]: 2025-11-29 07:14:10.833177918 +0000 UTC m=+0.048883810 container create 4edd4e145deee38b7ccf448ee661159c19a1b05ba5a7fb95dd92a6fb13e29fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-rgw-rgw-compute-0-qxekyl, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bfdfdf558c247ee2513ed0119bd7c3a2b7280136a4dd1b92c24543f63b52341/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bfdfdf558c247ee2513ed0119bd7c3a2b7280136a4dd1b92c24543f63b52341/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bfdfdf558c247ee2513ed0119bd7c3a2b7280136a4dd1b92c24543f63b52341/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bfdfdf558c247ee2513ed0119bd7c3a2b7280136a4dd1b92c24543f63b52341/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.qxekyl supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:10 compute-0 podman[99603]: 2025-11-29 07:14:10.898180424 +0000 UTC m=+0.113886336 container init 4edd4e145deee38b7ccf448ee661159c19a1b05ba5a7fb95dd92a6fb13e29fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-rgw-rgw-compute-0-qxekyl, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:14:10 compute-0 podman[99603]: 2025-11-29 07:14:10.90286039 +0000 UTC m=+0.118566282 container start 4edd4e145deee38b7ccf448ee661159c19a1b05ba5a7fb95dd92a6fb13e29fbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-rgw-rgw-compute-0-qxekyl, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:10 compute-0 bash[99603]: 4edd4e145deee38b7ccf448ee661159c19a1b05ba5a7fb95dd92a6fb13e29fbf
Nov 29 07:14:10 compute-0 podman[99603]: 2025-11-29 07:14:10.812878646 +0000 UTC m=+0.028584568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:10 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.qxekyl for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:14:10 compute-0 sudo[99297]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:10 compute-0 radosgw[99623]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:14:10 compute-0 radosgw[99623]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 07:14:10 compute-0 radosgw[99623]: framework: beast
Nov 29 07:14:10 compute-0 radosgw[99623]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 07:14:10 compute-0 radosgw[99623]: init_numa not setting numa affinity
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:10 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev f87b8d8d-7e96-4ff7-9b4d-baefc7303ca8 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 07:14:10 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event f87b8d8d-7e96-4ff7-9b4d-baefc7303ca8 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 29 07:14:10 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 29 07:14:10 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:14:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 07:14:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:11 compute-0 sudo[99704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:11 compute-0 sudo[99704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:11 compute-0 sudo[99704]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:11 compute-0 sudo[99729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:14:11 compute-0 sudo[99729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:11 compute-0 sudo[99729]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:11 compute-0 sudo[99754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:11 compute-0 sudo[99754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:11 compute-0 sudo[99754]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:11 compute-0 sudo[99779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:11 compute-0 sudo[99779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:11 compute-0 sudo[99779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:14:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869722298' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:14:11 compute-0 exciting_pike[99552]: 
Nov 29 07:14:11 compute-0 exciting_pike[99552]: {"epoch":1,"fsid":"14ff1f30-5059-58f1-9a23-69871bb275a1","modified":"2025-11-29T07:11:11.715435Z","created":"2025-11-29T07:11:11.715435Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 29 07:14:11 compute-0 exciting_pike[99552]: dumped monmap epoch 1
Nov 29 07:14:11 compute-0 sudo[99804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:11 compute-0 sudo[99804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:11 compute-0 sudo[99804]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:11 compute-0 systemd[1]: libpod-3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7.scope: Deactivated successfully.
Nov 29 07:14:11 compute-0 podman[99535]: 2025-11-29 07:14:11.300314409 +0000 UTC m=+0.850521549 container died 3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-831463933fe06b2e23cae9e58b52f3da0c14f1461e673571ff5232a7b95d5c34-merged.mount: Deactivated successfully.
Nov 29 07:14:11 compute-0 podman[99535]: 2025-11-29 07:14:11.351092389 +0000 UTC m=+0.901299519 container remove 3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:14:11 compute-0 sudo[99832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:14:11 compute-0 sudo[99832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:11 compute-0 systemd[1]: libpod-conmon-3d93891b6df734471847b6b59ef564b9739f858402cf7220345d6fd98554bfd7.scope: Deactivated successfully.
Nov 29 07:14:11 compute-0 sudo[99460]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 07:14:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 29 07:14:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 29 07:14:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 07:14:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 9309c8e4-541d-4fad-9547-3a549eb2f392 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev a58cbc1f-cdba-4ab8-bdb1-c47dfd50ce00 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event a58cbc1f-cdba-4ab8-bdb1-c47dfd50ce00 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 39fb0d9f-482d-4875-a97b-049c5684866f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 39fb0d9f-482d-4875-a97b-049c5684866f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 52b9f2c5-884e-44b0-b086-4266d8ae85aa (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 52b9f2c5-884e-44b0-b086-4266d8ae85aa (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 38d1b8c5-5aa5-4607-bcda-d49764b363a6 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 38d1b8c5-5aa5-4607-bcda-d49764b363a6 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=20/21 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 48941c1a-5094-4f0b-8728-b9f9c1c58784 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 48941c1a-5094-4f0b-8728-b9f9c1c58784 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 9309c8e4-541d-4fad-9547-3a549eb2f392 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 9309c8e4-541d-4fad-9547-3a549eb2f392 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:11 compute-0 ceph-mon[75050]: osdmap e33: 3 total, 3 up, 3 in
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:14:11 compute-0 ceph-mon[75050]: 3.1 scrub starts
Nov 29 07:14:11 compute-0 ceph-mon[75050]: 3.1 scrub ok
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2869722298' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1e( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.7( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.b( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.4( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.f( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.10( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.11( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.12( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.16( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.17( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=14/15 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.0( empty local-lis/les=33/34 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v88: 132 pgs: 1 peering, 94 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=20/20 les/c/f=21/21/0 sis=33) [2] r=0 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 07:14:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1e( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1d( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1f( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.b( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.3( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.19( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.0( empty local-lis/les=33/34 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.c( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.6( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.15( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.16( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.17( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 34 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=14/14 les/c/f=15/15/0 sis=33) [0] r=0 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:11 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:11 compute-0 sudo[99968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxbcqrphuegymyqqrhngbwreuqogktia ; /usr/bin/python3'
Nov 29 07:14:11 compute-0 sudo[99968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:11 compute-0 podman[99951]: 2025-11-29 07:14:11.822829347 +0000 UTC m=+0.080052497 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:14:11 compute-0 podman[99951]: 2025-11-29 07:14:11.917612122 +0000 UTC m=+0.174835252 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:11 compute-0 python3[99975]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:11 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 07:14:11 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 07:14:11 compute-0 podman[99990]: 2025-11-29 07:14:11.986227496 +0000 UTC m=+0.041758576 container create 1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732 (image=quay.io/ceph/ceph:v18, name=fervent_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:14:12 compute-0 systemd[1]: Started libpod-conmon-1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732.scope.
Nov 29 07:14:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737e4ec10508f08452ed613b9bea0378e4daff75385062cb3b1baeb37d6669cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/737e4ec10508f08452ed613b9bea0378e4daff75385062cb3b1baeb37d6669cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:12 compute-0 podman[99990]: 2025-11-29 07:14:11.967795066 +0000 UTC m=+0.023326166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:12 compute-0 podman[99990]: 2025-11-29 07:14:12.074793443 +0000 UTC m=+0.130324543 container init 1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732 (image=quay.io/ceph/ceph:v18, name=fervent_einstein, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:12 compute-0 podman[99990]: 2025-11-29 07:14:12.082103611 +0000 UTC m=+0.137634701 container start 1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732 (image=quay.io/ceph/ceph:v18, name=fervent_einstein, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:14:12 compute-0 podman[99990]: 2025-11-29 07:14:12.085390621 +0000 UTC m=+0.140921701 container attach 1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732 (image=quay.io/ceph/ceph:v18, name=fervent_einstein, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:12 compute-0 sudo[99832]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 29 07:14:12 compute-0 ceph-mon[75050]: Saving service rgw.rgw spec with placement compute-0
Nov 29 07:14:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:14:12 compute-0 ceph-mon[75050]: osdmap e34: 3 total, 3 up, 3 in
Nov 29 07:14:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 07:14:12 compute-0 ceph-mon[75050]: pgmap v88: 132 pgs: 1 peering, 94 unknown, 37 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
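
[annotation] The pg_num_actual traffic above is the mgr, not an operator: since Nautilus, `ceph osd pool set <pool> pg_num <n>` only records a target, and the mgr walks the real split/merge forward by issuing pg_num_actual steps like the ones audited here. The operator-facing equivalent of what is being applied:

    # set the target; the mgr converges pg_num_actual toward it
    ceph osd pool set cephfs.cephfs.data pg_num 32
    ceph osd pool set cephfs.cephfs.meta pg_num 16
    ceph osd pool get cephfs.cephfs.data pg_num
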
Nov 29 07:14:12 compute-0 ceph-mon[75050]: 4.1 scrub starts
Nov 29 07:14:12 compute-0 ceph-mon[75050]: 4.1 scrub ok
Nov 29 07:14:12 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 29 07:14:12 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
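
[annotation] These two mon commands are cephadm refreshing the client files it distributes to managed hosts: a minimal ceph.conf (essentially the fsid plus mon addresses) and the admin keyring. Issued directly they look like:

    ceph config generate-minimal-conf   # emits the minimal ceph.conf cephadm installs
    ceph auth get client.admin          # emits the keyring for /etc/ceph/ceph.client.admin.keyring
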
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:14:12 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 35 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=8.614796638s) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active pruub 70.788169861s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:12 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 35 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=8.614796638s) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown pruub 70.788169861s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:12 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:12 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 916e42e7-b4a7-4f19-a150-cbe081778e31 does not exist
Nov 29 07:14:12 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 75b35438-3072-466f-b34d-cc83321339fc does not exist
Nov 29 07:14:12 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c2c8ff38-6662-4825-a809-49ed446699d2 does not exist
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
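
[annotation] Before creating OSDs, cephadm checks for destroyed OSDs (whose ids a replacement may reclaim) and fetches the bootstrap-osd key that ceph-volume will use to register the new daemons. The same queries from a shell:

    ceph osd tree destroyed --format json
    ceph auth get client.bootstrap-osd
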
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:12 compute-0 sudo[100144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:12 compute-0 sudo[100144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:12 compute-0 sudo[100144]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:12 compute-0 sudo[100171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:12 compute-0 sudo[100171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:12 compute-0 sudo[100171]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 07:14:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/35778660' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 07:14:12 compute-0 fervent_einstein[100032]: [client.openstack]
Nov 29 07:14:12 compute-0 fervent_einstein[100032]:         key = AQBznCppAAAAABAATpsmuZlSZuS833gbXPyFSA==
Nov 29 07:14:12 compute-0 fervent_einstein[100032]:         caps mgr = "allow *"
Nov 29 07:14:12 compute-0 fervent_einstein[100032]:         caps mon = "profile rbd"
Nov 29 07:14:12 compute-0 fervent_einstein[100032]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
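
[annotation] The keyring printed by fervent_einstein is the OpenStack service user: rbd-profile access to the Nova/Cinder/Glance pools (here including the two CephFS pools) plus full mgr access. A hedged sketch of how such a user is created, with the caps copied verbatim from the output above:

    ceph auth get-or-create client.openstack \
        mgr 'allow *' \
        mon 'profile rbd' \
        osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data'
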
Nov 29 07:14:12 compute-0 sudo[100196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:12 compute-0 sudo[100196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:12 compute-0 systemd[1]: libpod-1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732.scope: Deactivated successfully.
Nov 29 07:14:12 compute-0 podman[99990]: 2025-11-29 07:14:12.752759243 +0000 UTC m=+0.808290323 container died 1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732 (image=quay.io/ceph/ceph:v18, name=fervent_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:14:12 compute-0 sudo[100196]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-737e4ec10508f08452ed613b9bea0378e4daff75385062cb3b1baeb37d6669cd-merged.mount: Deactivated successfully.
Nov 29 07:14:12 compute-0 systemd[76617]: Starting Mark boot as successful...
Nov 29 07:14:12 compute-0 systemd[76617]: Finished Mark boot as successful.
Nov 29 07:14:12 compute-0 podman[99990]: 2025-11-29 07:14:12.80857495 +0000 UTC m=+0.864106030 container remove 1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732 (image=quay.io/ceph/ceph:v18, name=fervent_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:12 compute-0 sudo[100224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
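
[annotation] This sudo line is the actual OSD creation step: the copied cephadm binary under /var/lib/ceph/<fsid>/ wraps ceph-volume in the pinned ceph image, and CEPH_VOLUME_OSDSPEC_AFFINITY tags the resulting OSDs as belonging to the default_drive_group service spec. Stripped of the container plumbing, the inner command is:

    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --yes --no-systemd   # --no-systemd: cephadm manages the units itself
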
Nov 29 07:14:12 compute-0 sudo[100224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:12 compute-0 systemd[1]: libpod-conmon-1984a5dd746f19739c76e375344f737d6a8ecde942af8279e0e8226c9af15732.scope: Deactivated successfully.
Nov 29 07:14:12 compute-0 sudo[99968]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:13 compute-0 podman[100301]: 2025-11-29 07:14:13.101343724 +0000 UTC m=+0.040904382 container create eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:14:13 compute-0 podman[100301]: 2025-11-29 07:14:13.082129331 +0000 UTC m=+0.021690019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:13 compute-0 systemd[1]: Started libpod-conmon-eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a.scope.
Nov 29 07:14:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:13 compute-0 podman[100301]: 2025-11-29 07:14:13.331427455 +0000 UTC m=+0.270988143 container init eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:14:13 compute-0 podman[100301]: 2025-11-29 07:14:13.338072006 +0000 UTC m=+0.277632664 container start eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:13 compute-0 epic_matsumoto[100317]: 167 167
Nov 29 07:14:13 compute-0 systemd[1]: libpod-eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a.scope: Deactivated successfully.
Nov 29 07:14:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v90: 178 pgs: 2 peering, 77 unknown, 99 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1017 B/s rd, 1017 B/s wr, 1 op/s
Nov 29 07:14:13 compute-0 podman[100301]: 2025-11-29 07:14:13.452726731 +0000 UTC m=+0.392287389 container attach eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:14:13 compute-0 podman[100301]: 2025-11-29 07:14:13.453117452 +0000 UTC m=+0.392678110 container died eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 29 07:14:13 compute-0 ceph-mon[75050]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 07:14:13 compute-0 ceph-mon[75050]: 2.1 scrub starts
Nov 29 07:14:13 compute-0 ceph-mon[75050]: osdmap e35: 3 total, 3 up, 3 in
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/35778660' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 07:14:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 07:14:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
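
[annotation] The POOL_APP_NOT_ENABLED warning raised at 07:14:12 clears here: it fires whenever a pool carries no application tag, and the RGW client removes it by tagging its freshly created pools, exactly as the audit lines show for .rgw.root. Done by hand that is:

    ceph osd pool application enable .rgw.root rgw
    ceph health detail   # confirms POOL_APP_NOT_ENABLED is gone
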
Nov 29 07:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3636543de61b92c87e538ad1daa9490cb7b6336595d65a17b5879e78173f3840-merged.mount: Deactivated successfully.
Nov 29 07:14:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 29 07:14:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1c( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.12( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.16( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.15( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.17( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.7( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.c( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.d( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.19( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1a( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=19/20 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 07:14:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 podman[100301]: 2025-11-29 07:14:13.786594942 +0000 UTC m=+0.726155600 container remove eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.12( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.10( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.16( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.14( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.17( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.6( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.0( empty local-lis/les=35/36 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.7( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.8( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.19( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 36 pg[7.1d( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=19/19 les/c/f=20/20/0 sis=35) [1] r=0 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
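
[annotation] The pg[7.*] and pg[6.*] bursts above and below trace the normal PG state machine after the pg_num changes created new intervals: start_peering_interval when the mapping changes epoch, Start -> Primary because the logging OSD heads the acting set ([1] or [0], r=0), and AllReplicasActivated once activation completes. With a single OSD in the acting set there are no replicas to wait for, so each PG goes active within the same second. To inspect one of these PGs afterwards (pgid taken from the log):

    ceph pg 7.0 query   # full peering/interval history for one PG
    ceph pg stat        # cluster-wide PG state summary
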
Nov 29 07:14:13 compute-0 systemd[1]: libpod-conmon-eadcfd77c11b45b6794ada77ca439670e257052c82f6528d014cc208576c080a.scope: Deactivated successfully.
Nov 29 07:14:13 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 29 07:14:13 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 29 07:14:14 compute-0 podman[100343]: 2025-11-29 07:14:13.929002742 +0000 UTC m=+0.024132637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:14 compute-0 podman[100343]: 2025-11-29 07:14:14.151444695 +0000 UTC m=+0.246574560 container create 525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=12.946634293s) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active pruub 82.044776917s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=12.946634293s) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown pruub 82.044776917s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.1( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.4( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.6( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.9( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.b( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.c( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 36 pg[6.f( empty local-lis/les=17/18 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:14 compute-0 systemd[1]: Started libpod-conmon-525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54.scope.
Nov 29 07:14:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3265ba7c54cbf4cce71cc9aea9f411d01f952f617da3dfb6c2e467f758309e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3265ba7c54cbf4cce71cc9aea9f411d01f952f617da3dfb6c2e467f758309e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3265ba7c54cbf4cce71cc9aea9f411d01f952f617da3dfb6c2e467f758309e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3265ba7c54cbf4cce71cc9aea9f411d01f952f617da3dfb6c2e467f758309e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c3265ba7c54cbf4cce71cc9aea9f411d01f952f617da3dfb6c2e467f758309e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:14 compute-0 podman[100343]: 2025-11-29 07:14:14.2856144 +0000 UTC m=+0.380744285 container init 525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nobel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:14:14 compute-0 podman[100343]: 2025-11-29 07:14:14.292765525 +0000 UTC m=+0.387895390 container start 525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:14:14 compute-0 podman[100343]: 2025-11-29 07:14:14.296714972 +0000 UTC m=+0.391844847 container attach 525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nobel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:14:14 compute-0 sudo[100511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cotxfmcfhjrshmntsnoyylxzowlcfepc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400454.0135396-36756-3439241121725/async_wrapper.py j35055352347 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400454.0135396-36756-3439241121725/AnsiballZ_command.py _'
Nov 29 07:14:14 compute-0 sudo[100511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 29 07:14:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 29 07:14:14 compute-0 ansible-async_wrapper.py[100513]: Invoked with j35055352347 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400454.0135396-36756-3439241121725/AnsiballZ_command.py _
Nov 29 07:14:14 compute-0 ansible-async_wrapper.py[100516]: Starting module and watcher
Nov 29 07:14:14 compute-0 ansible-async_wrapper.py[100516]: Start watching 100517 (30)
Nov 29 07:14:14 compute-0 ansible-async_wrapper.py[100517]: Start module (100517)
Nov 29 07:14:14 compute-0 ansible-async_wrapper.py[100513]: Return async_wrapper task started.
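
[annotation] The async_wrapper entries above are Ansible's background-job machinery: Zuul launched the module fire-and-forget with a 30-second window (the "30" in the watcher line), so the play can poll the job id later instead of blocking on the podman run. An illustrative ad-hoc equivalent, assuming an inventory host named compute-0 (-B/-P are Ansible's background/poll flags; the payload here is hypothetical, the real task runs the ceph container shown below):

    ansible compute-0 -m ansible.builtin.command \
        -a 'podman ps' -B 30 -P 0
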
Nov 29 07:14:14 compute-0 sudo[100511]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 07:14:14 compute-0 ceph-mon[75050]: 2.1 scrub ok
Nov 29 07:14:14 compute-0 ceph-mon[75050]: pgmap v90: 178 pgs: 2 peering, 77 unknown, 99 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1017 B/s rd, 1017 B/s wr, 1 op/s
Nov 29 07:14:14 compute-0 ceph-mon[75050]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 07:14:14 compute-0 ceph-mon[75050]: osdmap e36: 3 total, 3 up, 3 in
Nov 29 07:14:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 07:14:14 compute-0 ceph-mon[75050]: 4.2 scrub starts
Nov 29 07:14:14 compute-0 ceph-mon[75050]: 4.2 scrub ok
Nov 29 07:14:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 07:14:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 29 07:14:14 compute-0 python3[100518]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
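
[annotation] This is the payload of that async task: the assimilation playbook probing whether the cephadm orchestrator answers, by running `ceph orch status --format json` inside the v18 image against the host's /etc/ceph. The same command reflowed for readability (content unchanged apart from dropping the unused assimilate_ceph.conf mount):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch status --format json

On success it prints a small JSON document naming the orchestrator backend and whether it is available; the mgr audit entry at the end of this excerpt shows the command being dispatched.
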
Nov 29 07:14:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 29 07:14:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.a( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.b( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.8( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.6( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.1( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.4( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.9( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.3( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.f( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.e( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.2( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.c( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.d( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=35/37 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.5( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 37 pg[6.7( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=17/17 les/c/f=18/18/0 sis=35) [0] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:14 compute-0 podman[100519]: 2025-11-29 07:14:14.729656106 +0000 UTC m=+0.050393081 container create ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859 (image=quay.io/ceph/ceph:v18, name=admiring_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:14 compute-0 systemd[1]: Started libpod-conmon-ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859.scope.
Nov 29 07:14:14 compute-0 podman[100519]: 2025-11-29 07:14:14.705835098 +0000 UTC m=+0.026572093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b3f440e9660319ee96aca1869c76e4093fc4d0694b54b6141e46ed20bdb1cc7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b3f440e9660319ee96aca1869c76e4093fc4d0694b54b6141e46ed20bdb1cc7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:14 compute-0 podman[100519]: 2025-11-29 07:14:14.82959552 +0000 UTC m=+0.150332515 container init ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859 (image=quay.io/ceph/ceph:v18, name=admiring_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:14 compute-0 podman[100519]: 2025-11-29 07:14:14.835678945 +0000 UTC m=+0.156415920 container start ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859 (image=quay.io/ceph/ceph:v18, name=admiring_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:14:14 compute-0 podman[100519]: 2025-11-29 07:14:14.838986776 +0000 UTC m=+0.159723751 container attach ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859 (image=quay.io/ceph/ceph:v18, name=admiring_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:14:15 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:15 compute-0 admiring_franklin[100536]: 
Nov 29 07:14:15 compute-0 admiring_franklin[100536]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:14:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v93: 179 pgs: 2 peering, 78 unknown, 99 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1019 B/s rd, 1019 B/s wr, 1 op/s
Nov 29 07:14:15 compute-0 wizardly_nobel[100457]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:14:15 compute-0 wizardly_nobel[100457]: --> relative data size: 1.0
Nov 29 07:14:15 compute-0 wizardly_nobel[100457]: --> All data devices are unavailable
Nov 29 07:14:15 compute-0 systemd[1]: libpod-ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859.scope: Deactivated successfully.
Nov 29 07:14:15 compute-0 podman[100519]: 2025-11-29 07:14:15.462412364 +0000 UTC m=+0.783149329 container died ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859 (image=quay.io/ceph/ceph:v18, name=admiring_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:14:15 compute-0 systemd[1]: libpod-525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54.scope: Deactivated successfully.
Nov 29 07:14:15 compute-0 systemd[1]: libpod-525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54.scope: Consumed 1.116s CPU time.
Nov 29 07:14:15 compute-0 ceph-mgr[75345]: [progress INFO root] Writing back 10 completed events
Nov 29 07:14:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:14:15 compute-0 sudo[100653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhutlblrncrkrcetmwypnlknccofqdwh ; /usr/bin/python3'
Nov 29 07:14:15 compute-0 sudo[100653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 07:14:15 compute-0 python3[100655]: ansible-ansible.legacy.async_status Invoked with jid=j35055352347.100513 mode=status _async_dir=/root/.ansible_async
Nov 29 07:14:15 compute-0 sudo[100653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 29 07:14:15 compute-0 ceph-mon[75050]: 2.2 scrub starts
Nov 29 07:14:15 compute-0 ceph-mon[75050]: 2.2 scrub ok
Nov 29 07:14:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 07:14:15 compute-0 ceph-mon[75050]: osdmap e37: 3 total, 3 up, 3 in
Nov 29 07:14:15 compute-0 ceph-mon[75050]: from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:15 compute-0 ceph-mon[75050]: pgmap v93: 179 pgs: 2 peering, 78 unknown, 99 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1019 B/s rd, 1019 B/s wr, 1 op/s
Nov 29 07:14:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 29 07:14:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 07:14:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b3f440e9660319ee96aca1869c76e4093fc4d0694b54b6141e46ed20bdb1cc7-merged.mount: Deactivated successfully.
Nov 29 07:14:15 compute-0 podman[100519]: 2025-11-29 07:14:15.941216743 +0000 UTC m=+1.261953718 container remove ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859 (image=quay.io/ceph/ceph:v18, name=admiring_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:14:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 07:14:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 07:14:16 compute-0 podman[100343]: 2025-11-29 07:14:16.005555461 +0000 UTC m=+2.100685346 container died 525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c3265ba7c54cbf4cce71cc9aea9f411d01f952f617da3dfb6c2e467f758309e-merged.mount: Deactivated successfully.
Nov 29 07:14:16 compute-0 systemd[1]: libpod-conmon-ceacd19b5a32dc3c374a6ce537c6ef1e9a612abc877b211b89314a78e0c92859.scope: Deactivated successfully.
Nov 29 07:14:16 compute-0 podman[100343]: 2025-11-29 07:14:16.063570848 +0000 UTC m=+2.158700713 container remove 525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_nobel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:14:16 compute-0 systemd[1]: libpod-conmon-525aabe85ca1a7238c606c05932a1a0c40294d99aa6a17a400e2400c4ee9ce54.scope: Deactivated successfully.
Nov 29 07:14:16 compute-0 sudo[100224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:16 compute-0 sudo[100660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:16 compute-0 sudo[100660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:16 compute-0 sudo[100660]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:16 compute-0 sudo[100685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:16 compute-0 sudo[100685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:16 compute-0 sudo[100685]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:16 compute-0 sudo[100710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:16 compute-0 sudo[100710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:16 compute-0 sudo[100710]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:16 compute-0 sudo[100735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:14:16 compute-0 sudo[100735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Nov 29 07:14:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Nov 29 07:14:16 compute-0 podman[100798]: 2025-11-29 07:14:16.709515988 +0000 UTC m=+0.025045502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 07:14:16 compute-0 ansible-async_wrapper.py[100517]: Module complete (100517)
Nov 29 07:14:17 compute-0 sudo[100858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxonmcjpelwqgktdexybcqfknwjfavkk ; /usr/bin/python3'
Nov 29 07:14:17 compute-0 sudo[100858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:17 compute-0 podman[100798]: 2025-11-29 07:14:17.162813534 +0000 UTC m=+0.478343028 container create 17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:14:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 07:14:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 29 07:14:17 compute-0 python3[100860]: ansible-ansible.legacy.async_status Invoked with jid=j35055352347.100513 mode=status _async_dir=/root/.ansible_async
Nov 29 07:14:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 29 07:14:17 compute-0 sudo[100858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:17 compute-0 ceph-mon[75050]: osdmap e38: 3 total, 3 up, 3 in
Nov 29 07:14:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 07:14:17 compute-0 ceph-mon[75050]: 4.3 scrub starts
Nov 29 07:14:17 compute-0 ceph-mon[75050]: 4.3 scrub ok
Nov 29 07:14:17 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 39 pg[10.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:17 compute-0 systemd[1]: Started libpod-conmon-17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6.scope.
Nov 29 07:14:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:17 compute-0 podman[100798]: 2025-11-29 07:14:17.248947365 +0000 UTC m=+0.564476879 container init 17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:14:17 compute-0 podman[100798]: 2025-11-29 07:14:17.256286953 +0000 UTC m=+0.571816437 container start 17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:17 compute-0 dazzling_bartik[100864]: 167 167
Nov 29 07:14:17 compute-0 systemd[1]: libpod-17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6.scope: Deactivated successfully.
Nov 29 07:14:17 compute-0 podman[100798]: 2025-11-29 07:14:17.261274809 +0000 UTC m=+0.576804323 container attach 17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:14:17 compute-0 podman[100798]: 2025-11-29 07:14:17.261507755 +0000 UTC m=+0.577037249 container died 17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-00d8b03affcbfc6b1fe099ba50452c5067198d73428f9e79ad669f033efaa321-merged.mount: Deactivated successfully.
Nov 29 07:14:17 compute-0 podman[100798]: 2025-11-29 07:14:17.31689772 +0000 UTC m=+0.632427244 container remove 17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 29 07:14:17 compute-0 systemd[1]: libpod-conmon-17816fc84d88d8f7ba62542c43ade32f6eaee8d3f1b4b0a0bb2504cd108887d6.scope: Deactivated successfully.
Nov 29 07:14:17 compute-0 sudo[100929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfvojbxdrocfpawrbuevnetecbvesccy ; /usr/bin/python3'
Nov 29 07:14:17 compute-0 sudo[100929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:17 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 29 07:14:17 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 29 07:14:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v96: 180 pgs: 1 peering, 33 unknown, 146 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:17 compute-0 podman[100951]: 2025-11-29 07:14:17.481555014 +0000 UTC m=+0.043182404 container create 0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:17 compute-0 python3[100945]: ansible-ansible.legacy.async_status Invoked with jid=j35055352347.100513 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 07:14:17 compute-0 sudo[100929]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:17 compute-0 systemd[1]: Started libpod-conmon-0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea.scope.
Nov 29 07:14:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a15a82977c97f993f0b6b00082c5671587f926112612daa1a47b031604b733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a15a82977c97f993f0b6b00082c5671587f926112612daa1a47b031604b733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a15a82977c97f993f0b6b00082c5671587f926112612daa1a47b031604b733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a15a82977c97f993f0b6b00082c5671587f926112612daa1a47b031604b733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:17 compute-0 podman[100951]: 2025-11-29 07:14:17.465076687 +0000 UTC m=+0.026704117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:17 compute-0 podman[100951]: 2025-11-29 07:14:17.573656527 +0000 UTC m=+0.135283957 container init 0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:17 compute-0 podman[100951]: 2025-11-29 07:14:17.580957145 +0000 UTC m=+0.142584545 container start 0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:14:17 compute-0 podman[100951]: 2025-11-29 07:14:17.584738078 +0000 UTC m=+0.146365498 container attach 0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:18 compute-0 sudo[100995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baderpfrzxamddmaducmeagxffrwbmqc ; /usr/bin/python3'
Nov 29 07:14:18 compute-0 sudo[100995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 07:14:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 29 07:14:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 29 07:14:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 07:14:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:14:18 compute-0 ceph-mon[75050]: 3.2 deep-scrub starts
Nov 29 07:14:18 compute-0 ceph-mon[75050]: 3.2 deep-scrub ok
Nov 29 07:14:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/722835664' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 07:14:18 compute-0 ceph-mon[75050]: osdmap e39: 3 total, 3 up, 3 in
Nov 29 07:14:18 compute-0 ceph-mon[75050]: pgmap v96: 180 pgs: 1 peering, 33 unknown, 146 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:18 compute-0 python3[100997]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:18 compute-0 podman[100998]: 2025-11-29 07:14:18.290649178 +0000 UTC m=+0.044333636 container create 3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c (image=quay.io/ceph/ceph:v18, name=thirsty_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:14:18 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:14:18 compute-0 systemd[1]: Started libpod-conmon-3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c.scope.
Nov 29 07:14:18 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 07:14:18 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:18 compute-0 podman[100998]: 2025-11-29 07:14:18.272182556 +0000 UTC m=+0.025866914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db88cdff1dd3aa63ae4f737aa05bd0f5a8aeeaf53516bec19c379f96eda4be3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db88cdff1dd3aa63ae4f737aa05bd0f5a8aeeaf53516bec19c379f96eda4be3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:18 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 07:14:18 compute-0 podman[100998]: 2025-11-29 07:14:18.384623211 +0000 UTC m=+0.138307579 container init 3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c (image=quay.io/ceph/ceph:v18, name=thirsty_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:14:18 compute-0 podman[100998]: 2025-11-29 07:14:18.391070276 +0000 UTC m=+0.144754614 container start 3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c (image=quay.io/ceph/ceph:v18, name=thirsty_euclid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:18 compute-0 podman[100998]: 2025-11-29 07:14:18.394600472 +0000 UTC m=+0.148284810 container attach 3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c (image=quay.io/ceph/ceph:v18, name=thirsty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:18 compute-0 clever_benz[100967]: {
Nov 29 07:14:18 compute-0 clever_benz[100967]:     "0": [
Nov 29 07:14:18 compute-0 clever_benz[100967]:         {
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "devices": [
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "/dev/loop3"
Nov 29 07:14:18 compute-0 clever_benz[100967]:             ],
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_name": "ceph_lv0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_size": "21470642176",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "name": "ceph_lv0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "tags": {
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.crush_device_class": "",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.encrypted": "0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osd_id": "0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.type": "block",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.vdo": "0"
Nov 29 07:14:18 compute-0 clever_benz[100967]:             },
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "type": "block",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "vg_name": "ceph_vg0"
Nov 29 07:14:18 compute-0 clever_benz[100967]:         }
Nov 29 07:14:18 compute-0 clever_benz[100967]:     ],
Nov 29 07:14:18 compute-0 clever_benz[100967]:     "1": [
Nov 29 07:14:18 compute-0 clever_benz[100967]:         {
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "devices": [
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "/dev/loop4"
Nov 29 07:14:18 compute-0 clever_benz[100967]:             ],
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_name": "ceph_lv1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_size": "21470642176",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "name": "ceph_lv1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "tags": {
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.crush_device_class": "",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.encrypted": "0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osd_id": "1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.type": "block",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.vdo": "0"
Nov 29 07:14:18 compute-0 clever_benz[100967]:             },
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "type": "block",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "vg_name": "ceph_vg1"
Nov 29 07:14:18 compute-0 clever_benz[100967]:         }
Nov 29 07:14:18 compute-0 clever_benz[100967]:     ],
Nov 29 07:14:18 compute-0 clever_benz[100967]:     "2": [
Nov 29 07:14:18 compute-0 clever_benz[100967]:         {
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "devices": [
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "/dev/loop5"
Nov 29 07:14:18 compute-0 clever_benz[100967]:             ],
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_name": "ceph_lv2",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_size": "21470642176",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "name": "ceph_lv2",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "tags": {
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.crush_device_class": "",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.encrypted": "0",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osd_id": "2",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.type": "block",
Nov 29 07:14:18 compute-0 clever_benz[100967]:                 "ceph.vdo": "0"
Nov 29 07:14:18 compute-0 clever_benz[100967]:             },
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "type": "block",
Nov 29 07:14:18 compute-0 clever_benz[100967]:             "vg_name": "ceph_vg2"
Nov 29 07:14:18 compute-0 clever_benz[100967]:         }
Nov 29 07:14:18 compute-0 clever_benz[100967]:     ]
Nov 29 07:14:18 compute-0 clever_benz[100967]: }
Nov 29 07:14:18 compute-0 systemd[1]: libpod-0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea.scope: Deactivated successfully.
Nov 29 07:14:18 compute-0 podman[100951]: 2025-11-29 07:14:18.442070072 +0000 UTC m=+1.003697492 container died 0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-39a15a82977c97f993f0b6b00082c5671587f926112612daa1a47b031604b733-merged.mount: Deactivated successfully.
Nov 29 07:14:18 compute-0 podman[100951]: 2025-11-29 07:14:18.505543546 +0000 UTC m=+1.067170966 container remove 0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:18 compute-0 systemd[1]: libpod-conmon-0ad83bfb3cfb5a359934b27096a88c292be5fa71916be71b286aa822f6d809ea.scope: Deactivated successfully.
Nov 29 07:14:18 compute-0 sudo[100735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:18 compute-0 sudo[101034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:18 compute-0 sudo[101034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:18 compute-0 sudo[101034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:18 compute-0 sudo[101059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:18 compute-0 sudo[101059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:18 compute-0 sudo[101059]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:18 compute-0 sudo[101084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:18 compute-0 sudo[101084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:18 compute-0 sudo[101084]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:18 compute-0 sudo[101109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:14:18 compute-0 sudo[101109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:19 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:19 compute-0 thirsty_euclid[101016]: 
Nov 29 07:14:19 compute-0 thirsty_euclid[101016]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 07:14:19 compute-0 systemd[1]: libpod-3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c.scope: Deactivated successfully.
Nov 29 07:14:19 compute-0 podman[100998]: 2025-11-29 07:14:19.044662024 +0000 UTC m=+0.798346372 container died 3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c (image=quay.io/ceph/ceph:v18, name=thirsty_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db88cdff1dd3aa63ae4f737aa05bd0f5a8aeeaf53516bec19c379f96eda4be3-merged.mount: Deactivated successfully.
Nov 29 07:14:19 compute-0 podman[100998]: 2025-11-29 07:14:19.08831522 +0000 UTC m=+0.841999558 container remove 3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c (image=quay.io/ceph/ceph:v18, name=thirsty_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:14:19 compute-0 systemd[1]: libpod-conmon-3b80d57af10daee235ae510439777a712e21de6bf31329415d7a471a8e4ac90c.scope: Deactivated successfully.
Nov 29 07:14:19 compute-0 podman[101194]: 2025-11-29 07:14:19.093782978 +0000 UTC m=+0.051387047 container create f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:14:19 compute-0 sudo[100995]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:19 compute-0 systemd[1]: Started libpod-conmon-f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912.scope.
Nov 29 07:14:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:19 compute-0 podman[101194]: 2025-11-29 07:14:19.158916419 +0000 UTC m=+0.116520508 container init f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:14:19 compute-0 podman[101194]: 2025-11-29 07:14:19.069125269 +0000 UTC m=+0.026729368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:19 compute-0 podman[101194]: 2025-11-29 07:14:19.165388584 +0000 UTC m=+0.122992663 container start f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:19 compute-0 podman[101194]: 2025-11-29 07:14:19.168476278 +0000 UTC m=+0.126080347 container attach f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 29 07:14:19 compute-0 zen_morse[101219]: 167 167
Nov 29 07:14:19 compute-0 systemd[1]: libpod-f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912.scope: Deactivated successfully.
Nov 29 07:14:19 compute-0 podman[101194]: 2025-11-29 07:14:19.16965706 +0000 UTC m=+0.127261139 container died f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c8de6710df8827412d5ff68a90e6a81e5ae5b06b1b2289f1cfc65e386d27e74-merged.mount: Deactivated successfully.
Nov 29 07:14:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 07:14:19 compute-0 podman[101194]: 2025-11-29 07:14:19.20459677 +0000 UTC m=+0.162200839 container remove f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:14:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 07:14:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 07:14:19 compute-0 systemd[1]: libpod-conmon-f715e336799cdc27ad4bd2c222c237fba3380a6ad979e60d6acc232277e86912.scope: Deactivated successfully.
Nov 29 07:14:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 29 07:14:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 07:14:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:14:19 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:19 compute-0 ceph-mon[75050]: 3.3 scrub starts
Nov 29 07:14:19 compute-0 ceph-mon[75050]: 3.3 scrub ok
Nov 29 07:14:19 compute-0 ceph-mon[75050]: osdmap e40: 3 total, 3 up, 3 in
Nov 29 07:14:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 07:14:19 compute-0 ceph-mon[75050]: 3.4 scrub starts
Nov 29 07:14:19 compute-0 ceph-mon[75050]: 3.4 scrub ok
Nov 29 07:14:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 07:14:19 compute-0 ceph-mon[75050]: osdmap e41: 3 total, 3 up, 3 in
Nov 29 07:14:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 07:14:19 compute-0 podman[101243]: 2025-11-29 07:14:19.355719596 +0000 UTC m=+0.040901153 container create 08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:14:19 compute-0 systemd[1]: Started libpod-conmon-08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b.scope.
Nov 29 07:14:19 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 29 07:14:19 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 29 07:14:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36182dd576c222c08fad9e20990ca1a9f897ffb4f1c8c6b63348e2a594dad48d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36182dd576c222c08fad9e20990ca1a9f897ffb4f1c8c6b63348e2a594dad48d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:19 compute-0 podman[101243]: 2025-11-29 07:14:19.337148411 +0000 UTC m=+0.022329998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36182dd576c222c08fad9e20990ca1a9f897ffb4f1c8c6b63348e2a594dad48d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36182dd576c222c08fad9e20990ca1a9f897ffb4f1c8c6b63348e2a594dad48d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v99: 181 pgs: 2 unknown, 179 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Nov 29 07:14:19 compute-0 podman[101243]: 2025-11-29 07:14:19.444062655 +0000 UTC m=+0.129244232 container init 08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Nov 29 07:14:19 compute-0 podman[101243]: 2025-11-29 07:14:19.452264258 +0000 UTC m=+0.137445805 container start 08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:14:19 compute-0 podman[101243]: 2025-11-29 07:14:19.455557399 +0000 UTC m=+0.140738956 container attach 08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Nov 29 07:14:19 compute-0 ansible-async_wrapper.py[100516]: Done in kid B.
Nov 29 07:14:19 compute-0 sudo[101287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixdhhopyyuzebdkhynzawwyfnncapjfl ; /usr/bin/python3'
Nov 29 07:14:19 compute-0 sudo[101287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:20 compute-0 python3[101289]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:20 compute-0 podman[101290]: 2025-11-29 07:14:20.125533602 +0000 UTC m=+0.041907480 container create b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4 (image=quay.io/ceph/ceph:v18, name=sleepy_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:14:20 compute-0 systemd[1]: Started libpod-conmon-b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4.scope.
Nov 29 07:14:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966ec3d7f4dbdc06782b8c2f27703b20aa3e85887140a6579501120f120d5eaa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966ec3d7f4dbdc06782b8c2f27703b20aa3e85887140a6579501120f120d5eaa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:20 compute-0 podman[101290]: 2025-11-29 07:14:20.107333047 +0000 UTC m=+0.023706955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 07:14:20 compute-0 podman[101290]: 2025-11-29 07:14:20.236138486 +0000 UTC m=+0.152512384 container init b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4 (image=quay.io/ceph/ceph:v18, name=sleepy_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:14:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 07:14:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 07:14:20 compute-0 podman[101290]: 2025-11-29 07:14:20.243008644 +0000 UTC m=+0.159382522 container start b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4 (image=quay.io/ceph/ceph:v18, name=sleepy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:14:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 29 07:14:20 compute-0 podman[101290]: 2025-11-29 07:14:20.249045228 +0000 UTC m=+0.165419136 container attach b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4 (image=quay.io/ceph/ceph:v18, name=sleepy_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:20 compute-0 ceph-mon[75050]: from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:20 compute-0 ceph-mon[75050]: pgmap v99: 181 pgs: 2 unknown, 179 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Nov 29 07:14:20 compute-0 ceph-mon[75050]: 2.3 deep-scrub starts
Nov 29 07:14:20 compute-0 ceph-mon[75050]: 2.3 deep-scrub ok
Nov 29 07:14:20 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 07:14:20 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 07:14:20 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 29 07:14:20 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 29 07:14:20 compute-0 pensive_fermat[101259]: {
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "osd_id": 2,
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "type": "bluestore"
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:     },
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "osd_id": 1,
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "type": "bluestore"
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:     },
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "osd_id": 0,
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:         "type": "bluestore"
Nov 29 07:14:20 compute-0 pensive_fermat[101259]:     }
Nov 29 07:14:20 compute-0 pensive_fermat[101259]: }
Nov 29 07:14:20 compute-0 systemd[1]: libpod-08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b.scope: Deactivated successfully.
Nov 29 07:14:20 compute-0 systemd[1]: libpod-08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b.scope: Consumed 1.125s CPU time.
Nov 29 07:14:20 compute-0 podman[101243]: 2025-11-29 07:14:20.588565122 +0000 UTC m=+1.273746689 container died 08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:14:20 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:20 compute-0 sleepy_lichterman[101305]: 
Nov 29 07:14:20 compute-0 sleepy_lichterman[101305]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 29 07:14:20 compute-0 systemd[1]: libpod-b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4.scope: Deactivated successfully.
Nov 29 07:14:21 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 07:14:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v101: 181 pgs: 1 unknown, 180 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 240 B/s rd, 480 B/s wr, 1 op/s
Nov 29 07:14:21 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 07:14:22 compute-0 ceph-mon[75050]: 3.5 deep-scrub starts
Nov 29 07:14:22 compute-0 ceph-mon[75050]: 3.5 deep-scrub ok
Nov 29 07:14:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1199245091' entity='client.rgw.rgw.compute-0.qxekyl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 07:14:22 compute-0 ceph-mon[75050]: osdmap e42: 3 total, 3 up, 3 in
Nov 29 07:14:22 compute-0 ceph-mon[75050]: 3.6 scrub starts
Nov 29 07:14:22 compute-0 ceph-mon[75050]: 2.4 scrub starts
Nov 29 07:14:22 compute-0 ceph-mon[75050]: 2.4 scrub ok
Nov 29 07:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-36182dd576c222c08fad9e20990ca1a9f897ffb4f1c8c6b63348e2a594dad48d-merged.mount: Deactivated successfully.
Nov 29 07:14:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v102: 181 pgs: 181 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 4.2 KiB/s wr, 16 op/s
Nov 29 07:14:24 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 07:14:25 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 29 07:14:25 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 29 07:14:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v103: 181 pgs: 181 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.2 KiB/s wr, 12 op/s
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 07:14:25 compute-0 podman[101243]: 2025-11-29 07:14:25.500507939 +0000 UTC m=+6.185689506 container remove 08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 07:14:25 compute-0 sudo[101109]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:25 compute-0 podman[101290]: 2025-11-29 07:14:25.532429717 +0000 UTC m=+5.448803605 container died b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4 (image=quay.io/ceph/ceph:v18, name=sleepy_lichterman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:25 compute-0 ceph-mon[75050]: 3.6 scrub ok
Nov 29 07:14:25 compute-0 ceph-mon[75050]: from='client.14265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:25 compute-0 ceph-mon[75050]: pgmap v101: 181 pgs: 1 unknown, 180 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 240 B/s rd, 480 B/s wr, 1 op/s
Nov 29 07:14:25 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 29 07:14:25 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 29 07:14:25 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event c62cea62-8724-4342-8caf-b52f2bad7035 (Global Recovery Event) in 15 seconds
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 07:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-966ec3d7f4dbdc06782b8c2f27703b20aa3e85887140a6579501120f120d5eaa-merged.mount: Deactivated successfully.
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:26 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 07:14:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.541311264s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.960708618s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.541243553s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.960708618s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.558216095s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.977706909s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.558074951s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.977684021s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.558049202s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.977684021s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.541109085s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.960754395s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.540990829s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.960754395s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.558155060s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.977706909s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.557684898s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.977546692s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.557671547s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.977554321s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.557662964s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.977546692s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.557650566s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.977554321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.557489395s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.977523804s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.557459831s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.977523804s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.545762062s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965843201s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.545742989s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965843201s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.555537224s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.976257324s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544848442s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965599060s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.555509567s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.976257324s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.555050850s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.976005554s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544820786s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965599060s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544547081s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965553284s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.555023193s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.976005554s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544516563s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965553284s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544445992s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965576172s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544424057s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965576172s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.554688454s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975997925s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.554394722s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975738525s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544569969s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965812683s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.554657936s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975997925s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.554374695s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975738525s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544426918s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965812683s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.554385185s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975875854s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.554365158s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975875854s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.553919792s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975578308s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.553898811s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975578308s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544205666s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965980530s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.544177055s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965980530s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543651581s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965560913s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543596268s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965560913s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.553174019s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975181580s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.553253174s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975395203s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.553231239s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975395203s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.553013802s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975181580s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543719292s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965995789s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543698311s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965995789s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543670654s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965988159s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552699089s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975059509s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552677155s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975059509s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552586555s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975059509s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543472290s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965995789s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543518066s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965988159s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543605804s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.966171265s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543440819s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965995789s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543593407s) [0] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.966171265s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552394867s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975067139s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552380562s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975067139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552050591s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.974861145s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543195724s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.966033936s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552031517s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.974861145s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.543168068s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.966033936s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552572250s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975059509s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552182198s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.975166321s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.552163124s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.975166321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.540719986s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.963813782s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.540707588s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.963813782s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542906761s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.966094971s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542835236s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.966041565s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542893410s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.966094971s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.551425934s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.974700928s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.551413536s) [1] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.974700928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542817116s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.966041565s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.540411949s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.963806152s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542290688s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.965835571s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542229652s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.965835571s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.540355682s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.963989258s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.540396690s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.963806152s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542350769s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.966125488s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542327881s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.966125488s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542318344s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.966178894s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542300224s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.966178894s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542144775s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 78.966117859s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=8.542124748s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.966117859s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.540315628s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.963989258s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.539555550s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 84.963829041s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:26 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=14.539523125s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.963829041s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v105: 181 pgs: 181 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.9 KiB/s wr, 11 op/s
Nov 29 07:14:27 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 07:14:28 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 29 07:14:28 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:28 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5798dd55-47c6-4512-90eb-0483daf895b2 does not exist
Nov 29 07:14:28 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 63af548f-49b8-44a2-9567-b9f72b7acd2b (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 07:14:28 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.586161613s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.381103516s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.586122513s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.381103516s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585832596s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380973816s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585807800s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380973816s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585733414s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.381011963s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585714340s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.381011963s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585554123s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380928040s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585542679s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380928040s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585487366s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380935669s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585449219s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380935669s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585391045s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380950928s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585374832s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380950928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585219383s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380874634s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.850418091s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646263123s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.850395203s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646263123s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584922791s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380867004s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.850172043s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646186829s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584823608s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380851746s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584874153s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380867004s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.850149155s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646186829s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584792137s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380851746s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.850033760s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646186829s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.850015640s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646186829s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 podman[101290]: 2025-11-29 07:14:28.864606532 +0000 UTC m=+8.780980410 container remove b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4 (image=quay.io/ceph/ceph:v18, name=sleepy_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584621429s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380874634s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849992752s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646247864s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584468842s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380737305s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584606171s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849958420s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646247864s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584433556s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380737305s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849712372s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646118164s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849694252s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646118164s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584068298s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380653381s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584051132s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380653381s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849487305s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646118164s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849616051s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646247864s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583934784s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380638123s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583915710s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380638123s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849534035s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646247864s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849414825s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646118164s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849124908s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.645957947s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.849102974s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.645957947s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583655357s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380538940s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584206581s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active+scrubbing pruub 98.381118774s@ [ 4.5:  ]  mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583434105s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380424500s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583415985s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380424500s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583627701s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380538940s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.584065437s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.381118774s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583336830s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.380554199s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.848856926s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.646125793s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.583302498s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380554199s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.848773956s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.646125793s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.848560333s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 93.645957947s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=35/37 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43 pruub=9.848533630s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.645957947s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.575595856s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.373100281s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.575562477s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.373100281s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.575531960s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.373077393s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.575446129s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active pruub 98.373062134s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.575428009s) [2] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.373062134s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.585202217s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.380874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43 pruub=14.575393677s) [1] r=-1 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.373077393s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bdhrqf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 07:14:28 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 29 07:14:28 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bdhrqf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[6.f( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[6.8( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.c( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 3.7 scrub starts
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 3.7 scrub ok
Nov 29 07:14:28 compute-0 ceph-mon[75050]: pgmap v102: 181 pgs: 181 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 4.2 KiB/s wr, 16 op/s
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 3.8 scrub starts
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 2.5 scrub starts
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 2.5 scrub ok
Nov 29 07:14:28 compute-0 ceph-mon[75050]: pgmap v103: 181 pgs: 181 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.2 KiB/s wr, 12 op/s
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 3.8 scrub ok
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 4.4 deep-scrub starts
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-mon[75050]: 4.4 deep-scrub ok
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:14:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:28 compute-0 ceph-mon[75050]: osdmap e43: 3 total, 3 up, 3 in
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.4( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.1( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[6.b( empty local-lis/les=0/0 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=0/0 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=0/0 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.773705482s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.288757324s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.773668289s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.288757324s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565914154s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.081123352s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565888405s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.081123352s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565740585s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.081069946s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565721512s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.081069946s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.904280663s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.419746399s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.904255867s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.419746399s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565492630s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.081146240s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565464020s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.081146240s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565202713s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.081016541s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.565179825s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.081016541s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.903910637s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.419868469s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.903882027s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.419868469s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564807892s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080963135s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564791679s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080963135s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564682961s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080947876s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564666748s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080947876s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.903498650s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.419845581s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.903480530s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.419845581s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564482689s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080940247s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564462662s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080940247s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564496994s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.081108093s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.564465523s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.081108093s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.903095245s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420021057s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.903061867s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420021057s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.902400970s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420013428s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.902347565s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420013428s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.902787209s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420516968s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.902267456s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420097351s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.902206421s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420097351s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.902658463s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420516968s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901991844s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420043945s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901930809s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420043945s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901943207s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420089722s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901920319s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420089722s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=0/0 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.562585831s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080963135s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.561919212s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080963135s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.562696457s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.081932068s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901098251s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420349121s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901104927s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420356750s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.562659264s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.081932068s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901068687s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420349121s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.901080132s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420356750s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.561504364s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080818176s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 sudo[101287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.561398506s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080780029s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.561417580s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080818176s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.561278343s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080810547s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.900565147s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420120239s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.561225891s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080810547s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.900483131s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420120239s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.560380936s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080543518s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.560351372s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080543518s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.592072487s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.112449646s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.592049599s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.112449646s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899834633s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420295715s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899809837s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420295715s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899784088s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420387268s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.559914589s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080551147s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899757385s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420387268s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.559894562s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080551147s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.559787750s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.080589294s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.559769630s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080589294s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899636269s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420494080s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899595261s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420486450s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899608612s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420494080s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549722672s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.070663452s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549702644s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.070663452s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899538994s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420486450s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899485588s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420509338s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.561374664s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.080780029s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549550056s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.070686340s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549479485s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.070617676s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549450874s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.070617676s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549323082s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active pruub 91.070632935s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549301147s) [0] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.070632935s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899314880s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420539856s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43 pruub=12.549434662s) [2] r=-1 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.070686340s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899107933s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420539856s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.899413109s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420509338s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.898747444s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 87.420524597s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:14:28 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=35/36 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=8.898723602s) [2] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.420524597s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=0/0 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:14:28 compute-0 systemd[1]: libpod-conmon-b7b4cfe308ff8e32cd5c2d7c5bf243110a493aa1d570e3d4e6787c96864cd5c4.scope: Deactivated successfully.
Nov 29 07:14:28 compute-0 systemd[1]: libpod-conmon-08adaf5af9e4508662f35629cd5685a6f76780a7ff6c8c4f6527bfb5c0e4e52b.scope: Deactivated successfully.
Nov 29 07:14:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bdhrqf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 07:14:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:29 compute-0 ceph-mgr[75345]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bdhrqf on compute-0
Nov 29 07:14:29 compute-0 ceph-mgr[75345]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bdhrqf on compute-0
Nov 29 07:14:29 compute-0 radosgw[99623]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 07:14:29 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-rgw-rgw-compute-0-qxekyl[99619]: 2025-11-29T07:14:29.034+0000 7f0fe4bbb940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 07:14:29 compute-0 radosgw[99623]: framework: beast
Nov 29 07:14:29 compute-0 radosgw[99623]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 07:14:29 compute-0 radosgw[99623]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 07:14:29 compute-0 radosgw[99623]: starting handler: beast
Nov 29 07:14:29 compute-0 radosgw[99623]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:14:29 compute-0 sudo[101400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:29 compute-0 sudo[101400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:29 compute-0 sudo[101400]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:29 compute-0 radosgw[99623]: mgrc service_daemon_register rgw.14261 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.qxekyl,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864328,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=7ba03397-1943-4fb1-bfd5-2752d9c78a7d,zone_name=default,zonegroup_id=529147a5-95c6-4fc0-8a5f-d5e8d4efa3ee,zonegroup_name=default}
Nov 29 07:14:29 compute-0 sudo[101871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:29 compute-0 sudo[101871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:29 compute-0 sudo[101871]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:29 compute-0 sudo[101977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:29 compute-0 sudo[101977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:29 compute-0 sudo[101977]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:29 compute-0 sudo[102002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1
Nov 29 07:14:29 compute-0 sudo[102002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v106: 181 pgs: 181 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.0 KiB/s wr, 12 op/s
Nov 29 07:14:29 compute-0 podman[102069]: 2025-11-29 07:14:29.555601667 +0000 UTC m=+0.020559769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 07:14:29 compute-0 sudo[102106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arcbhskivhvcbjtgsikzyxynoocrjdlp ; /usr/bin/python3'
Nov 29 07:14:29 compute-0 podman[102069]: 2025-11-29 07:14:29.725262256 +0000 UTC m=+0.190220338 container create 0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:29 compute-0 sudo[102106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 07:14:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[6.f( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [0] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [0] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[6.8( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [2] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=31/13 lis/c=31/31 les/c/f=32/32/0 sis=43) [2] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=35/19 lis/c=35/35 les/c/f=36/36/0 sis=43) [2] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 systemd[1]: Started libpod-conmon-0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d.scope.
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.d( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.4( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.1( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.b( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[6.c( empty local-lis/les=43/44 n=0 ec=35/17 lis/c=35/35 les/c/f=37/37/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=33/16 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=31/11 lis/c=31/31 les/c/f=32/32/0 sis=43) [1] r=0 lpr=43 pi=[31,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=33/14 lis/c=33/33 les/c/f=34/34/0 sis=43) [1] r=0 lpr=43 pi=[33,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:14:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:29 compute-0 podman[102069]: 2025-11-29 07:14:29.81812198 +0000 UTC m=+0.283080062 container init 0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:29 compute-0 podman[102069]: 2025-11-29 07:14:29.825150201 +0000 UTC m=+0.290108273 container start 0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:14:29 compute-0 hungry_elion[102111]: 167 167
Nov 29 07:14:29 compute-0 systemd[1]: libpod-0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d.scope: Deactivated successfully.
Nov 29 07:14:29 compute-0 podman[102069]: 2025-11-29 07:14:29.832637764 +0000 UTC m=+0.297595846 container attach 0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:29 compute-0 podman[102069]: 2025-11-29 07:14:29.833474527 +0000 UTC m=+0.298432619 container died 0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-88240cc2e78bdf65c89408b6ee3fd508e88bb4a5927747a0fb7b0505756f5019-merged.mount: Deactivated successfully.
Nov 29 07:14:29 compute-0 python3[102108]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:29 compute-0 ceph-mon[75050]: pgmap v105: 181 pgs: 181 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.9 KiB/s wr, 11 op/s
Nov 29 07:14:29 compute-0 ceph-mon[75050]: 4.5 scrub starts
Nov 29 07:14:29 compute-0 ceph-mon[75050]: 2.c scrub starts
Nov 29 07:14:29 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:29 compute-0 ceph-mon[75050]: 4.6 scrub starts
Nov 29 07:14:29 compute-0 ceph-mon[75050]: 4.6 scrub ok
Nov 29 07:14:29 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bdhrqf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 07:14:29 compute-0 ceph-mon[75050]: 2.c scrub ok
Nov 29 07:14:29 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bdhrqf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 07:14:29 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:29 compute-0 ceph-mon[75050]: Deploying daemon mds.cephfs.compute-0.bdhrqf on compute-0
Nov 29 07:14:29 compute-0 ceph-mon[75050]: pgmap v106: 181 pgs: 181 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 3.0 KiB/s wr, 12 op/s
Nov 29 07:14:29 compute-0 ceph-mon[75050]: osdmap e44: 3 total, 3 up, 3 in
Nov 29 07:14:29 compute-0 podman[102069]: 2025-11-29 07:14:29.893004924 +0000 UTC m=+0.357963026 container remove 0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:14:29 compute-0 systemd[1]: libpod-conmon-0d0f3cf28296e13d4a7d321805e82edad5673f42c28f211efdd0d31a3dce3e7d.scope: Deactivated successfully.
Nov 29 07:14:29 compute-0 podman[102130]: 2025-11-29 07:14:29.952189422 +0000 UTC m=+0.048758885 container create 14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:14:29 compute-0 systemd[1]: Reloading.
Nov 29 07:14:30 compute-0 podman[102130]: 2025-11-29 07:14:29.927186464 +0000 UTC m=+0.023755977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:30 compute-0 systemd-sysv-generator[102178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:14:30 compute-0 systemd-rc-local-generator[102175]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:14:30 compute-0 systemd[1]: Started libpod-conmon-14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b.scope.
Nov 29 07:14:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdcd99e159d3534b0a5a7d8638f45d5aaf673edda6c0b285a321dad694b9ab8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdcd99e159d3534b0a5a7d8638f45d5aaf673edda6c0b285a321dad694b9ab8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:30 compute-0 systemd[1]: Reloading.
Nov 29 07:14:30 compute-0 podman[102130]: 2025-11-29 07:14:30.352886949 +0000 UTC m=+0.449456432 container init 14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b (image=quay.io/ceph/ceph:v18, name=gallant_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:14:30 compute-0 podman[102130]: 2025-11-29 07:14:30.360090365 +0000 UTC m=+0.456659838 container start 14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b (image=quay.io/ceph/ceph:v18, name=gallant_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:14:30 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 07:14:30 compute-0 podman[102130]: 2025-11-29 07:14:30.385335891 +0000 UTC m=+0.481905364 container attach 14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 29 07:14:30 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 07:14:30 compute-0 systemd-rc-local-generator[102214]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:14:30 compute-0 systemd-sysv-generator[102220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:14:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:30 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.bdhrqf for 14ff1f30-5059-58f1-9a23-69871bb275a1...
Nov 29 07:14:30 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 29 07:14:30 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 29 07:14:30 compute-0 podman[102297]: 2025-11-29 07:14:30.873457683 +0000 UTC m=+0.049576487 container create f32e02240e8a83659f9e6ca6aa769bfab0c78f2322374b714daa8a8d1689a511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mds-cephfs-compute-0-bdhrqf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:30 compute-0 ceph-mgr[75345]: [progress INFO root] Writing back 11 completed events
Nov 29 07:14:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:14:30 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:30 compute-0 ceph-mon[75050]: 2.e scrub starts
Nov 29 07:14:30 compute-0 ceph-mon[75050]: 2.e scrub ok
Nov 29 07:14:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e95e9e3a2d7f55007da2144b93d330265e6d420a89b2b432a886c9865c6cd8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e95e9e3a2d7f55007da2144b93d330265e6d420a89b2b432a886c9865c6cd8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e95e9e3a2d7f55007da2144b93d330265e6d420a89b2b432a886c9865c6cd8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e95e9e3a2d7f55007da2144b93d330265e6d420a89b2b432a886c9865c6cd8f/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bdhrqf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:30 compute-0 podman[102297]: 2025-11-29 07:14:30.848661509 +0000 UTC m=+0.024780343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:30 compute-0 podman[102297]: 2025-11-29 07:14:30.965458873 +0000 UTC m=+0.141577707 container init f32e02240e8a83659f9e6ca6aa769bfab0c78f2322374b714daa8a8d1689a511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mds-cephfs-compute-0-bdhrqf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:14:30 compute-0 podman[102297]: 2025-11-29 07:14:30.971043045 +0000 UTC m=+0.147161859 container start f32e02240e8a83659f9e6ca6aa769bfab0c78f2322374b714daa8a8d1689a511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mds-cephfs-compute-0-bdhrqf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:14:30 compute-0 bash[102297]: f32e02240e8a83659f9e6ca6aa769bfab0c78f2322374b714daa8a8d1689a511
Nov 29 07:14:30 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.bdhrqf for 14ff1f30-5059-58f1-9a23-69871bb275a1.
Nov 29 07:14:31 compute-0 ceph-mds[102316]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 07:14:31 compute-0 ceph-mds[102316]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 07:14:31 compute-0 ceph-mds[102316]: main not setting numa affinity
Nov 29 07:14:31 compute-0 ceph-mds[102316]: pidfile_write: ignore empty --pid-file
Nov 29 07:14:31 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mds-cephfs-compute-0-bdhrqf[102312]: starting mds.cephfs.compute-0.bdhrqf at 
Nov 29 07:14:31 compute-0 sudo[102002]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:31 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf Updating MDS map to version 2 from mon.0
Nov 29 07:14:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:31 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:14:31 compute-0 gallant_shirley[102185]: 
Nov 29 07:14:31 compute-0 gallant_shirley[102185]: [{"container_id": "7eb39cf0035c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.43%", "created": "2025-11-29T07:12:33.835768Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T07:12:33.906451Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:12.418352Z", "memory_usage": 11597250, "ports": [], "service_name": "crash", "started": "2025-11-29T07:12:33.744736Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.bdhrqf", "daemon_name": "mds.cephfs.compute-0.bdhrqf", "daemon_type": "mds", "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "cf5b754473e0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "27.75%", "created": "2025-11-29T07:11:19.378498Z", "daemon_id": "compute-0.kzdpag", "daemon_name": "mgr.compute-0.kzdpag", "daemon_type": "mgr", "events": ["2025-11-29T07:12:39.078224Z daemon:mgr.compute-0.kzdpag [INFO] \"Reconfigured mgr.compute-0.kzdpag on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:12.418250Z", "memory_usage": 548929536, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T07:11:19.282918Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mgr.compute-0.kzdpag", "version": "18.2.7"}, {"container_id": "21a56ae912cb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.04%", "created": "2025-11-29T07:11:14.023354Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T07:12:38.228425Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:12.418125Z", "memory_request": 2147483648, "memory_usage": 39457914, "ports": [], "service_name": "mon", "started": "2025-11-29T07:11:16.897310Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@mon.compute-0", "version": "18.2.7"}, {"container_id": "9e203bb20123", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.52%", "created": "2025-11-29T07:13:03.835136Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-29T07:13:03.881208Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:12.418442Z", "memory_request": 4294967296, "memory_usage": 58248396, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:13:03.749990Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@osd.0", "version": "18.2.7"}, {"container_id": "6a5fc11573d1", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.68%", "created": "2025-11-29T07:13:08.980371Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-29T07:13:09.100022Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:12.418533Z", "memory_request": 4294967296, "memory_usage": 60030976, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:13:08.760503Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@osd.1", "version": "18.2.7"}, {"container_id": "2e6c1ee4769a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.63%", "created": "2025-11-29T07:13:14.613628Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-29T07:13:14.704791Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:12.418620Z", "memory_request": 4294967296, "memory_usage": 63721963, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T07:13:14.445172Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@osd.2", "version": "18.2.7"}, {"container_id": "4edd4e145dee", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "9.85%", "created": "2025-11-29T07:14:10.920254Z", "daemon_id": "rgw.compute-0.qxekyl", "daemon_name": "rgw.rgw.compute-0.qxekyl", "daemon_type": "rgw", "events": ["2025-11-29T07:14:10.971664Z daemon:rgw.rgw.compute-0.qxekyl [INFO] \"Deployed rgw.rgw.compute-0.qxekyl on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-29T07:14:12.418702Z", "memory_usage": 18004049, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-29T07:14:10.818199Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-14ff1f30-5059-58f1-9a23-69871bb275a1@rgw.rgw.compute-0.qxekyl", "version": "18.2.7"}]
Nov 29 07:14:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:31 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 63af548f-49b8-44a2-9567-b9f72b7acd2b (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 07:14:31 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 63af548f-49b8-44a2-9567-b9f72b7acd2b (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 29 07:14:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 07:14:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 07:14:31 compute-0 systemd[1]: libpod-14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b.scope: Deactivated successfully.
Nov 29 07:14:31 compute-0 podman[102130]: 2025-11-29 07:14:31.06360641 +0000 UTC m=+1.160175933 container died 14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdcd99e159d3534b0a5a7d8638f45d5aaf673edda6c0b285a321dad694b9ab8b-merged.mount: Deactivated successfully.
Nov 29 07:14:31 compute-0 sudo[102338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:31 compute-0 sudo[102338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:31 compute-0 podman[102130]: 2025-11-29 07:14:31.134600299 +0000 UTC m=+1.231169762 container remove 14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b (image=quay.io/ceph/ceph:v18, name=gallant_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:14:31 compute-0 sudo[102338]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:31 compute-0 systemd[1]: libpod-conmon-14251513d5e8ce4a1a48c4044ab76a369638c9b7988ac7024dd125e30e0ace8b.scope: Deactivated successfully.
Nov 29 07:14:31 compute-0 sudo[102106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:31 compute-0 sudo[102376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:14:31 compute-0 sudo[102376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:31 compute-0 sudo[102376]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:31 compute-0 sudo[102401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:31 compute-0 sudo[102401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:31 compute-0 sudo[102401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:31 compute-0 sudo[102426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:31 compute-0 sudo[102426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:31 compute-0 sudo[102426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:31 compute-0 sudo[102451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:31 compute-0 sudo[102451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:31 compute-0 sudo[102451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:31 compute-0 sudo[102476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:14:31 compute-0 sudo[102476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v108: 181 pgs: 181 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 639 B/s wr, 2 op/s
Nov 29 07:14:31 compute-0 podman[102569]: 2025-11-29 07:14:31.836625313 +0000 UTC m=+0.059671372 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:14:31 compute-0 podman[102569]: 2025-11-29 07:14:31.925316373 +0000 UTC m=+0.148362432 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:14:32 compute-0 sudo[102653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdctmmxpljebuykxfkbgrhvcmrjyxiio ; /usr/bin/python3'
Nov 29 07:14:32 compute-0 sudo[102653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:32 compute-0 python3[102662]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:32 compute-0 podman[102693]: 2025-11-29 07:14:32.25057206 +0000 UTC m=+0.020153679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:32 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 29 07:14:32 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 29 07:14:32 compute-0 podman[102693]: 2025-11-29 07:14:32.708580749 +0000 UTC m=+0.478162358 container create d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702 (image=quay.io/ceph/ceph:v18, name=stoic_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:14:32 compute-0 ceph-mon[75050]: 4.b scrub starts
Nov 29 07:14:32 compute-0 ceph-mon[75050]: 4.b scrub ok
Nov 29 07:14:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mon[75050]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 07:14:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mon[75050]: pgmap v108: 181 pgs: 181 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 639 B/s wr, 2 op/s
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e3 new map
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:13:52.990561+0000
                                           modified        2025-11-29T07:13:52.990604+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.bdhrqf{-1:14269} state up:standby seq 1 addr [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] compat {c=[1],r=[1],i=[7ff]}]
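The print_map dump above (epoch e3) shows the freshly started MDS parked as a standby; epochs e4 and e5 further down promote it through up:creating to up:active. A purely illustrative sketch of the state sequence this log records (not Ceph source):

    from typing import Optional

    # The MDS states observed in this journal, in the order they occur for
    # the brand-new rank 0 of filesystem 'cephfs'.
    OBSERVED_TRANSITIONS = [
        ("up:boot", "up:standby"),      # e3: monitors park the daemon as a standby
        ("up:standby", "up:creating"),  # e4: standby assigned as mds.0 of 'cephfs'
        ("up:creating", "up:active"),   # e5: creating_done, rank 0 goes active
    ]

    def next_state(state: str) -> Optional[str]:
        """Return the follow-up state observed in this log, if any."""
        for src, dst in OBSERVED_TRANSITIONS:
            if src == state:
                return dst
        return None

    assert next_state("up:standby") == "up:creating"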
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf Updating MDS map to version 3 from mon.0
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf Monitors have assigned me to become a standby.
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] up:boot
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] as mds.0
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bdhrqf assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bdhrqf"} v 0) v1
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bdhrqf"}]: dispatch
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e4 new map
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:13:52.990561+0000
                                           modified        2025-11-29T07:14:32.737141+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14269}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.bdhrqf{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bdhrqf=up:creating}
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf Updating MDS map to version 4 from mon.0
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x1
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x100
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x600
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x601
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x602
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x603
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x604
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x605
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x606
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x607
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x608
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.cache creating system inode with ino:0x609
Nov 29 07:14:32 compute-0 systemd[1]: Started libpod-conmon-d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702.scope.
Nov 29 07:14:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/442e6caf7f461601921833d4bbcd137d379e3644955177c4a3077056149a60d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/442e6caf7f461601921833d4bbcd137d379e3644955177c4a3077056149a60d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:32 compute-0 podman[102693]: 2025-11-29 07:14:32.804519868 +0000 UTC m=+0.574101497 container init d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702 (image=quay.io/ceph/ceph:v18, name=stoic_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:14:32 compute-0 ceph-mds[102316]: mds.0.4 creating_done
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bdhrqf is now active in filesystem cephfs as rank 0
Nov 29 07:14:32 compute-0 podman[102693]: 2025-11-29 07:14:32.814777809 +0000 UTC m=+0.584359448 container start d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702 (image=quay.io/ceph/ceph:v18, name=stoic_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:14:32 compute-0 podman[102693]: 2025-11-29 07:14:32.818726908 +0000 UTC m=+0.588308517 container attach d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702 (image=quay.io/ceph/ceph:v18, name=stoic_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:32 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Nov 29 07:14:32 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Nov 29 07:14:32 compute-0 sudo[102476]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:14:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:32 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 91bbbd22-2e03-4e40-b977-783182ee0412 does not exist
Nov 29 07:14:32 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f4a7cc70-9a94-432a-9589-4f15a3ea77ff does not exist
Nov 29 07:14:32 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ff08ddb8-036b-427e-b64e-b001051acef6 does not exist
Nov 29 07:14:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:14:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:14:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:33 compute-0 sudo[102779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:33 compute-0 sudo[102779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:33 compute-0 sudo[102779]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:33 compute-0 sudo[102804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:33 compute-0 sudo[102804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:33 compute-0 sudo[102804]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:33 compute-0 sudo[102829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:33 compute-0 sudo[102829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:33 compute-0 sudo[102829]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:33 compute-0 sudo[102873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:14:33 compute-0 sudo[102873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
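The command above is cephadm wrapping `ceph-volume lvm batch` in the ceph container to prepare the three pre-made LVs as OSDs. A hedged reconstruction of that invocation as a Python subprocess call follows; every argument is copied from the logged command line, while the config JSON travels on stdin (the `-` after `--config-json`) and is not recorded in the journal, so a placeholder stands in for it:

    import subprocess

    # Illustrative reconstruction of the logged call; the real invocation is
    # made by the cephadm mgr module as user ceph-admin via sudo.
    cmd = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
        "--image", "quay.io/ceph/ceph@sha256:"
        "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "14ff1f30-5059-58f1-9a23-69871bb275a1",
        "--config-json", "-", "--",
        "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
        "--yes", "--no-systemd",
    ]
    config_json = "{}"  # placeholder; the real conf/keyring payload is not logged
    subprocess.run(cmd, input=config_json, text=True, check=True)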
Nov 29 07:14:33 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 29 07:14:33 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 29 07:14:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 07:14:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620401570' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:14:33 compute-0 stoic_vaughan[102741]: 
Nov 29 07:14:33 compute-0 stoic_vaughan[102741]: {"fsid":"14ff1f30-5059-58f1-9a23-69871bb275a1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":196,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1764400405,"num_in_osds":3,"osd_in_since":1764400372,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":181}],"num_pgs":181,"num_pools":11,"num_objects":44,"data_bytes":463928,"bytes_used":84443136,"bytes_avail":64327483392,"bytes_total":64411926528,"read_bytes_sec":0,"write_bytes_sec":639,"read_op_per_sec":1,"write_op_per_sec":1},"fsmap":{"epoch":4,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.bdhrqf","status":"up:creating","gid":14269}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":6,"modified":"2025-11-29T07:14:29.444803+0000","services":{"rgw":{"daemons":{"summary":"","14261":{"start_epoch":6,"start_stamp":"2025-11-29T07:14:29.107784+0000","gid":14261,"addr":"192.168.122.100:0/1199245091","metadata":{"arch":"x86_64","ceph_release":"reef","ceph_version":"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)","ceph_version_short":"18.2.7","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.qxekyl","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864328","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"7ba03397-1943-4fb1-bfd5-2752d9c78a7d","zone_name":"default","zonegroup_id":"529147a5-95c6-4fc0-8a5f-d5e8d4efa3ee","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
Nov 29 07:14:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v109: 181 pgs: 181 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.6 KiB/s wr, 127 op/s
Nov 29 07:14:33 compute-0 systemd[1]: libpod-d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702.scope: Deactivated successfully.
Nov 29 07:14:33 compute-0 podman[102927]: 2025-11-29 07:14:33.496403504 +0000 UTC m=+0.026176239 container died d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702 (image=quay.io/ceph/ceph:v18, name=stoic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 07:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-442e6caf7f461601921833d4bbcd137d379e3644955177c4a3077056149a60d7-merged.mount: Deactivated successfully.
Nov 29 07:14:33 compute-0 podman[102927]: 2025-11-29 07:14:33.560714386 +0000 UTC m=+0.090487101 container remove d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702 (image=quay.io/ceph/ceph:v18, name=stoic_vaughan, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:14:33 compute-0 systemd[1]: libpod-conmon-d915429ce134348f17e9382ab0e3b4d411649679147def5bb40e44ca70fda702.scope: Deactivated successfully.
Nov 29 07:14:33 compute-0 sudo[102653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:33 compute-0 podman[102956]: 2025-11-29 07:14:33.638454835 +0000 UTC m=+0.045634121 container create 11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:14:33 compute-0 systemd[1]: Started libpod-conmon-11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c.scope.
Nov 29 07:14:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:33 compute-0 podman[102956]: 2025-11-29 07:14:33.620375101 +0000 UTC m=+0.027554407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:33 compute-0 podman[102956]: 2025-11-29 07:14:33.72617951 +0000 UTC m=+0.133358806 container init 11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:14:33 compute-0 ceph-mon[75050]: 2.10 scrub starts
Nov 29 07:14:33 compute-0 ceph-mon[75050]: 2.10 scrub ok
Nov 29 07:14:33 compute-0 ceph-mon[75050]: mds.? [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] up:boot
Nov 29 07:14:33 compute-0 ceph-mon[75050]: daemon mds.cephfs.compute-0.bdhrqf assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 07:14:33 compute-0 ceph-mon[75050]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 07:14:33 compute-0 ceph-mon[75050]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 07:14:33 compute-0 ceph-mon[75050]: Cluster is now healthy
Nov 29 07:14:33 compute-0 ceph-mon[75050]: fsmap cephfs:0 1 up:standby
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bdhrqf"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: fsmap cephfs:1 {0=cephfs.compute-0.bdhrqf=up:creating}
Nov 29 07:14:33 compute-0 ceph-mon[75050]: daemon mds.cephfs.compute-0.bdhrqf is now active in filesystem cephfs as rank 0
Nov 29 07:14:33 compute-0 ceph-mon[75050]: 4.c deep-scrub starts
Nov 29 07:14:33 compute-0 ceph-mon[75050]: 4.c deep-scrub ok
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: 3.b scrub starts
Nov 29 07:14:33 compute-0 ceph-mon[75050]: 3.b scrub ok
Nov 29 07:14:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/620401570' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 07:14:33 compute-0 ceph-mon[75050]: pgmap v109: 181 pgs: 181 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.6 KiB/s wr, 127 op/s
Nov 29 07:14:33 compute-0 podman[102956]: 2025-11-29 07:14:33.736252405 +0000 UTC m=+0.143431691 container start 11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dirac, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:14:33 compute-0 podman[102956]: 2025-11-29 07:14:33.740491011 +0000 UTC m=+0.147670327 container attach 11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dirac, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:14:33 compute-0 compassionate_dirac[102973]: 167 167
Nov 29 07:14:33 compute-0 systemd[1]: libpod-11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c.scope: Deactivated successfully.
Nov 29 07:14:33 compute-0 podman[102956]: 2025-11-29 07:14:33.742880367 +0000 UTC m=+0.150059663 container died 11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dirac, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:14:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 new map
Nov 29 07:14:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T07:13:52.990561+0000
                                           modified        2025-11-29T07:14:33.742062+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14269}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.bdhrqf{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 07:14:33 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf Updating MDS map to version 5 from mon.0
Nov 29 07:14:33 compute-0 ceph-mds[102316]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 07:14:33 compute-0 ceph-mds[102316]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 29 07:14:33 compute-0 ceph-mds[102316]: mds.0.4 recovery_done -- successful recovery!
Nov 29 07:14:33 compute-0 ceph-mds[102316]: mds.0.4 active_start
Nov 29 07:14:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] up:active
Nov 29 07:14:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bdhrqf=up:active}
Nov 29 07:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9ea0fe33814e366c8a85d5a06c1b2a4d9e5515e957867c17ab29bfc99a319ef-merged.mount: Deactivated successfully.
Nov 29 07:14:33 compute-0 podman[102956]: 2025-11-29 07:14:33.896446094 +0000 UTC m=+0.303625380 container remove 11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:14:33 compute-0 systemd[1]: libpod-conmon-11f103a8fb2c4b101847ea5ef1cc81f2eefb289db6e06b3849123141959e1d7c.scope: Deactivated successfully.
Nov 29 07:14:34 compute-0 podman[102999]: 2025-11-29 07:14:34.102241142 +0000 UTC m=+0.087027905 container create 950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:14:34 compute-0 podman[102999]: 2025-11-29 07:14:34.038519356 +0000 UTC m=+0.023306129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:34 compute-0 systemd[1]: Started libpod-conmon-950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d.scope.
Nov 29 07:14:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c963498483f1f266726c0fb4b60f5491ac914ee7d609e0faf1a63c458ec548f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c963498483f1f266726c0fb4b60f5491ac914ee7d609e0faf1a63c458ec548f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c963498483f1f266726c0fb4b60f5491ac914ee7d609e0faf1a63c458ec548f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c963498483f1f266726c0fb4b60f5491ac914ee7d609e0faf1a63c458ec548f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c963498483f1f266726c0fb4b60f5491ac914ee7d609e0faf1a63c458ec548f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:34 compute-0 podman[102999]: 2025-11-29 07:14:34.210059916 +0000 UTC m=+0.194846689 container init 950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:14:34 compute-0 podman[102999]: 2025-11-29 07:14:34.21749212 +0000 UTC m=+0.202278863 container start 950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 07:14:34 compute-0 podman[102999]: 2025-11-29 07:14:34.221916611 +0000 UTC m=+0.206703384 container attach 950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:14:34 compute-0 sudo[103043]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqoxqtwilxpqqizxcadagfvhetwtiojl ; /usr/bin/python3'
Nov 29 07:14:34 compute-0 sudo[103043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:34 compute-0 python3[103045]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:14:34 compute-0 podman[103046]: 2025-11-29 07:14:34.688995317 +0000 UTC m=+0.046533565 container create b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2 (image=quay.io/ceph/ceph:v18, name=hopeful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:14:34 compute-0 systemd[1]: Started libpod-conmon-b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2.scope.
Nov 29 07:14:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:34 compute-0 podman[103046]: 2025-11-29 07:14:34.66683915 +0000 UTC m=+0.024377428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff1bfa99521b9cf6666b450f15c01239410f64fd52db5c2c24100663033dd3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff1bfa99521b9cf6666b450f15c01239410f64fd52db5c2c24100663033dd3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:34 compute-0 podman[103046]: 2025-11-29 07:14:34.78067386 +0000 UTC m=+0.138212148 container init b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2 (image=quay.io/ceph/ceph:v18, name=hopeful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:34 compute-0 podman[103046]: 2025-11-29 07:14:34.793070819 +0000 UTC m=+0.150609067 container start b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2 (image=quay.io/ceph/ceph:v18, name=hopeful_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:34 compute-0 podman[103046]: 2025-11-29 07:14:34.798096487 +0000 UTC m=+0.155634775 container attach b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2 (image=quay.io/ceph/ceph:v18, name=hopeful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:34 compute-0 ceph-mon[75050]: mds.? [v2:192.168.122.100:6814/653789432,v1:192.168.122.100:6815/653789432] up:active
Nov 29 07:14:34 compute-0 ceph-mon[75050]: fsmap cephfs:1 {0=cephfs.compute-0.bdhrqf=up:active}
Nov 29 07:14:35 compute-0 tender_dubinsky[103015]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:14:35 compute-0 tender_dubinsky[103015]: --> relative data size: 1.0
Nov 29 07:14:35 compute-0 tender_dubinsky[103015]: --> All data devices are unavailable
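"All data devices are unavailable" here most likely means ceph-volume rejected the three LVs because they already carry the OSDs created earlier in this run (the status output above reports num_osds 3, all up and in). One way to confirm, sketched under the assumption that ceph-volume's JSON listing and its usual field names are available inside the ceph container:

    import json
    import subprocess

    # 'ceph-volume lvm list' reports LVs that already belong to OSDs; run it
    # inside the ceph container as cephadm does (shown bare here for brevity).
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    for osd_id, devices in json.loads(out.stdout).items():
        for dev in devices:
            # 'lv_path' and the 'ceph.osd_fsid' tag are assumed field names
            print(osd_id, dev["lv_path"], dev["tags"].get("ceph.osd_fsid"))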
Nov 29 07:14:35 compute-0 systemd[1]: libpod-950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d.scope: Deactivated successfully.
Nov 29 07:14:35 compute-0 systemd[1]: libpod-950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d.scope: Consumed 1.047s CPU time.
Nov 29 07:14:35 compute-0 podman[102999]: 2025-11-29 07:14:35.321562779 +0000 UTC m=+1.306349522 container died 950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 07:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c963498483f1f266726c0fb4b60f5491ac914ee7d609e0faf1a63c458ec548f-merged.mount: Deactivated successfully.
Nov 29 07:14:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 07:14:35 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285908314' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:14:35 compute-0 hopeful_wing[103061]: 
Nov 29 07:14:35 compute-0 hopeful_wing[103061]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.qxekyl","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
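
The JSON array above is the output of a `ceph config dump --format json` run inside a one-shot client container (the corresponding `config dump` dispatch appears a second later, at 07:14:36). A sketch of how to reproduce it on the admin node and filter just the Keystone-facing RGW options; the `cephadm shell` wrapper and the presence of jq are assumptions, not shown in this log:

# Dump the cluster configuration as JSON and keep only rgw_keystone_* options.
# Requires the admin keyring on this host; jq assumed installed.
sudo cephadm shell -- ceph config dump --format json \
  | jq '[.[] | select(.name | startswith("rgw_keystone")) | {name, value}]'
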
Nov 29 07:14:35 compute-0 systemd[1]: libpod-b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2.scope: Deactivated successfully.
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v110: 181 pgs: 181 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 1.4 KiB/s wr, 113 op/s
Nov 29 07:14:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:14:35 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Nov 29 07:14:35 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Nov 29 07:14:35 compute-0 podman[102999]: 2025-11-29 07:14:35.885753766 +0000 UTC m=+1.870540509 container remove 950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:35 compute-0 systemd[1]: libpod-conmon-950188414e07cdfa525273d921dc8b06053ab8843cf615e94c52023b3046d77d.scope: Deactivated successfully.
Nov 29 07:14:35 compute-0 ceph-mgr[75345]: [progress INFO root] Writing back 12 completed events
Nov 29 07:14:35 compute-0 sudo[102873]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:14:35 compute-0 podman[103046]: 2025-11-29 07:14:35.970255561 +0000 UTC m=+1.327793809 container died b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2 (image=quay.io/ceph/ceph:v18, name=hopeful_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:14:35 compute-0 sudo[103136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:35 compute-0 sudo[103136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:35 compute-0 sudo[103136]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:36 compute-0 sudo[103161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:36 compute-0 sudo[103161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:36 compute-0 sudo[103161]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1285908314' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 07:14:36 compute-0 ceph-mon[75050]: pgmap v110: 181 pgs: 181 active+clean; 455 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 1.4 KiB/s wr, 113 op/s
Nov 29 07:14:36 compute-0 sudo[103186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:36 compute-0 sudo[103186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:36 compute-0 sudo[103186]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:36 compute-0 sudo[103211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:14:36 compute-0 sudo[103211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:36 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Nov 29 07:14:36 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Nov 29 07:14:36 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 07:14:36 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 07:14:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff1bfa99521b9cf6666b450f15c01239410f64fd52db5c2c24100663033dd3b-merged.mount: Deactivated successfully.
Nov 29 07:14:37 compute-0 ceph-mon[75050]: 4.15 deep-scrub starts
Nov 29 07:14:37 compute-0 ceph-mon[75050]: 4.15 deep-scrub ok
Nov 29 07:14:37 compute-0 ceph-mon[75050]: 4.16 scrub starts
Nov 29 07:14:37 compute-0 ceph-mon[75050]: 4.16 scrub ok
Nov 29 07:14:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v111: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 105 op/s
Nov 29 07:14:37 compute-0 podman[103123]: 2025-11-29 07:14:37.515791264 +0000 UTC m=+2.057699976 container remove b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2 (image=quay.io/ceph/ceph:v18, name=hopeful_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:14:37 compute-0 systemd[1]: libpod-conmon-b10bead229cdf0b9451190a4a0d9039164a68b5dd5e3afca39b682d9169b36b2.scope: Deactivated successfully.
Nov 29 07:14:37 compute-0 sudo[103043]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:37 compute-0 podman[103279]: 2025-11-29 07:14:37.729322295 +0000 UTC m=+0.039452452 container create 3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:14:37 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 07:14:37 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 07:14:37 compute-0 systemd[1]: Started libpod-conmon-3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4.scope.
Nov 29 07:14:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:37 compute-0 podman[103279]: 2025-11-29 07:14:37.709282145 +0000 UTC m=+0.019412322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:37 compute-0 podman[103279]: 2025-11-29 07:14:37.809356437 +0000 UTC m=+0.119486634 container init 3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:37 compute-0 podman[103279]: 2025-11-29 07:14:37.815480824 +0000 UTC m=+0.125610981 container start 3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:37 compute-0 clever_pare[103296]: 167 167
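
The `167 167` printed by this short-lived container is the uid/gid pair of the `ceph` user inside the image, which cephadm probes before deploying daemons so it can chown host directories to match. A hypothetical equivalent probe (the exact command cephadm ran is not captured in this excerpt):

# Print the owner uid/gid of /var/lib/ceph inside the Ceph image (expect "167 167").
podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph
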
Nov 29 07:14:37 compute-0 systemd[1]: libpod-3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4.scope: Deactivated successfully.
Nov 29 07:14:37 compute-0 podman[103279]: 2025-11-29 07:14:37.820180994 +0000 UTC m=+0.130311151 container attach 3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:14:37 compute-0 podman[103279]: 2025-11-29 07:14:37.821213211 +0000 UTC m=+0.131343368 container died 3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-267f4350ccc27e8655265fb831c1773d6717a47ae48cc84b4747c5e05ec89ba6-merged.mount: Deactivated successfully.
Nov 29 07:14:37 compute-0 podman[103279]: 2025-11-29 07:14:37.862193005 +0000 UTC m=+0.172323162 container remove 3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:37 compute-0 systemd[1]: libpod-conmon-3c712b9a207431d878da2dd44a0a1f113acb06efeea5c0412c8ed44c8d5611c4.scope: Deactivated successfully.
Nov 29 07:14:38 compute-0 podman[103319]: 2025-11-29 07:14:38.02407336 +0000 UTC m=+0.046044913 container create 6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:38 compute-0 systemd[1]: Started libpod-conmon-6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416.scope.
Nov 29 07:14:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87be90ecec720f9f594c7652e904e5863fae9a2ee098eb3e9d9dad1ceba6ff4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87be90ecec720f9f594c7652e904e5863fae9a2ee098eb3e9d9dad1ceba6ff4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87be90ecec720f9f594c7652e904e5863fae9a2ee098eb3e9d9dad1ceba6ff4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87be90ecec720f9f594c7652e904e5863fae9a2ee098eb3e9d9dad1ceba6ff4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:38 compute-0 podman[103319]: 2025-11-29 07:14:38.005710816 +0000 UTC m=+0.027682389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:38 compute-0 podman[103319]: 2025-11-29 07:14:38.106815476 +0000 UTC m=+0.128787049 container init 6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_margulis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:38 compute-0 podman[103319]: 2025-11-29 07:14:38.112889033 +0000 UTC m=+0.134860586 container start 6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:14:38 compute-0 podman[103319]: 2025-11-29 07:14:38.116555593 +0000 UTC m=+0.138527166 container attach 6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:38 compute-0 ceph-mon[75050]: 2.12 deep-scrub starts
Nov 29 07:14:38 compute-0 ceph-mon[75050]: 2.12 deep-scrub ok
Nov 29 07:14:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:38 compute-0 ceph-mon[75050]: pgmap v111: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.3 KiB/s wr, 105 op/s
Nov 29 07:14:38 compute-0 ceph-mon[75050]: 4.17 scrub starts
Nov 29 07:14:38 compute-0 ceph-mon[75050]: 4.17 scrub ok
Nov 29 07:14:38 compute-0 sudo[103364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqtyrslfhbcgumwwktbrolwctwjocfj ; /usr/bin/python3'
Nov 29 07:14:38 compute-0 sudo[103364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:38 compute-0 python3[103366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
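
The Ansible task above runs a one-shot `ceph` client in a throwaway container instead of relying on host-installed packages. Stripped of the Ansible wrapper, the equivalent manual invocation (all values taken from the logged command line) is:

# One-shot ceph CLI via the reef (v18) image; --net=host lets it reach the mons.
podman run --rm --net=host --ipc=host \
  --volume /etc/ceph:/etc/ceph:z \
  --entrypoint ceph quay.io/ceph/ceph:v18 \
  --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 \
  -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
  osd get-require-min-compat-client
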
Nov 29 07:14:38 compute-0 podman[103367]: 2025-11-29 07:14:38.679744363 +0000 UTC m=+0.040254943 container create 60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e (image=quay.io/ceph/ceph:v18, name=vigorous_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:14:38 compute-0 systemd[1]: Started libpod-conmon-60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e.scope.
Nov 29 07:14:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1103868766c87d36bcf2e37f50a84313457d636531b7a263c53c2c3656f46bf6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1103868766c87d36bcf2e37f50a84313457d636531b7a263c53c2c3656f46bf6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:38 compute-0 podman[103367]: 2025-11-29 07:14:38.661283847 +0000 UTC m=+0.021794437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:38 compute-0 podman[103367]: 2025-11-29 07:14:38.765456941 +0000 UTC m=+0.125967551 container init 60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e (image=quay.io/ceph/ceph:v18, name=vigorous_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:14:38 compute-0 podman[103367]: 2025-11-29 07:14:38.770840578 +0000 UTC m=+0.131351168 container start 60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e (image=quay.io/ceph/ceph:v18, name=vigorous_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:14:38 compute-0 podman[103367]: 2025-11-29 07:14:38.774246962 +0000 UTC m=+0.134757552 container attach 60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e (image=quay.io/ceph/ceph:v18, name=vigorous_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:38 compute-0 confident_margulis[103336]: {
Nov 29 07:14:38 compute-0 confident_margulis[103336]:     "0": [
Nov 29 07:14:38 compute-0 confident_margulis[103336]:         {
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "devices": [
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "/dev/loop3"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             ],
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_name": "ceph_lv0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_size": "21470642176",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "name": "ceph_lv0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "tags": {
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.crush_device_class": "",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.encrypted": "0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osd_id": "0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.type": "block",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.vdo": "0"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             },
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "type": "block",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "vg_name": "ceph_vg0"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:         }
Nov 29 07:14:38 compute-0 confident_margulis[103336]:     ],
Nov 29 07:14:38 compute-0 confident_margulis[103336]:     "1": [
Nov 29 07:14:38 compute-0 confident_margulis[103336]:         {
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "devices": [
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "/dev/loop4"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             ],
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_name": "ceph_lv1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_size": "21470642176",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "name": "ceph_lv1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "tags": {
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.crush_device_class": "",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.encrypted": "0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osd_id": "1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.type": "block",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.vdo": "0"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             },
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "type": "block",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "vg_name": "ceph_vg1"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:         }
Nov 29 07:14:38 compute-0 confident_margulis[103336]:     ],
Nov 29 07:14:38 compute-0 confident_margulis[103336]:     "2": [
Nov 29 07:14:38 compute-0 confident_margulis[103336]:         {
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "devices": [
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "/dev/loop5"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             ],
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_name": "ceph_lv2",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_size": "21470642176",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "name": "ceph_lv2",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "tags": {
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.crush_device_class": "",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.encrypted": "0",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osd_id": "2",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.type": "block",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:                 "ceph.vdo": "0"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             },
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "type": "block",
Nov 29 07:14:38 compute-0 confident_margulis[103336]:             "vg_name": "ceph_vg2"
Nov 29 07:14:38 compute-0 confident_margulis[103336]:         }
Nov 29 07:14:38 compute-0 confident_margulis[103336]:     ]
Nov 29 07:14:38 compute-0 confident_margulis[103336]: }
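
The JSON block above is `ceph-volume lvm list --format json` output: one entry per OSD id, each backed by a loop-device LV tagged with the cluster fsid and its own osd_fsid. A compact OSD-to-device summary can be derived from it; a sketch assuming jq is available:

# Map each OSD id to its backing device and LV path.
sudo cephadm ceph-volume -- lvm list --format json \
  | jq -r 'to_entries[] | "osd.\(.key)  dev=\(.value[0].devices[0])  lv=\(.value[0].lv_path)"'
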
Nov 29 07:14:38 compute-0 systemd[1]: libpod-6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416.scope: Deactivated successfully.
Nov 29 07:14:38 compute-0 podman[103319]: 2025-11-29 07:14:38.909900208 +0000 UTC m=+0.931871761 container died 6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_margulis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b87be90ecec720f9f594c7652e904e5863fae9a2ee098eb3e9d9dad1ceba6ff4-merged.mount: Deactivated successfully.
Nov 29 07:14:38 compute-0 podman[103319]: 2025-11-29 07:14:38.976508693 +0000 UTC m=+0.998480246 container remove 6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_margulis, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:38 compute-0 systemd[1]: libpod-conmon-6c3d883f3f49841da18e5e0970551b7f4c0c260ad43c8ee87aa13585ebb06416.scope: Deactivated successfully.
Nov 29 07:14:39 compute-0 sudo[103211]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:39 compute-0 sudo[103401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:39 compute-0 sudo[103401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:39 compute-0 sudo[103401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:39 compute-0 sudo[103426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:39 compute-0 sudo[103426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:39 compute-0 sudo[103426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:39 compute-0 sudo[103470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:39 compute-0 sudo[103470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:39 compute-0 sudo[103470]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:39 compute-0 sudo[103495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:14:39 compute-0 sudo[103495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 07:14:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438009325' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 07:14:39 compute-0 vigorous_kapitsa[103383]: mimic
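
The single word `mimic` is the reply to the `osd get-require-min-compat-client` query dispatched just above: the oldest client release this cluster will still accept. Raising it (for example before enabling features such as upmap, which needs luminous or newer clients) is a separate, explicit step:

# Query, then raise, the minimum client release the cluster will talk to.
ceph osd get-require-min-compat-client
ceph osd set-require-min-compat-client luminous
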
Nov 29 07:14:39 compute-0 systemd[1]: libpod-60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e.scope: Deactivated successfully.
Nov 29 07:14:39 compute-0 podman[103367]: 2025-11-29 07:14:39.35123786 +0000 UTC m=+0.711748450 container died 60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e (image=quay.io/ceph/ceph:v18, name=vigorous_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1103868766c87d36bcf2e37f50a84313457d636531b7a263c53c2c3656f46bf6-merged.mount: Deactivated successfully.
Nov 29 07:14:39 compute-0 podman[103367]: 2025-11-29 07:14:39.434214613 +0000 UTC m=+0.794725203 container remove 60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e (image=quay.io/ceph/ceph:v18, name=vigorous_kapitsa, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:14:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v112: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 KiB/s wr, 103 op/s
Nov 29 07:14:39 compute-0 sudo[103364]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:39 compute-0 systemd[1]: libpod-conmon-60eda4de789fb33485739d1ded65c3d845387ad3f4faf253ed3be67601c7fc8e.scope: Deactivated successfully.
Nov 29 07:14:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3438009325' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 07:14:39 compute-0 podman[103572]: 2025-11-29 07:14:39.599789339 +0000 UTC m=+0.044936332 container create 205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:39 compute-0 systemd[1]: Started libpod-conmon-205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2.scope.
Nov 29 07:14:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:39 compute-0 podman[103572]: 2025-11-29 07:14:39.676673016 +0000 UTC m=+0.121820029 container init 205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:39 compute-0 podman[103572]: 2025-11-29 07:14:39.580352287 +0000 UTC m=+0.025499310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:39 compute-0 podman[103572]: 2025-11-29 07:14:39.684082949 +0000 UTC m=+0.129229942 container start 205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:14:39 compute-0 podman[103572]: 2025-11-29 07:14:39.688180432 +0000 UTC m=+0.133327425 container attach 205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:39 compute-0 dazzling_saha[103588]: 167 167
Nov 29 07:14:39 compute-0 systemd[1]: libpod-205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2.scope: Deactivated successfully.
Nov 29 07:14:39 compute-0 podman[103572]: 2025-11-29 07:14:39.691261266 +0000 UTC m=+0.136408269 container died 205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-458895ae68eb5ded3253f6107d56322ac2cd98172bcc173718e4950f34b7bace-merged.mount: Deactivated successfully.
Nov 29 07:14:39 compute-0 podman[103572]: 2025-11-29 07:14:39.802648448 +0000 UTC m=+0.247795451 container remove 205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_saha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:39 compute-0 systemd[1]: libpod-conmon-205cd95293dc9fd342e88e4bce6e0067fe6199cef33099167c3c9a57383e63d2.scope: Deactivated successfully.
Nov 29 07:14:39 compute-0 podman[103615]: 2025-11-29 07:14:39.956914874 +0000 UTC m=+0.054721360 container create 32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:14:39 compute-0 systemd[1]: Started libpod-conmon-32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d.scope.
Nov 29 07:14:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:40 compute-0 podman[103615]: 2025-11-29 07:14:39.924240499 +0000 UTC m=+0.022047015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4f7c149fa026bcd08c48e51f6b54ca3732510ee1795ec60e6a8ba35b38f221/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4f7c149fa026bcd08c48e51f6b54ca3732510ee1795ec60e6a8ba35b38f221/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4f7c149fa026bcd08c48e51f6b54ca3732510ee1795ec60e6a8ba35b38f221/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4f7c149fa026bcd08c48e51f6b54ca3732510ee1795ec60e6a8ba35b38f221/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:40 compute-0 podman[103615]: 2025-11-29 07:14:40.030239073 +0000 UTC m=+0.128045589 container init 32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 29 07:14:40 compute-0 podman[103615]: 2025-11-29 07:14:40.0385178 +0000 UTC m=+0.136324296 container start 32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:40 compute-0 podman[103615]: 2025-11-29 07:14:40.062460075 +0000 UTC m=+0.160266591 container attach 32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:14:40 compute-0 sudo[103659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwjmzdvnzqotinvqfjdzbcnmnmdtxqc ; /usr/bin/python3'
Nov 29 07:14:40 compute-0 sudo[103659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:14:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:40 compute-0 ceph-mon[75050]: pgmap v112: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 KiB/s wr, 103 op/s
Nov 29 07:14:40 compute-0 python3[103661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
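
`ceph versions -f json` reports, per daemon type, how many daemons run each Ceph build; the playbook presumably uses it to confirm the cluster is on a uniform release before proceeding. Checking the `overall` section alone is usually enough for that; a sketch reusing the same containerized client (jq assumed):

# A single version string under .overall means every daemon runs the same build.
podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
  --entrypoint ceph quay.io/ceph/ceph:v18 versions -f json \
  | jq '.overall'
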
Nov 29 07:14:40 compute-0 podman[103662]: 2025-11-29 07:14:40.586335618 +0000 UTC m=+0.052270392 container create f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1 (image=quay.io/ceph/ceph:v18, name=pensive_varahamihira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:14:40 compute-0 systemd[1]: Started libpod-conmon-f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1.scope.
Nov 29 07:14:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1717c7e6ff55d1417f9dd1844ff190df2e78d45d34c8d2658bbc856f45aa25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:40 compute-0 podman[103662]: 2025-11-29 07:14:40.566257438 +0000 UTC m=+0.032192222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:14:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1717c7e6ff55d1417f9dd1844ff190df2e78d45d34c8d2658bbc856f45aa25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:40 compute-0 podman[103662]: 2025-11-29 07:14:40.673810065 +0000 UTC m=+0.139744859 container init f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1 (image=quay.io/ceph/ceph:v18, name=pensive_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:40 compute-0 podman[103662]: 2025-11-29 07:14:40.680371335 +0000 UTC m=+0.146306109 container start f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1 (image=quay.io/ceph/ceph:v18, name=pensive_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:14:40 compute-0 podman[103662]: 2025-11-29 07:14:40.696940329 +0000 UTC m=+0.162875193 container attach f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1 (image=quay.io/ceph/ceph:v18, name=pensive_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]: {
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "osd_id": 2,
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "type": "bluestore"
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:     },
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "osd_id": 1,
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "type": "bluestore"
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:     },
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "osd_id": 0,
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:         "type": "bluestore"
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]:     }
Nov 29 07:14:40 compute-0 thirsty_bohr[103631]: }
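
The JSON block printed above by the one-shot thirsty_bohr container is an OSD inventory keyed by OSD fsid: it maps each of the three BlueStore OSDs (osd.0 through osd.2) to its backing LVM device under cluster fsid 14ff1f30-5059-58f1-9a23-69871bb275a1. A minimal sketch of turning that output into an osd_id-to-device table, assuming the JSON has been captured to a local file named osd_inventory.json (a hypothetical name, not something the log produces):

import json

# Hypothetical file holding the JSON block logged above by the one-shot
# container; top-level keys are OSD fsids, values describe each OSD.
with open("osd_inventory.json") as f:
    inventory = json.load(f)

# Sort by osd_id and print one line per OSD: id, device, store type.
for osd_fsid, meta in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']}, osd fsid {osd_fsid})")
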
Nov 29 07:14:40 compute-0 systemd[1]: libpod-32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d.scope: Deactivated successfully.
Nov 29 07:14:40 compute-0 podman[103615]: 2025-11-29 07:14:40.969615019 +0000 UTC m=+1.067421515 container died 32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c4f7c149fa026bcd08c48e51f6b54ca3732510ee1795ec60e6a8ba35b38f221-merged.mount: Deactivated successfully.
Nov 29 07:14:41 compute-0 podman[103615]: 2025-11-29 07:14:41.040216653 +0000 UTC m=+1.138023149 container remove 32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:41 compute-0 systemd[1]: libpod-conmon-32d5de3ec14edcb5ae7c05f3a668d3fbadadac887478c30009f973a7a9ee483d.scope: Deactivated successfully.
Nov 29 07:14:41 compute-0 sudo[103495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:41 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ef363642-61f2-45d5-a3f0-a30255a386fe does not exist
Nov 29 07:14:41 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 49c2b41c-3194-4edb-82ce-ca577ac4c554 does not exist
Nov 29 07:14:41 compute-0 sudo[103740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:41 compute-0 sudo[103740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:41 compute-0 sudo[103740]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:41 compute-0 sudo[103765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:14:41 compute-0 sudo[103765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:41 compute-0 sudo[103765]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:41 compute-0 sudo[103790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:41 compute-0 sudo[103790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:41 compute-0 sudo[103790]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 07:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585962036' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 07:14:41 compute-0 pensive_varahamihira[103677]: 
Nov 29 07:14:41 compute-0 pensive_varahamihira[103677]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 29 07:14:41 compute-0 systemd[1]: libpod-f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1.scope: Deactivated successfully.
Nov 29 07:14:41 compute-0 podman[103662]: 2025-11-29 07:14:41.368688123 +0000 UTC m=+0.834622897 container died f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1 (image=quay.io/ceph/ceph:v18, name=pensive_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:41 compute-0 sudo[103815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:41 compute-0 sudo[103815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:41 compute-0 sudo[103815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c1717c7e6ff55d1417f9dd1844ff190df2e78d45d34c8d2658bbc856f45aa25-merged.mount: Deactivated successfully.
Nov 29 07:14:41 compute-0 sudo[103851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:41 compute-0 sudo[103851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:41 compute-0 sudo[103851]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v113: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.5 KiB/s wr, 88 op/s
Nov 29 07:14:41 compute-0 podman[103662]: 2025-11-29 07:14:41.462338949 +0000 UTC m=+0.928273753 container remove f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1 (image=quay.io/ceph/ceph:v18, name=pensive_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:41 compute-0 systemd[1]: libpod-conmon-f56f49419ac9357715a5c6c74b23f2722d9a945dcb97230bbbc821dbe53d6ea1.scope: Deactivated successfully.
Nov 29 07:14:41 compute-0 sudo[103659]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:41 compute-0 sudo[103879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:14:41 compute-0 sudo[103879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:41 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 29 07:14:41 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 29 07:14:41 compute-0 podman[103975]: 2025-11-29 07:14:41.948715694 +0000 UTC m=+0.080219959 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:14:42 compute-0 podman[103975]: 2025-11-29 07:14:42.044595371 +0000 UTC m=+0.176099656 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1585962036' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 07:14:42 compute-0 ceph-mon[75050]: pgmap v113: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.5 KiB/s wr, 88 op/s
Nov 29 07:14:42 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 29 07:14:42 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 29 07:14:42 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 29 07:14:42 compute-0 sudo[103879]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:42 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 29 07:14:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:42 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:42 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:42 compute-0 sudo[104136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:42 compute-0 sudo[104136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:42 compute-0 sudo[104136]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:42 compute-0 sudo[104161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:42 compute-0 sudo[104161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:42 compute-0 sudo[104161]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:42 compute-0 sudo[104186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:42 compute-0 sudo[104186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:42 compute-0 sudo[104186]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:42 compute-0 sudo[104211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:14:42 compute-0 sudo[104211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:43 compute-0 ceph-mon[75050]: 2.14 scrub starts
Nov 29 07:14:43 compute-0 ceph-mon[75050]: 2.14 scrub ok
Nov 29 07:14:43 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:43 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:43 compute-0 sudo[104211]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:14:43 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:14:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:14:43 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:43 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0b6414c5-62ed-400f-921c-09b22053c4e9 does not exist
Nov 29 07:14:43 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7690d798-f8e6-473a-b1b0-cd53734569c7 does not exist
Nov 29 07:14:43 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c282e3b4-745b-45df-b307-6d67879f9dec does not exist
Nov 29 07:14:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:14:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:14:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:14:43 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:14:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:14:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:43 compute-0 sudo[104267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:43 compute-0 sudo[104267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:43 compute-0 sudo[104267]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v114: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.5 KiB/s wr, 85 op/s
Nov 29 07:14:43 compute-0 sudo[104292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:43 compute-0 sudo[104292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:43 compute-0 sudo[104292]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:43 compute-0 sudo[104317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:43 compute-0 sudo[104317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:43 compute-0 sudo[104317]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:43 compute-0 sudo[104342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:14:43 compute-0 sudo[104342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:43 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 29 07:14:43 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 29 07:14:43 compute-0 podman[104407]: 2025-11-29 07:14:43.84764799 +0000 UTC m=+0.024610525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:44 compute-0 podman[104407]: 2025-11-29 07:14:44.171737639 +0000 UTC m=+0.348700164 container create 5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:44 compute-0 ceph-mon[75050]: 3.d scrub starts
Nov 29 07:14:44 compute-0 ceph-mon[75050]: 3.d scrub ok
Nov 29 07:14:44 compute-0 ceph-mon[75050]: 2.1a scrub starts
Nov 29 07:14:44 compute-0 ceph-mon[75050]: 2.1a scrub ok
Nov 29 07:14:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:14:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:14:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:14:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:14:44 compute-0 ceph-mon[75050]: pgmap v114: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.5 KiB/s wr, 85 op/s
Nov 29 07:14:44 compute-0 systemd[1]: Started libpod-conmon-5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b.scope.
Nov 29 07:14:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:44 compute-0 podman[104407]: 2025-11-29 07:14:44.269578789 +0000 UTC m=+0.446541314 container init 5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:44 compute-0 podman[104407]: 2025-11-29 07:14:44.279363898 +0000 UTC m=+0.456326423 container start 5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:14:44 compute-0 nifty_allen[104423]: 167 167
Nov 29 07:14:44 compute-0 systemd[1]: libpod-5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b.scope: Deactivated successfully.
Nov 29 07:14:44 compute-0 podman[104407]: 2025-11-29 07:14:44.285886306 +0000 UTC m=+0.462848831 container attach 5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:44 compute-0 podman[104407]: 2025-11-29 07:14:44.28637801 +0000 UTC m=+0.463340545 container died 5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe3745045b55b48dbf7fca12d4a072a920e2d9eb81ac6e0f38761fd5eec2885-merged.mount: Deactivated successfully.
Nov 29 07:14:44 compute-0 podman[104407]: 2025-11-29 07:14:44.333945913 +0000 UTC m=+0.510908438 container remove 5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:14:44 compute-0 systemd[1]: libpod-conmon-5a9fccdb87788be37f9b67f5f6b684be3a9497cf020b9d56cc262447028cfd1b.scope: Deactivated successfully.
Nov 29 07:14:44 compute-0 podman[104446]: 2025-11-29 07:14:44.488529418 +0000 UTC m=+0.045174909 container create f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:44 compute-0 systemd[1]: Started libpod-conmon-f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145.scope.
Nov 29 07:14:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ddfd8855930b0d9d6a04236c12e89c2512790a0efe558fc337861adac9f1eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ddfd8855930b0d9d6a04236c12e89c2512790a0efe558fc337861adac9f1eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ddfd8855930b0d9d6a04236c12e89c2512790a0efe558fc337861adac9f1eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ddfd8855930b0d9d6a04236c12e89c2512790a0efe558fc337861adac9f1eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ddfd8855930b0d9d6a04236c12e89c2512790a0efe558fc337861adac9f1eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:44 compute-0 podman[104446]: 2025-11-29 07:14:44.468477269 +0000 UTC m=+0.025122790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:44 compute-0 podman[104446]: 2025-11-29 07:14:44.572777297 +0000 UTC m=+0.129422818 container init f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:14:44 compute-0 podman[104446]: 2025-11-29 07:14:44.580903409 +0000 UTC m=+0.137548900 container start f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:44 compute-0 podman[104446]: 2025-11-29 07:14:44.584889099 +0000 UTC m=+0.141534620 container attach f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:14:44 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 29 07:14:44 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 29 07:14:44 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 29 07:14:44 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 29 07:14:45 compute-0 ceph-mon[75050]: 2.1e scrub starts
Nov 29 07:14:45 compute-0 ceph-mon[75050]: 2.1e scrub ok
Nov 29 07:14:45 compute-0 ceph-mon[75050]: 4.19 scrub starts
Nov 29 07:14:45 compute-0 ceph-mon[75050]: 4.19 scrub ok
Nov 29 07:14:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v115: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s wr, 2 op/s
Nov 29 07:14:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:45 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 29 07:14:45 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 29 07:14:45 compute-0 cranky_mccarthy[104462]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:14:45 compute-0 cranky_mccarthy[104462]: --> relative data size: 1.0
Nov 29 07:14:45 compute-0 cranky_mccarthy[104462]: --> All data devices are unavailable
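
The three cranky_mccarthy lines above are ceph-volume's report for the `lvm batch` command launched at the sudo entry a few lines earlier: three logical volumes were passed in (0 physical devices, 3 LVM) and all were rejected as unavailable. That is the expected outcome here, since the inventory printed earlier shows each LV already carrying a BlueStore OSD, and `lvm batch` will not redeploy onto an LV that is already tagged. A hedged sketch of confirming that from the host, reusing the same `ceph-volume lvm list --format json` query that cephadm itself issues shortly after this point (assumes ceph-volume is on PATH, e.g. run inside the ceph container):

import json
import subprocess

# Same query cephadm runs below; each top-level key is an existing osd_id,
# so any LV listed here is "unavailable" to a new `lvm batch` deployment.
out = subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for osd_id, lvs in json.loads(out).items():
    for lv in lvs:
        print(f"{lv['lv_path']} already holds osd.{osd_id}; batch will skip it")
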
Nov 29 07:14:45 compute-0 systemd[1]: libpod-f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145.scope: Deactivated successfully.
Nov 29 07:14:45 compute-0 podman[104446]: 2025-11-29 07:14:45.704259726 +0000 UTC m=+1.260905217 container died f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:14:45 compute-0 systemd[1]: libpod-f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145.scope: Consumed 1.070s CPU time.
Nov 29 07:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-66ddfd8855930b0d9d6a04236c12e89c2512790a0efe558fc337861adac9f1eb-merged.mount: Deactivated successfully.
Nov 29 07:14:45 compute-0 podman[104446]: 2025-11-29 07:14:45.776648969 +0000 UTC m=+1.333294460 container remove f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:14:45 compute-0 systemd[1]: libpod-conmon-f55639575c9d1376f118c44e4d7bd4af4111541e4ace0aaa731d2a63a174e145.scope: Deactivated successfully.
Nov 29 07:14:45 compute-0 sudo[104342]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:45 compute-0 sudo[104503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:45 compute-0 sudo[104503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:45 compute-0 sudo[104503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:45 compute-0 sudo[104528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:45 compute-0 sudo[104528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:45 compute-0 sudo[104528]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:45 compute-0 sudo[104553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:45 compute-0 sudo[104553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:45 compute-0 sudo[104553]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:46 compute-0 sudo[104578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:14:46 compute-0 sudo[104578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:46 compute-0 ceph-mon[75050]: 5.6 scrub starts
Nov 29 07:14:46 compute-0 ceph-mon[75050]: 5.6 scrub ok
Nov 29 07:14:46 compute-0 ceph-mon[75050]: pgmap v115: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s wr, 2 op/s
Nov 29 07:14:46 compute-0 podman[104644]: 2025-11-29 07:14:46.389608473 +0000 UTC m=+0.054114694 container create 295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ganguly, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:46 compute-0 systemd[1]: Started libpod-conmon-295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7.scope.
Nov 29 07:14:46 compute-0 podman[104644]: 2025-11-29 07:14:46.363632441 +0000 UTC m=+0.028138712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:46 compute-0 podman[104644]: 2025-11-29 07:14:46.475600779 +0000 UTC m=+0.140107030 container init 295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:46 compute-0 podman[104644]: 2025-11-29 07:14:46.483163556 +0000 UTC m=+0.147669777 container start 295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ganguly, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:14:46 compute-0 podman[104644]: 2025-11-29 07:14:46.486713394 +0000 UTC m=+0.151219605 container attach 295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ganguly, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:14:46 compute-0 suspicious_ganguly[104660]: 167 167
Nov 29 07:14:46 compute-0 systemd[1]: libpod-295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7.scope: Deactivated successfully.
Nov 29 07:14:46 compute-0 conmon[104660]: conmon 295934f50cc2a7901b0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7.scope/container/memory.events
Nov 29 07:14:46 compute-0 podman[104644]: 2025-11-29 07:14:46.490069515 +0000 UTC m=+0.154575726 container died 295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ganguly, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f234dcb5ac062fee4a86bc5767bae89522cdca1daa9ba7def1d9215347e30377-merged.mount: Deactivated successfully.
Nov 29 07:14:46 compute-0 podman[104644]: 2025-11-29 07:14:46.542688577 +0000 UTC m=+0.207194788 container remove 295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:14:46 compute-0 systemd[1]: libpod-conmon-295934f50cc2a7901b0ccb62c9aaa913486ab074f44e8f26aaee9a4896987fa7.scope: Deactivated successfully.
Nov 29 07:14:46 compute-0 podman[104686]: 2025-11-29 07:14:46.748466484 +0000 UTC m=+0.098491329 container create f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ellis, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:14:46 compute-0 podman[104686]: 2025-11-29 07:14:46.671174347 +0000 UTC m=+0.021199212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:46 compute-0 systemd[1]: Started libpod-conmon-f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33.scope.
Nov 29 07:14:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d12eab3d9be12f8b132a6202fb061eb3f5a9860c7826244b3f67ef3f0f8bba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d12eab3d9be12f8b132a6202fb061eb3f5a9860c7826244b3f67ef3f0f8bba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d12eab3d9be12f8b132a6202fb061eb3f5a9860c7826244b3f67ef3f0f8bba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d12eab3d9be12f8b132a6202fb061eb3f5a9860c7826244b3f67ef3f0f8bba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:47 compute-0 podman[104686]: 2025-11-29 07:14:47.024309262 +0000 UTC m=+0.374334207 container init f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ellis, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:14:47 compute-0 podman[104686]: 2025-11-29 07:14:47.036494516 +0000 UTC m=+0.386519401 container start f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:14:47 compute-0 podman[104686]: 2025-11-29 07:14:47.041325418 +0000 UTC m=+0.391350343 container attach f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:14:47 compute-0 ceph-mon[75050]: 3.10 scrub starts
Nov 29 07:14:47 compute-0 ceph-mon[75050]: 3.10 scrub ok
Nov 29 07:14:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v116: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s wr, 2 op/s
Nov 29 07:14:47 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 29 07:14:47 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 29 07:14:47 compute-0 crazy_ellis[104703]: {
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:     "0": [
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:         {
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "devices": [
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "/dev/loop3"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             ],
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_name": "ceph_lv0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_size": "21470642176",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "name": "ceph_lv0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "tags": {
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.crush_device_class": "",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.encrypted": "0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osd_id": "0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.type": "block",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.vdo": "0"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             },
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "type": "block",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "vg_name": "ceph_vg0"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:         }
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:     ],
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:     "1": [
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:         {
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "devices": [
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "/dev/loop4"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             ],
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_name": "ceph_lv1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_size": "21470642176",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "name": "ceph_lv1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "tags": {
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.crush_device_class": "",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.encrypted": "0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osd_id": "1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.type": "block",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.vdo": "0"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             },
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "type": "block",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "vg_name": "ceph_vg1"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:         }
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:     ],
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:     "2": [
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:         {
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "devices": [
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "/dev/loop5"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             ],
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_name": "ceph_lv2",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_size": "21470642176",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "name": "ceph_lv2",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "tags": {
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.cluster_name": "ceph",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.crush_device_class": "",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.encrypted": "0",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osd_id": "2",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.type": "block",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:                 "ceph.vdo": "0"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             },
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "type": "block",
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:             "vg_name": "ceph_vg2"
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:         }
Nov 29 07:14:47 compute-0 crazy_ellis[104703]:     ]
Nov 29 07:14:47 compute-0 crazy_ellis[104703]: }
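
The JSON block above, emitted by the one-shot crazy_ellis container, is `ceph-volume lvm list --format json` output: a map of OSD id to the logical volumes backing it, with the Ceph metadata carried as LV tags. A minimal sketch for summarizing it, assuming the blob has been saved to lvm_list.json (hypothetical filename; all keys and tag names are taken verbatim from the log):

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # One line per OSD: LV path, size, backing device, and OSD fsid tag.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({int(lv['lv_size']) / 2**30:.1f} GiB) "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']}")

For this host the sketch would print three 20.0 GiB LVs (ceph_lv0 through ceph_lv2, on /dev/loop3 through /dev/loop5) for OSDs 0, 1 and 2.
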
Nov 29 07:14:47 compute-0 systemd[1]: libpod-f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33.scope: Deactivated successfully.
Nov 29 07:14:47 compute-0 podman[104686]: 2025-11-29 07:14:47.883063649 +0000 UTC m=+1.233088494 container died f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ellis, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d12eab3d9be12f8b132a6202fb061eb3f5a9860c7826244b3f67ef3f0f8bba1-merged.mount: Deactivated successfully.
Nov 29 07:14:47 compute-0 podman[104686]: 2025-11-29 07:14:47.941223703 +0000 UTC m=+1.291248558 container remove f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ellis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:47 compute-0 systemd[1]: libpod-conmon-f5b405438de4e68d6d1b675eb30f1088b6f931f65207ce39e97a2906c4d2fe33.scope: Deactivated successfully.
Nov 29 07:14:47 compute-0 sudo[104578]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:48 compute-0 sudo[104726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:48 compute-0 sudo[104726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:48 compute-0 sudo[104726]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:48 compute-0 sudo[104751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:14:48 compute-0 sudo[104751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:48 compute-0 sudo[104751]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:48 compute-0 sudo[104776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:48 compute-0 sudo[104776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:48 compute-0 sudo[104776]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:48 compute-0 sudo[104801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:14:48 compute-0 sudo[104801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
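
The sudo line above shows how this inventory is collected: the cephadm mgr module, acting as ceph-admin, runs the host's cached copy of the cephadm script under /var/lib/ceph/<fsid>/ so that `ceph-volume ... raw list --format json` executes inside a one-shot container built from the digest-pinned ceph image. A minimal sketch of the equivalent call, assuming the cephadm binary is on PATH (the fsid is copied from the log):

    import json, subprocess

    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    # Everything after "--" is passed through to ceph-volume unchanged.
    out = subprocess.run(
        ["sudo", "cephadm", "ceph-volume", "--fsid", FSID,
         "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(sorted(json.loads(out)))  # keys are OSD fsids, as in the JSON below
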
Nov 29 07:14:48 compute-0 ceph-mon[75050]: pgmap v116: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 853 B/s wr, 2 op/s
Nov 29 07:14:48 compute-0 ceph-mon[75050]: 4.1d scrub starts
Nov 29 07:14:48 compute-0 ceph-mon[75050]: 4.1d scrub ok
Nov 29 07:14:48 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 29 07:14:48 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 29 07:14:48 compute-0 podman[104866]: 2025-11-29 07:14:48.564581361 +0000 UTC m=+0.037985031 container create 4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:48 compute-0 systemd[1]: Started libpod-conmon-4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a.scope.
Nov 29 07:14:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:48 compute-0 podman[104866]: 2025-11-29 07:14:48.636778199 +0000 UTC m=+0.110181899 container init 4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hoover, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:14:48 compute-0 podman[104866]: 2025-11-29 07:14:48.642370832 +0000 UTC m=+0.115774502 container start 4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:48 compute-0 podman[104866]: 2025-11-29 07:14:48.548205383 +0000 UTC m=+0.021609073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:48 compute-0 podman[104866]: 2025-11-29 07:14:48.646160907 +0000 UTC m=+0.119564597 container attach 4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hoover, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:14:48 compute-0 angry_hoover[104882]: 167 167
Nov 29 07:14:48 compute-0 systemd[1]: libpod-4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a.scope: Deactivated successfully.
Nov 29 07:14:48 compute-0 podman[104866]: 2025-11-29 07:14:48.647645017 +0000 UTC m=+0.121048687 container died 4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e44876ce24abf12fabc710c265c82d5f24b829e6a17a3fb0dc8ecf7a0e22f563-merged.mount: Deactivated successfully.
Nov 29 07:14:48 compute-0 podman[104866]: 2025-11-29 07:14:48.684933578 +0000 UTC m=+0.158337248 container remove 4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 29 07:14:48 compute-0 systemd[1]: libpod-conmon-4091cb5aa1cb539ad1e9f64116c6737a6b850a45c0ecd6e350a8c822624feb9a.scope: Deactivated successfully.
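
angry_hoover's entire lifecycle (create, init, start, attach, one line of output, died, remove) spans well under a second, and its sole output "167 167" is consistent with a uid/gid probe: 167 is the ceph user and group id inside the upstream image. A sketch of that probe pattern, assuming podman is available (that this container ran a stat-based probe is an assumption; the image digest is copied from the log):

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Print the uid/gid owning /var/lib/ceph inside the image: "167 167".
    print(subprocess.run(
        ["sudo", "podman", "run", "--rm", "--entrypoint", "stat", IMG,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout.strip())
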
Nov 29 07:14:48 compute-0 podman[104905]: 2025-11-29 07:14:48.847564474 +0000 UTC m=+0.040253244 container create b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_matsumoto, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:14:48 compute-0 systemd[1]: Started libpod-conmon-b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1.scope.
Nov 29 07:14:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1562995b14179be041f68f29847af1c6fb3de6225f1beaca4ef05ce7cadae1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1562995b14179be041f68f29847af1c6fb3de6225f1beaca4ef05ce7cadae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1562995b14179be041f68f29847af1c6fb3de6225f1beaca4ef05ce7cadae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a1562995b14179be041f68f29847af1c6fb3de6225f1beaca4ef05ce7cadae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
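
The recurring "supports timestamps until 2038 (0x7fffffff)" lines are the kernel noting that these XFS volumes were created without the bigtime feature, so their inode timestamps are limited to a signed 32-bit offset from the Unix epoch:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch: 2038-01-19 03:14:07+00:00
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
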
Nov 29 07:14:48 compute-0 podman[104905]: 2025-11-29 07:14:48.832385668 +0000 UTC m=+0.025074458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:14:48 compute-0 podman[104905]: 2025-11-29 07:14:48.938006593 +0000 UTC m=+0.130695383 container init b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:14:48 compute-0 podman[104905]: 2025-11-29 07:14:48.943847512 +0000 UTC m=+0.136536272 container start b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:14:48 compute-0 podman[104905]: 2025-11-29 07:14:48.947106802 +0000 UTC m=+0.139795562 container attach b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_matsumoto, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:14:49 compute-0 ceph-mon[75050]: 4.1e scrub starts
Nov 29 07:14:49 compute-0 ceph-mon[75050]: 4.1e scrub ok
Nov 29 07:14:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v117: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]: {
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "osd_id": 2,
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "type": "bluestore"
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:     },
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "osd_id": 1,
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "type": "bluestore"
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:     },
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "osd_id": 0,
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:         "type": "bluestore"
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]:     }
Nov 29 07:14:49 compute-0 interesting_matsumoto[104921]: }
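
This second JSON blob, from interesting_matsumoto, is the matching `ceph-volume raw list` view: keyed by OSD fsid rather than OSD id, with each entry naming the device-mapper path and the bluestore type. A minimal sketch that inverts it back into an osd_id -> device map, assuming the blob was saved to raw_list.json (hypothetical filename):

    import json

    with open("raw_list.json") as f:
        raw = json.load(f)

    by_osd_id = {e["osd_id"]: e["device"]
                 for e in raw.values()
                 if e.get("type") == "bluestore"}
    print(by_osd_id)
    # From this log: {2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #                 1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #                 0: '/dev/mapper/ceph_vg0-ceph_lv0'}
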
Nov 29 07:14:49 compute-0 systemd[1]: libpod-b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1.scope: Deactivated successfully.
Nov 29 07:14:49 compute-0 systemd[1]: libpod-b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1.scope: Consumed 1.013s CPU time.
Nov 29 07:14:49 compute-0 podman[104905]: 2025-11-29 07:14:49.950163743 +0000 UTC m=+1.142852583 container died b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_matsumoto, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:14:50 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 29 07:14:50 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 29 07:14:50 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1f deep-scrub starts
Nov 29 07:14:50 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 4.1f deep-scrub ok
Nov 29 07:14:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:50 compute-0 ceph-mon[75050]: pgmap v117: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 29 07:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a1562995b14179be041f68f29847af1c6fb3de6225f1beaca4ef05ce7cadae1-merged.mount: Deactivated successfully.
Nov 29 07:14:50 compute-0 podman[104905]: 2025-11-29 07:14:50.668889914 +0000 UTC m=+1.861578674 container remove b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_matsumoto, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:14:50 compute-0 sudo[104801]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:14:50 compute-0 systemd[1]: libpod-conmon-b5f488639f6deb899d9f50cdbbb854e5fc91797429c57d3f8983911553a332c1.scope: Deactivated successfully.
Nov 29 07:14:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:14:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ec5560a4-dde1-4386-b9d4-3369de301d8e does not exist
Nov 29 07:14:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 865999f3-2243-4865-aa6b-2f1bcb0799eb does not exist
Nov 29 07:14:50 compute-0 sudo[104966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:14:50 compute-0 sudo[104966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:50 compute-0 sudo[104966]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:50 compute-0 sudo[104991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:14:50 compute-0 sudo[104991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:14:50 compute-0 sudo[104991]: pam_unix(sudo:session): session closed for user root
Nov 29 07:14:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 29 07:14:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 29 07:14:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v118: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:51 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 29 07:14:51 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 29 07:14:51 compute-0 ceph-mon[75050]: 3.13 scrub starts
Nov 29 07:14:51 compute-0 ceph-mon[75050]: 3.13 scrub ok
Nov 29 07:14:51 compute-0 ceph-mon[75050]: 4.1f deep-scrub starts
Nov 29 07:14:51 compute-0 ceph-mon[75050]: 4.1f deep-scrub ok
Nov 29 07:14:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:14:51 compute-0 ceph-mon[75050]: 3.14 scrub starts
Nov 29 07:14:51 compute-0 ceph-mon[75050]: pgmap v118: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:52 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 29 07:14:52 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 29 07:14:52 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 29 07:14:52 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 29 07:14:53 compute-0 ceph-mon[75050]: 3.14 scrub ok
Nov 29 07:14:53 compute-0 ceph-mon[75050]: 6.3 scrub starts
Nov 29 07:14:53 compute-0 ceph-mon[75050]: 6.3 scrub ok
Nov 29 07:14:53 compute-0 ceph-mon[75050]: 3.19 scrub starts
Nov 29 07:14:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v119: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:54 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 29 07:14:54 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 29 07:14:54 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 07:14:54 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 07:14:54 compute-0 ceph-mon[75050]: 3.19 scrub ok
Nov 29 07:14:54 compute-0 ceph-mon[75050]: 5.8 scrub starts
Nov 29 07:14:54 compute-0 ceph-mon[75050]: 5.8 scrub ok
Nov 29 07:14:54 compute-0 ceph-mon[75050]: pgmap v119: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v120: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:14:56 compute-0 ceph-mon[75050]: 6.5 scrub starts
Nov 29 07:14:56 compute-0 ceph-mon[75050]: 6.5 scrub ok
Nov 29 07:14:56 compute-0 ceph-mon[75050]: 5.a scrub starts
Nov 29 07:14:56 compute-0 ceph-mon[75050]: 5.a scrub ok
Nov 29 07:14:56 compute-0 ceph-mon[75050]: pgmap v120: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:56 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 29 07:14:56 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 29 07:14:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v121: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:57 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 29 07:14:57 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 29 07:14:57 compute-0 ceph-mon[75050]: 5.b scrub starts
Nov 29 07:14:57 compute-0 ceph-mon[75050]: 5.b scrub ok
Nov 29 07:14:58 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 29 07:14:58 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 29 07:14:58 compute-0 ceph-mon[75050]: pgmap v121: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:58 compute-0 ceph-mon[75050]: 5.d scrub starts
Nov 29 07:14:58 compute-0 ceph-mon[75050]: 5.d scrub ok
Nov 29 07:14:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v122: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:14:59 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 29 07:14:59 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 29 07:14:59 compute-0 ceph-mon[75050]: 6.7 scrub starts
Nov 29 07:14:59 compute-0 ceph-mon[75050]: 6.7 scrub ok
Nov 29 07:14:59 compute-0 ceph-mon[75050]: pgmap v122: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:00 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 29 07:15:00 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 29 07:15:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:00 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 29 07:15:00 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 29 07:15:00 compute-0 ceph-mon[75050]: 6.9 scrub starts
Nov 29 07:15:00 compute-0 ceph-mon[75050]: 6.9 scrub ok
Nov 29 07:15:01 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 07:15:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v123: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:01 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 07:15:01 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 29 07:15:01 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 29 07:15:02 compute-0 ceph-mon[75050]: 6.a scrub starts
Nov 29 07:15:02 compute-0 ceph-mon[75050]: 6.a scrub ok
Nov 29 07:15:02 compute-0 ceph-mon[75050]: 5.e scrub starts
Nov 29 07:15:02 compute-0 ceph-mon[75050]: 5.e scrub ok
Nov 29 07:15:02 compute-0 ceph-mon[75050]: pgmap v123: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:03 compute-0 ceph-mon[75050]: 5.1e scrub starts
Nov 29 07:15:03 compute-0 ceph-mon[75050]: 5.1e scrub ok
Nov 29 07:15:03 compute-0 ceph-mon[75050]: 3.1a scrub starts
Nov 29 07:15:03 compute-0 ceph-mon[75050]: 3.1a scrub ok
Nov 29 07:15:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v124: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Nov 29 07:15:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Nov 29 07:15:03 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 29 07:15:03 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 29 07:15:04 compute-0 ceph-mon[75050]: pgmap v124: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 29 07:15:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 29 07:15:05 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:15:05
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.control']
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
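
The balancer pass above is a no-op: in upmap mode with a max misplaced ratio of 0.05 it walked the eleven pools listed and prepared 0 of up to 10 changes, evidently finding nothing worth moving on this layout. A sketch for checking the same settings from the CLI, assuming `ceph balancer status` returns its usual JSON document with "mode" and "active" fields:

    import json, subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status"],
        check=True, capture_output=True, text=True).stdout)
    print(status["mode"])    # expected here: "upmap"
    print(status["active"])  # expected here: True
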
Nov 29 07:15:05 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v125: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:15:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:15:05 compute-0 ceph-mon[75050]: 3.1c deep-scrub starts
Nov 29 07:15:05 compute-0 ceph-mon[75050]: 3.1c deep-scrub ok
Nov 29 07:15:05 compute-0 ceph-mon[75050]: 5.10 scrub starts
Nov 29 07:15:05 compute-0 ceph-mon[75050]: 5.10 scrub ok
Nov 29 07:15:05 compute-0 ceph-mon[75050]: 7.7 scrub starts
Nov 29 07:15:05 compute-0 ceph-mon[75050]: 7.7 scrub ok
Nov 29 07:15:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:06 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 29 07:15:06 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 29 07:15:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 29 07:15:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 29 07:15:06 compute-0 ceph-mon[75050]: 2.19 scrub starts
Nov 29 07:15:06 compute-0 ceph-mon[75050]: 2.19 scrub ok
Nov 29 07:15:06 compute-0 ceph-mon[75050]: pgmap v125: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v126: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:07 compute-0 ceph-mon[75050]: 2.18 scrub starts
Nov 29 07:15:07 compute-0 ceph-mon[75050]: 2.18 scrub ok
Nov 29 07:15:07 compute-0 ceph-mon[75050]: 7.b scrub starts
Nov 29 07:15:07 compute-0 ceph-mon[75050]: 7.b scrub ok
Nov 29 07:15:08 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 29 07:15:08 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 29 07:15:08 compute-0 ceph-mon[75050]: pgmap v126: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v127: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:09 compute-0 ceph-mon[75050]: 7.d scrub starts
Nov 29 07:15:09 compute-0 ceph-mon[75050]: 7.d scrub ok
Nov 29 07:15:09 compute-0 ceph-mon[75050]: pgmap v127: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:11 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
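
The pg_autoscaler figures above are internally consistent: each "pg target" is capacity_ratio x bias x (OSD count x target PGs per OSD), and with 3 OSDs and an assumed default of 100 PGs per OSD the multiplier of 300 reproduces the logged targets. A worked check (ratios and biases copied from the log):

    OSDS, TARGET_PG_PER_OSD = 3, 100

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * OSDS * TARGET_PG_PER_OSD

    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               ~0.0021557249951
    print(pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta ~0.0006104707951
    print(pg_target(2.5436283128215145e-07, 1.0))  # .rgw.root          ~0.0000763088494

The tiny targets are then quantized to a power-of-two pg_num subject to per-pool minimums (hence 16 for the cephfs metadata pool and 32 for the four rgw pools still at pg_num 1), which is what triggers the `osd pool set ... pg_num 32` commands and osdmap epochs e45 through e48 in the lines that follow.
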
Nov 29 07:15:11 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 29 07:15:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:15:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v128: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 07:15:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 07:15:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 07:15:11 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 94d5890a-fda5-4995-968a-f2be0c0376d9 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 07:15:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:15:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:11 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 07:15:11 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 07:15:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 07:15:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 07:15:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 07:15:12 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 46ad4ab1-72d4-4c86-97f1-432dc80ed42e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 07:15:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:15:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:12 compute-0 ceph-mon[75050]: 5.7 scrub starts
Nov 29 07:15:12 compute-0 ceph-mon[75050]: 5.7 scrub ok
Nov 29 07:15:12 compute-0 ceph-mon[75050]: pgmap v128: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:12 compute-0 ceph-mon[75050]: osdmap e45: 3 total, 3 up, 3 in
Nov 29 07:15:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v131: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:15:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:15:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 07:15:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 07:15:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 07:15:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 07:15:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:13 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 091179d9-3aab-40cc-8ead-a8323bfc32a8 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 07:15:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 47 pg[9.0( v 44'385 (0'0,44'385] local-lis/les=36/37 n=177 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.177519798s) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 44'384 mlcod 44'384 active pruub 136.339370728s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 47 pg[8.0( v 35'4 (0'0,35'4] local-lis/les=34/35 n=4 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=11.020109177s) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 35'3 mlcod 35'3 active pruub 134.181991577s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:13 compute-0 ceph-mon[75050]: 5.17 scrub starts
Nov 29 07:15:13 compute-0 ceph-mon[75050]: 5.17 scrub ok
Nov 29 07:15:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:13 compute-0 ceph-mon[75050]: osdmap e46: 3 total, 3 up, 3 in
Nov 29 07:15:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 47 pg[8.0( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=11.020109177s) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 35'3 mlcod 0'0 unknown pruub 134.181991577s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:13 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 47 pg[9.0( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47 pruub=13.177519798s) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 44'384 mlcod 0'0 unknown pruub 136.339370728s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:13 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 29 07:15:13 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 29 07:15:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 07:15:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 07:15:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] update: starting ev 87a17baf-6939-41f3-ac04-27ade1258623 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 94d5890a-fda5-4995-968a-f2be0c0376d9 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 94d5890a-fda5-4995-968a-f2be0c0376d9 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 46ad4ab1-72d4-4c86-97f1-432dc80ed42e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 46ad4ab1-72d4-4c86-97f1-432dc80ed42e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 091179d9-3aab-40cc-8ead-a8323bfc32a8 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 091179d9-3aab-40cc-8ead-a8323bfc32a8 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] complete: finished ev 87a17baf-6939-41f3-ac04-27ade1258623 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 07:15:14 compute-0 ceph-mgr[75345]: [progress INFO root] Completed event 87a17baf-6939-41f3-ac04-27ade1258623 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.14( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.15( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.14( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-mon[75050]: pgmap v131: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:14 compute-0 ceph-mon[75050]: osdmap e47: 3 total, 3 up, 3 in
Nov 29 07:15:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 07:15:14 compute-0 ceph-mon[75050]: 7.10 scrub starts
Nov 29 07:15:14 compute-0 ceph-mon[75050]: 7.10 scrub ok
Nov 29 07:15:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 07:15:14 compute-0 ceph-mon[75050]: osdmap e48: 3 total, 3 up, 3 in
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.16( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.17( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.17( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.15( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.16( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.10( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.11( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1( v 35'4 (0'0,35'4] local-lis/les=34/35 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.2( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.3( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.3( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.2( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.c( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.d( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.d( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.c( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.f( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.e( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.9( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.8( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.a( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.b( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.e( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.f( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.b( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.a( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.9( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.8( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.6( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.7( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.5( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.6( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.7( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.4( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.4( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1b( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.5( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1a( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.19( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.18( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.19( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1e( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.18( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1f( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1f( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1e( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1c( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1d( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1c( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1d( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.12( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.13( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.11( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.10( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.13( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1a( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=34/35 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1b( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.12( v 44'385 lc 0'0 (0'0,44'385] local-lis/les=36/37 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.14( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.14( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.16( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.17( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.10( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.0( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 44'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.2( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.3( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.2( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.15( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.d( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.e( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.c( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.f( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.8( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.a( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.9( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.0( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 35'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.b( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.a( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.6( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.5( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.4( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.7( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1b( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.4( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.19( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1f( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1e( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1c( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1d( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1a( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.11( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.18( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.13( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.10( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.12( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[8.1a( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=34/34 les/c/f=35/35/0 sis=47) [1] r=0 lpr=47 pi=[34,47)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 48 pg[9.12( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=36/36 les/c/f=37/37/0 sis=47) [1] r=0 lpr=47 pi=[36,47)/1 crt=44'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 29 07:15:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 29 07:15:14 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 29 07:15:14 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 29 07:15:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 29 07:15:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 29 07:15:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v134: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:15:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 07:15:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 07:15:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 07:15:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 07:15:15 compute-0 ceph-mon[75050]: 5.1b scrub starts
Nov 29 07:15:15 compute-0 ceph-mon[75050]: 5.1b scrub ok
Nov 29 07:15:15 compute-0 ceph-mon[75050]: 7.12 scrub starts
Nov 29 07:15:15 compute-0 ceph-mon[75050]: 7.12 scrub ok
Nov 29 07:15:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:15 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 49 pg[10.0( v 39'16 (0'0,39'16] local-lis/les=38/39 n=8 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=49 pruub=13.580121040s) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 39'15 mlcod 39'15 active pruub 132.729446411s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:15 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=15.603130341s) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active pruub 140.856948853s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:15 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=49 pruub=15.603130341s) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown pruub 140.856948853s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:15 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 49 pg[10.0( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=49 pruub=13.580121040s) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 39'15 mlcod 0'0 unknown pruub 132.729446411s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:16 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 29 07:15:16 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 29 07:15:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 07:15:16 compute-0 ceph-mon[75050]: 2.1d scrub starts
Nov 29 07:15:16 compute-0 ceph-mon[75050]: 2.1d scrub ok
Nov 29 07:15:16 compute-0 ceph-mon[75050]: pgmap v134: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 07:15:16 compute-0 ceph-mon[75050]: osdmap e49: 3 total, 3 up, 3 in
Nov 29 07:15:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 07:15:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=40/41 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1e( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.19( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.d( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.a( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.13( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.b( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.10( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1f( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.11( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1d( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.12( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1c( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1b( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.6( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1a( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.4( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.5( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.7( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.f( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.8( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.18( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.9( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.c( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.e( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1( v 39'16 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.3( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.2( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.14( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.15( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.16( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.17( v 39'16 lc 0'0 (0'0,39'16] local-lis/les=38/39 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.19( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=40/40 les/c/f=41/41/0 sis=49) [1] r=0 lpr=49 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.d( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1e( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.a( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.13( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.b( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.11( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1f( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1d( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1c( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.12( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1b( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.6( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1a( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.4( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.5( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.f( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.8( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.9( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.7( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.c( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.0( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 39'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.e( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.1( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.10( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.2( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.18( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.3( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.15( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.17( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.16( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:16 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 50 pg[10.14( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=38/38 les/c/f=39/39/0 sis=49) [2] r=0 lpr=49 pi=[38,49)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
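The block above is the OSD peering state machine at work: after osdmap epoch 49 made osd.1 and osd.2 the sole acting OSDs for pools 11 and 10, each PG runs Start -> Primary, and once activation is acknowledged the Started/Primary/Active state reacts to AllReplicasActivated. The pg[...] blob encodes the PG id, last interval started / epoch started (lis/les), epoch created (ec), the acting set in brackets, and r= the OSD's role (0 = primary). Below is a minimal sketch for summarizing PG states cluster-wide from the ceph CLI's JSON output; it assumes the ceph binary and an admin keyring are available on this host, and it allows for the JSON key layout varying a little across releases.

    # Sketch: count PG states from `ceph pg dump pgs_brief` JSON output.
    # Assumes the `ceph` CLI and an admin keyring on this host.
    import collections
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "pg", "dump", "pgs_brief", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    doc = json.loads(out)
    if isinstance(doc, dict):
        # Newer releases wrap the stats; key names vary, so try both layouts.
        pgs = doc.get("pg_stats") or doc.get("pg_map", {}).get("pg_stats", [])
    else:
        pgs = doc  # some releases emit a bare list

    counts = collections.Counter(pg["state"] for pg in pgs)
    for state, n in counts.most_common():
        print(f"{n:5d}  {state}")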
Nov 29 07:15:16 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 29 07:15:16 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 29 07:15:17 compute-0 ceph-mgr[75345]: [progress INFO root] Writing back 16 completed events
Nov 29 07:15:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 07:15:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 1 peering, 124 unknown, 180 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:17 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 29 07:15:17 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 29 07:15:18 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 29 07:15:18 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 29 07:15:18 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 29 07:15:19 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 07:15:19 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 07:15:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:19 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 29 07:15:20 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 29 07:15:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:21 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Nov 29 07:15:21 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 29 07:15:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:24 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:15:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.7808 seconds
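The missed beacon ack and the monitor's check_health line describe the same stall from two sides: the MDS's beacon went unacknowledged while the monitor itself was delayed ~11.8 seconds, so the monitor resets the beacon timeouts rather than evicting daemons for lateness it caused. A minimal sketch for pulling both kinds of event out of a journal export for correlation follows; ceph.log is a hypothetical file name for a text dump of these lines.

    # Sketch: correlate MDS beacon misses with monitor delay resets.
    # `ceph.log` is a hypothetical text export of this journal section.
    import re

    beacon = re.compile(r"^(\w+ \d+ [\d:]+).*missed beacon ack")
    delay = re.compile(
        r"^(\w+ \d+ [\d:]+).*resetting beacon timeouts due to mon delay"
        r".*?of ([\d.]+) seconds"
    )

    with open("ceph.log") as fh:
        for line in fh:
            if (m := beacon.search(line)):
                print("beacon miss at", m.group(1))
            elif (m := delay.search(line)):
                print(f"mon delay {m.group(2)}s acknowledged at {m.group(1)}")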
Nov 29 07:15:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:27 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 8.855551720s
Nov 29 07:15:27 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 8.855551720s
Nov 29 07:15:27 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.855698586s, txc = 0x55f94cf72c00
Nov 29 07:15:27 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 29 07:15:27 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 29 07:15:27 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 29 07:15:27 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.375969887s, txc = 0x55819bf9d500
Nov 29 07:15:27 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 8.376237869s
Nov 29 07:15:27 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 8.376237869s
Nov 29 07:15:27 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Nov 29 07:15:27 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.272954941s, txc = 0x560f3f7c6f00
Nov 29 07:15:27 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 10.272868156s
Nov 29 07:15:27 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 10.272868156s
Nov 29 07:15:27 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.4 scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.4 scrub ok
Nov 29 07:15:27 compute-0 ceph-mon[75050]: osdmap e50: 3 total, 3 up, 3 in
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.1c scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.1c scrub ok
Nov 29 07:15:27 compute-0 ceph-mon[75050]: pgmap v137: 305 pgs: 1 peering, 124 unknown, 180 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.1f scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.1f scrub ok
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 4.18 deep-scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 7.14 scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 7.14 scrub ok
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.5 scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 5.5 scrub ok
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 7.16 scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 7.17 scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 2.1f deep-scrub starts
Nov 29 07:15:27 compute-0 ceph-mon[75050]: 7.19 scrub starts
Nov 29 07:15:27 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.205441475s, txc = 0x55f94cf72900
Nov 29 07:15:27 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.261546135s, txc = 0x55f94ccb0000
Nov 29 07:15:27 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.284347057s, txc = 0x55f94b1f8c00
Nov 29 07:15:27 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.332289696s, txc = 0x55f94cd02c00
Nov 29 07:15:27 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 8.561247826s, txc = 0x55819bf9c600
Nov 29 07:15:27 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.500851154s, txc = 0x55819bf71800
Nov 29 07:15:27 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 10.280205727s, txc = 0x560f40498300
Nov 29 07:15:27 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.301464081s, txc = 0x560f3f7c7500
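All three OSDs report kv_commit/kv_sync/_txc_committed_kv latencies of 6-10 seconds within the same second, which points at the shared backing storage of this single-host cluster stalling rather than at any one OSD; bluestore only emits these log_latency lines when an operation exceeds its configured slow-op warning age. A minimal sketch for extracting the worst observed latency per process and operation, parsed straight from lines like the ones above (ceph.log again a hypothetical export):

    # Sketch: pull the bluestore slow-op latencies out of the journal lines
    # above and report the worst case per OSD process and operation.
    import collections
    import re

    pat = re.compile(
        r"ceph-osd\[(\d+)\].*log_latency(?:_fn)? slow operation observed "
        r"for (\w+), latency = ([\d.]+)s"
    )
    worst = collections.defaultdict(float)

    with open("ceph.log") as fh:
        for line in fh:
            if (m := pat.search(line)):
                pid, op, lat = m.group(1), m.group(2), float(m.group(3))
                worst[(pid, op)] = max(worst[(pid, op)], lat)

    for (pid, op), lat in sorted(worst.items()):
        print(f"pid {pid:>6}  {op:<20} {lat:7.3f}s")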
Nov 29 07:15:28 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 29 07:15:28 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 29 07:15:28 compute-0 ceph-mon[75050]: pgmap v138: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:28 compute-0 ceph-mon[75050]: pgmap v139: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:28 compute-0 ceph-mon[75050]: pgmap v140: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:28 compute-0 ceph-mon[75050]: pgmap v141: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:28 compute-0 ceph-mon[75050]: pgmap v142: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:28 compute-0 ceph-mon[75050]: 7.16 scrub ok
Nov 29 07:15:28 compute-0 ceph-mon[75050]: 7.19 scrub ok
Nov 29 07:15:28 compute-0 ceph-mon[75050]: 7.17 scrub ok
Nov 29 07:15:28 compute-0 ceph-mon[75050]: 2.1f deep-scrub ok
Nov 29 07:15:28 compute-0 ceph-mon[75050]: 4.18 deep-scrub ok
Nov 29 07:15:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 300 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:15:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:29 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 29 07:15:29 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 29 07:15:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:15:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 07:15:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
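The audit trail above shows the mgr stepping pgp_num_actual toward each pool's pg_num: every mon_command appears twice in the audit channel, once at dispatch and once at finished, and the committed result becomes osdmap e51. The same adjustment can be issued by hand; the sketch below simply replays the commands visible in the audit log, with pool names and values taken verbatim from it.

    # Sketch: replay the mgr's pgp_num_actual adjustments via the ceph CLI.
    # Pool names and values are the ones visible in the audit log above.
    import subprocess

    adjustments = {
        ".rgw.root": 32,
        "default.rgw.control": 32,
        "default.rgw.log": 2,
        "default.rgw.meta": 32,
    }

    for pool, val in adjustments.items():
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(val)],
            check=True,
        )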
Nov 29 07:15:30 compute-0 ceph-mon[75050]: 7.1d scrub starts
Nov 29 07:15:30 compute-0 ceph-mon[75050]: 7.1d scrub ok
Nov 29 07:15:30 compute-0 ceph-mon[75050]: pgmap v143: 305 pgs: 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 300 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 07:15:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.427269936s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.175064087s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.14( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.427203178s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.175033569s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.427192688s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.175064087s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.15( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.432509422s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180435181s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.14( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.427110672s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.175033569s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.485548973s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.233779907s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.485557556s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.233795166s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431924820s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180160522s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.476088524s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.224258423s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431893349s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180160522s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.475977898s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.224258423s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.10( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431866646s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180374146s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.10( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431845665s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180374146s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431476593s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180175781s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431452751s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180175781s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484956741s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.233810425s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484933853s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.233810425s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.485501289s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.233779907s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.485111237s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234176636s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.485088348s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234176636s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484704018s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.233795166s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.2( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431230545s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180511475s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.c( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431308746s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180633545s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.2( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431200027s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180511475s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.c( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.431279182s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180633545s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484709740s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234191895s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484979630s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234481812s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484664917s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234191895s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484951973s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234481812s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430888176s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180526733s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484477997s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234176636s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.d( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430988312s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180664062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.d( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430933952s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180664062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484451294s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234176636s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.e( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430827141s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180679321s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430830002s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180679321s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.e( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430807114s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180679321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430808067s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180679321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484273911s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234283447s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484254837s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234283447s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430645943s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180725098s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484188080s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234283447s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430614471s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180725098s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.484150887s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234283447s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430343628s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180526733s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430476189s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180801392s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.f( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430459023s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180816650s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430452347s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180801392s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.f( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430436134s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180816650s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.483779907s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234298706s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.483713150s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234298706s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.b( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430366516s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180969238s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.9( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430157661s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.180908203s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.9( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430127144s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180908203s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430007935s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180648804s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429919243s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.180969238s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.483428001s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234359741s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429889679s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180969238s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.483243942s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234359741s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.483167648s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234420776s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.483149529s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234420776s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.b( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430252075s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180969238s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429553986s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180648804s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.6( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430300713s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.181716919s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429579735s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.181015015s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.6( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430260658s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181716919s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429555893s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181015015s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.4( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430080414s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.181762695s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430082321s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.181762695s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430056572s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181762695s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.4( v 35'4 (0'0,35'4] local-lis/les=47/48 n=1 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.430057526s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181762695s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.482595444s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234436035s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.482575417s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234436035s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.15( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.432465553s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.180435181s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.482147217s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234466553s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1b( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429443359s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.181747437s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.482127190s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234466553s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1b( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429412842s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181747437s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.482001305s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234466553s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481933594s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234466553s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.18( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429226875s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.181838989s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.482618332s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234375000s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481695175s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234375000s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481560707s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234466553s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481535912s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234466553s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428862572s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.181915283s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428714752s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.181823730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1f( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428796768s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.181930542s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428784370s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181915283s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481351852s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234542847s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.18( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.429192543s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181838989s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481306076s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234542847s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481125832s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234481812s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1d( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428744316s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.182083130s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.481101990s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234481812s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1d( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428716660s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182083130s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428648949s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.182113647s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1c( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428557396s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.182067871s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428600311s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182113647s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1f( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428755760s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181930542s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1c( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428470612s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182067871s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480803490s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234542847s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480772018s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234542847s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.12( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428279877s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.182174683s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480629921s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234588623s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428497314s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.182464600s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.12( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428213120s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182174683s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480606079s) [2] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234588623s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428456306s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182464600s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.11( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428053856s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.182174683s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.11( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428030014s) [2] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182174683s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480383873s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234634399s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480312347s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active pruub 150.234634399s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480341911s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234634399s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.480289459s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 150.234634399s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1a( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428247452s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active pruub 148.182617188s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[8.1a( v 35'4 (0'0,35'4] local-lis/les=47/48 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428215981s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182617188s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428036690s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 148.182510376s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.428018570s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.182510376s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=8.427884102s) [0] r=-1 lpr=51 pi=[47,51)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.181823730s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.1e( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466770172s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132019043s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.1e( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466719627s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132019043s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.19( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.461006165s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.126510620s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.19( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.460976601s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.126510620s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.d( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466334343s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 39'16 active pruub 144.132019043s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.d( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466292381s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 0'0 unknown NOTIFY pruub 144.132019043s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.b( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466386795s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132232666s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.b( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466354370s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132232666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.13( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466254234s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132232666s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.13( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466223717s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132232666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.12( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466369629s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132385254s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.12( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466333389s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132385254s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.11( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466249466s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132324219s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.11( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466218948s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132324219s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.10( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466408730s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132659912s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.1a( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466039658s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132461548s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.10( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466305733s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132659912s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.1a( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.466014862s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132461548s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.7( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465925217s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132537842s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.7( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465890884s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132537842s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.6( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465783119s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132461548s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.6( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465746880s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132461548s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.4( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465666771s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132461548s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.4( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465635300s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132461548s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.8( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465575218s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132553101s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.f( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465566635s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132553101s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.8( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465543747s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132553101s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.9( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465484619s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 39'16 active pruub 144.132537842s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.f( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465505600s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132553101s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.9( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465446472s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 0'0 unknown NOTIFY pruub 144.132537842s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.e( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465394020s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 39'16 active pruub 144.132614136s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.e( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465349197s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 0'0 unknown NOTIFY pruub 144.132614136s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.1( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465346336s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132659912s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.2( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465291023s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.132675171s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.1( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465305328s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132659912s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.2( v 39'16 (0'0,39'16] local-lis/les=49/50 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465250015s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.132675171s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.14( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465694427s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 39'16 active pruub 144.133178711s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.14( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465661049s) [1] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 0'0 unknown NOTIFY pruub 144.133178711s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.15( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465640068s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 39'16 active pruub 144.133178711s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.15( v 50'17 (0'0,50'17] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465606689s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 39'16 mlcod 0'0 unknown NOTIFY pruub 144.133178711s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.16( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465609550s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.133270264s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.17( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465596199s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active pruub 144.133255005s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.16( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465552330s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.133270264s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 51 pg[10.17( v 39'16 (0'0,39'16] local-lis/les=49/50 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51 pruub=10.465522766s) [0] r=-1 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.133255005s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 51 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:30 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 51 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 07:15:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 07:15:31 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-mon[75050]: 2.f scrub starts
Nov 29 07:15:31 compute-0 ceph-mon[75050]: 2.f scrub ok
Nov 29 07:15:31 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:15:31 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:15:31 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 07:15:31 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:15:31 compute-0 ceph-mon[75050]: osdmap e51: 3 total, 3 up, 3 in
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=-1 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.1c( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.1b( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.11( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.8( v 39'16 (0'0,39'16] local-lis/les=51/52 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.19( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.b( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.12( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.4( v 35'4 (0'0,35'4] local-lis/les=51/52 n=1 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.d( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.2( v 35'4 (0'0,35'4] local-lis/les=51/52 n=1 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[8.15( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [2] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [2] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.1( v 39'16 (0'0,39'16] local-lis/les=51/52 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.1e( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.7( v 39'16 (0'0,39'16] local-lis/les=51/52 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.16( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.1d( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.14( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.4( v 39'16 (0'0,39'16] local-lis/les=51/52 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.1f( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.d( v 50'17 lc 39'9 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.18( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.1a( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.e( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.f( v 35'4 lc 0'0 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.17( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.14( v 50'17 lc 39'13 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.12( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.11( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.10( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.13( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.9( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.e( v 50'17 lc 39'7 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.c( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.6( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.15( v 50'17 lc 39'5 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.b( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[10.9( v 50'17 lc 39'15 (0'0,50'17] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=50'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[8.10( v 35'4 (0'0,35'4] local-lis/les=51/52 n=0 ec=47/34 lis/c=47/47 les/c/f=48/48/0 sis=51) [0] r=0 lpr=51 pi=[47,51)/1 crt=35'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.6( v 39'16 (0'0,39'16] local-lis/les=51/52 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.2( v 39'16 (0'0,39'16] local-lis/les=51/52 n=1 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.1a( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 52 pg[10.f( v 39'16 (0'0,39'16] local-lis/les=51/52 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=51) [1] r=0 lpr=51 pi=[49,51)/1 crt=39'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=49/40 lis/c=49/49 les/c/f=50/50/0 sis=51) [0] r=0 lpr=51 pi=[49,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:31 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 29 07:15:31 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 29 07:15:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 300 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 07:15:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:15:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 07:15:32 compute-0 ceph-mon[75050]: osdmap e52: 3 total, 3 up, 3 in
Nov 29 07:15:32 compute-0 ceph-mon[75050]: 4.11 scrub starts
Nov 29 07:15:32 compute-0 ceph-mon[75050]: 4.11 scrub ok
Nov 29 07:15:32 compute-0 ceph-mon[75050]: pgmap v146: 305 pgs: 3 active+clean+scrubbing, 2 active+clean+scrubbing+deep, 300 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:32 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 07:15:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:15:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 07:15:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 07:15:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 53 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=52) [0]/[1] async=[0] r=0 lpr=52 pi=[47,52)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:32 compute-0 sudo[105039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwqinssuuwcvebaytepytmqgynrskvmj ; /usr/bin/python3'
Nov 29 07:15:32 compute-0 sudo[105039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 07:15:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 07:15:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 07:15:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 07:15:33 compute-0 ceph-mon[75050]: osdmap e53: 3 total, 3 up, 3 in
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.709031105s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390426636s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.708930969s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390426636s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.708810806s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390609741s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.708725929s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390609741s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.707572937s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390502930s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.707515717s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390502930s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.707488060s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390548706s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.707427025s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390548706s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.700224876s) [0] async=[0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.384201050s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 54 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54 pruub=15.700163841s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.384201050s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:33 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 54 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:33 compute-0 python3[105041]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:15:33 compute-0 podman[105042]: 2025-11-29 07:15:33.142323681 +0000 UTC m=+0.022960390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:15:33 compute-0 podman[105042]: 2025-11-29 07:15:33.447277436 +0000 UTC m=+0.327914115 container create ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5 (image=quay.io/ceph/ceph:v18, name=blissful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:15:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 16 activating+remapped, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 87/213 objects misplaced (40.845%); 241 B/s, 1 objects/s recovering
Nov 29 07:15:33 compute-0 systemd[1]: Started libpod-conmon-ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5.scope.
Nov 29 07:15:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd21c198c1d4bd64b73a6be9885cc4fe167b29717235c88dd78ff70e945ec96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cd21c198c1d4bd64b73a6be9885cc4fe167b29717235c88dd78ff70e945ec96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:33 compute-0 podman[105042]: 2025-11-29 07:15:33.542921577 +0000 UTC m=+0.423558296 container init ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5 (image=quay.io/ceph/ceph:v18, name=blissful_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:15:33 compute-0 podman[105042]: 2025-11-29 07:15:33.549216009 +0000 UTC m=+0.429852668 container start ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5 (image=quay.io/ceph/ceph:v18, name=blissful_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:15:33 compute-0 podman[105042]: 2025-11-29 07:15:33.553279291 +0000 UTC m=+0.433915980 container attach ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5 (image=quay.io/ceph/ceph:v18, name=blissful_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:15:33 compute-0 blissful_sinoussi[105057]: could not fetch user info: no user info saved
Nov 29 07:15:33 compute-0 systemd[1]: libpod-ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5.scope: Deactivated successfully.
Nov 29 07:15:33 compute-0 podman[105042]: 2025-11-29 07:15:33.80181059 +0000 UTC m=+0.682447259 container died ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5 (image=quay.io/ceph/ceph:v18, name=blissful_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cd21c198c1d4bd64b73a6be9885cc4fe167b29717235c88dd78ff70e945ec96-merged.mount: Deactivated successfully.
Nov 29 07:15:33 compute-0 podman[105042]: 2025-11-29 07:15:33.846195895 +0000 UTC m=+0.726832554 container remove ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5 (image=quay.io/ceph/ceph:v18, name=blissful_sinoussi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:15:33 compute-0 systemd[1]: libpod-conmon-ffb94ec6fe84344686650ac4569fff19951feaacaea8172b5c50e2189ddd4ad5.scope: Deactivated successfully.
Nov 29 07:15:33 compute-0 sudo[105039]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:34 compute-0 sudo[105176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cruzdltijwugcndabcjpertbrypsdnbe ; /usr/bin/python3'
Nov 29 07:15:34 compute-0 sudo[105176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 07:15:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 07:15:34 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 07:15:34 compute-0 ceph-mon[75050]: osdmap e54: 3 total, 3 up, 3 in
Nov 29 07:15:34 compute-0 ceph-mon[75050]: pgmap v149: 305 pgs: 16 activating+remapped, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 87/213 objects misplaced (40.845%); 241 B/s, 1 objects/s recovering
Nov 29 07:15:34 compute-0 ceph-mon[75050]: osdmap e55: 3 total, 3 up, 3 in
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.707156181s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.391143799s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.707095146s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.391143799s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.706222534s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390472412s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.706192970s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390472412s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.706655502s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.391159058s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.706615448s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.391159058s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705844879s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390701294s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705801010s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390701294s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.706118584s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.391189575s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705825806s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390991211s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.706040382s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.391189575s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705779076s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390991211s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705280304s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390884399s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705043793s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390701294s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705007553s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390701294s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=52/53 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705206871s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390884399s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705265999s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.391098022s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.705233574s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.391098022s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.704697609s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390625000s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.704626083s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390625000s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.704357147s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 158.390655518s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 55 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=52/53 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55 pruub=14.704332352s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.390655518s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.9( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.1( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=54/55 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:34 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 55 pg[9.11( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:34 compute-0 python3[105178]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:15:34 compute-0 podman[105179]: 2025-11-29 07:15:34.20038587 +0000 UTC m=+0.040190502 container create 68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95 (image=quay.io/ceph/ceph:v18, name=jolly_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:15:34 compute-0 systemd[1]: Started libpod-conmon-68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95.scope.
Nov 29 07:15:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1aa6f0f1774b5fcaaddf87f609988ba2f720c0aafe93736296c0e67e5c4b135/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1aa6f0f1774b5fcaaddf87f609988ba2f720c0aafe93736296c0e67e5c4b135/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:34 compute-0 podman[105179]: 2025-11-29 07:15:34.268385812 +0000 UTC m=+0.108190464 container init 68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95 (image=quay.io/ceph/ceph:v18, name=jolly_hellman, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:15:34 compute-0 podman[105179]: 2025-11-29 07:15:34.275882098 +0000 UTC m=+0.115686740 container start 68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95 (image=quay.io/ceph/ceph:v18, name=jolly_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:15:34 compute-0 podman[105179]: 2025-11-29 07:15:34.183077366 +0000 UTC m=+0.022882088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 07:15:34 compute-0 podman[105179]: 2025-11-29 07:15:34.27962743 +0000 UTC m=+0.119432112 container attach 68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95 (image=quay.io/ceph/ceph:v18, name=jolly_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:15:34 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 29 07:15:34 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 29 07:15:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 07:15:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 07:15:35 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.1b( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.1d( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.d( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.3( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.b( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 56 pg[9.5( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=52/47 les/c/f=53/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:35 compute-0 jolly_hellman[105194]: {
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "user_id": "openstack",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "display_name": "openstack",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "email": "",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "suspended": 0,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "max_buckets": 1000,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "subusers": [],
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "keys": [
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         {
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:             "user": "openstack",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:             "access_key": "HN15E8SPZIA0DTN6QD9Y",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:             "secret_key": "Y7DhZIj1fNpl4xjZtiM6IHnCxn4P3glJuqmn4Ixd"
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         }
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     ],
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "swift_keys": [],
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "caps": [],
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "op_mask": "read, write, delete",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "default_placement": "",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "default_storage_class": "",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "placement_tags": [],
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "bucket_quota": {
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "enabled": false,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "check_on_raw": false,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "max_size": -1,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "max_size_kb": 0,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "max_objects": -1
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     },
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "user_quota": {
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "enabled": false,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "check_on_raw": false,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "max_size": -1,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "max_size_kb": 0,
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:         "max_objects": -1
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     },
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "temp_url_keys": [],
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "type": "rgw",
Nov 29 07:15:35 compute-0 jolly_hellman[105194]:     "mfa_ids": []
Nov 29 07:15:35 compute-0 jolly_hellman[105194]: }
Nov 29 07:15:35 compute-0 jolly_hellman[105194]: 
Nov 29 07:15:35 compute-0 systemd[1]: libpod-68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95.scope: Deactivated successfully.
Nov 29 07:15:35 compute-0 conmon[105194]: conmon 68e68e949ff4508da4a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95.scope/container/memory.events
Nov 29 07:15:35 compute-0 podman[105179]: 2025-11-29 07:15:35.228362434 +0000 UTC m=+1.068167096 container died 68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95 (image=quay.io/ceph/ceph:v18, name=jolly_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1aa6f0f1774b5fcaaddf87f609988ba2f720c0aafe93736296c0e67e5c4b135-merged.mount: Deactivated successfully.
Nov 29 07:15:35 compute-0 podman[105179]: 2025-11-29 07:15:35.328430275 +0000 UTC m=+1.168234937 container remove 68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95 (image=quay.io/ceph/ceph:v18, name=jolly_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:15:35 compute-0 systemd[1]: libpod-conmon-68e68e949ff4508da4a969d1ae210f9ef902529dea2354555e1402d078ae8c95.scope: Deactivated successfully.
Nov 29 07:15:35 compute-0 sudo[105176]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 16 activating+remapped, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 87/213 objects misplaced (40.845%); 241 B/s, 1 objects/s recovering
Nov 29 07:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:15:35 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 29 07:15:35 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 29 07:15:36 compute-0 ceph-mon[75050]: 7.1e scrub starts
Nov 29 07:15:36 compute-0 ceph-mon[75050]: 7.1e scrub ok
Nov 29 07:15:36 compute-0 ceph-mon[75050]: osdmap e56: 3 total, 3 up, 3 in
Nov 29 07:15:36 compute-0 ceph-mon[75050]: pgmap v152: 305 pgs: 16 activating+remapped, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 87/213 objects misplaced (40.845%); 241 B/s, 1 objects/s recovering
Nov 29 07:15:36 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 07:15:36 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 07:15:36 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 07:15:36 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 07:15:37 compute-0 ceph-mon[75050]: 5.18 scrub starts
Nov 29 07:15:37 compute-0 ceph-mon[75050]: 5.18 scrub ok
Nov 29 07:15:37 compute-0 ceph-mon[75050]: 6.f scrub starts
Nov 29 07:15:37 compute-0 ceph-mon[75050]: 6.f scrub ok
Nov 29 07:15:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 16 activating+remapped, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 87/213 objects misplaced (40.845%); 177 B/s, 1 objects/s recovering
Nov 29 07:15:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:38 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 29 07:15:38 compute-0 ceph-mon[75050]: 5.3 scrub starts
Nov 29 07:15:38 compute-0 ceph-mon[75050]: 5.3 scrub ok
Nov 29 07:15:38 compute-0 ceph-mon[75050]: pgmap v153: 305 pgs: 16 activating+remapped, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 87/213 objects misplaced (40.845%); 177 B/s, 1 objects/s recovering
Nov 29 07:15:38 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 29 07:15:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 29 07:15:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 29 07:15:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s; 349 B/s, 13 objects/s recovering
Nov 29 07:15:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 07:15:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:15:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 07:15:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:15:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 07:15:39 compute-0 ceph-mon[75050]: 5.1a scrub starts
Nov 29 07:15:39 compute-0 ceph-mon[75050]: 5.1a scrub ok
Nov 29 07:15:39 compute-0 ceph-mon[75050]: 4.e scrub starts
Nov 29 07:15:39 compute-0 ceph-mon[75050]: 4.e scrub ok
Nov 29 07:15:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 07:15:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 07:15:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 29 07:15:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 29 07:15:40 compute-0 ceph-mon[75050]: pgmap v154: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s; 349 B/s, 13 objects/s recovering
Nov 29 07:15:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 07:15:40 compute-0 ceph-mon[75050]: osdmap e57: 3 total, 3 up, 3 in
Nov 29 07:15:40 compute-0 ceph-mon[75050]: 4.13 scrub starts
Nov 29 07:15:40 compute-0 ceph-mon[75050]: 4.13 scrub ok
Nov 29 07:15:40 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 29 07:15:40 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 29 07:15:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1 op/s; 302 B/s, 11 objects/s recovering
Nov 29 07:15:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 07:15:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 07:15:41 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 07:15:41 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 07:15:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 07:15:41 compute-0 ceph-mon[75050]: 4.a scrub starts
Nov 29 07:15:41 compute-0 ceph-mon[75050]: 4.a scrub ok
Nov 29 07:15:41 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 07:15:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 07:15:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 07:15:41 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 07:15:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
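_set_new_cache_sizes is the monitor re-dividing its memory budget between the incremental/full osdmap caches and the RocksDB cache; the roughly 1 GiB cache_size is presumably derived from the mon_memory_target option, though that is an inference rather than something this log states. The setting can be inspected with:

    $ ceph config get mon mon_memory_target
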
Nov 29 07:15:42 compute-0 ceph-mon[75050]: pgmap v156: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1 op/s; 302 B/s, 11 objects/s recovering
Nov 29 07:15:42 compute-0 ceph-mon[75050]: 5.1d scrub starts
Nov 29 07:15:42 compute-0 ceph-mon[75050]: 5.1d scrub ok
Nov 29 07:15:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 07:15:42 compute-0 ceph-mon[75050]: osdmap e58: 3 total, 3 up, 3 in
Nov 29 07:15:42 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Nov 29 07:15:42 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Nov 29 07:15:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s; 280 B/s, 10 objects/s recovering
Nov 29 07:15:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 07:15:43 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 07:15:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 07:15:43 compute-0 ceph-mon[75050]: 4.1a deep-scrub starts
Nov 29 07:15:43 compute-0 ceph-mon[75050]: 4.1a deep-scrub ok
Nov 29 07:15:43 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 07:15:43 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 07:15:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 07:15:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 07:15:43 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 29 07:15:43 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 29 07:15:43 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 29 07:15:43 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 29 07:15:44 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 29 07:15:44 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 29 07:15:44 compute-0 ceph-mon[75050]: pgmap v158: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s; 280 B/s, 10 objects/s recovering
Nov 29 07:15:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 07:15:44 compute-0 ceph-mon[75050]: osdmap e59: 3 total, 3 up, 3 in
Nov 29 07:15:44 compute-0 ceph-mon[75050]: 5.2 scrub starts
Nov 29 07:15:44 compute-0 ceph-mon[75050]: 5.2 scrub ok
Nov 29 07:15:44 compute-0 ceph-mon[75050]: 4.1b scrub starts
Nov 29 07:15:44 compute-0 ceph-mon[75050]: 4.1b scrub ok
Nov 29 07:15:44 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 29 07:15:44 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 29 07:15:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 07:15:45 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 07:15:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 07:15:45 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 07:15:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 07:15:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.914646149s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 164.180541992s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.914597511s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.180541992s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.915124893s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 164.181533813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.915037155s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.181533813s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.915105820s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 164.182052612s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.914825439s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 164.181854248s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.914759636s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.181854248s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:45 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 60 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=8.915084839s) [2] r=-1 lpr=60 pi=[47,60)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.182052612s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:45 compute-0 ceph-mon[75050]: 5.c scrub starts
Nov 29 07:15:45 compute-0 ceph-mon[75050]: 5.c scrub ok
Nov 29 07:15:45 compute-0 ceph-mon[75050]: 6.8 scrub starts
Nov 29 07:15:45 compute-0 ceph-mon[75050]: 6.8 scrub ok
Nov 29 07:15:45 compute-0 ceph-mon[75050]: pgmap v160: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 07:15:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 60 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60) [2] r=0 lpr=60 pi=[47,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 60 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60) [2] r=0 lpr=60 pi=[47,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 60 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60) [2] r=0 lpr=60 pi=[47,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 60 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=60) [2] r=0 lpr=60 pi=[47,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
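The burst of osd.1/osd.2 entries is the fallout of the pgp_num change: PGs in pool 9 (evidently default.rgw.log) get remapped, so osd.1 runs start_peering_interval and drops to Stray on PGs whose primary moved, while osd.2 comes up as Primary for the same PGs. A few ways to watch this live on the same cluster:

    # PGs currently peering or remapped
    $ ceph pg ls peering
    $ ceph pg ls remapped
    # full peering history for one of the PGs named above
    $ ceph pg 9.16 query
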
Nov 29 07:15:45 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 29 07:15:45 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 29 07:15:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 07:15:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 07:15:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:47 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 07:15:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 07:15:47 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=-1 lpr=61 pi=[47,61)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:47 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 29 07:15:47 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 61 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:48 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 07:15:48 compute-0 ceph-mon[75050]: osdmap e60: 3 total, 3 up, 3 in
Nov 29 07:15:48 compute-0 ceph-mon[75050]: 2.16 scrub starts
Nov 29 07:15:48 compute-0 ceph-mon[75050]: 2.16 scrub ok
Nov 29 07:15:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 07:15:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 07:15:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 07:15:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 07:15:48 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 62 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=61/62 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:48 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 62 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=61/62 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:48 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 62 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=61/62 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:48 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 62 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=61/62 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=61) [2]/[1] async=[2] r=0 lpr=61 pi=[47,61)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
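The [2]/[1] notation in these activation lines is the up set versus the acting set: CRUSH now places the PGs on osd.2, but osd.1 keeps serving them (active+remapped) until backfill completes. Both sets can be read off for any object's PG; obj1 below is a hypothetical object name used only for illustration:

    $ ceph osd map default.rgw.log obj1
    # prints the pgid plus the current up and acting sets
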
Nov 29 07:15:49 compute-0 ceph-mon[75050]: pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:49 compute-0 ceph-mon[75050]: osdmap e61: 3 total, 3 up, 3 in
Nov 29 07:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 07:15:49 compute-0 ceph-mon[75050]: 2.1c scrub starts
Nov 29 07:15:49 compute-0 ceph-mon[75050]: 2.1c scrub ok
Nov 29 07:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 07:15:49 compute-0 ceph-mon[75050]: osdmap e62: 3 total, 3 up, 3 in
Nov 29 07:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 07:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 07:15:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=61/62 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.995763779s) [2] async=[2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 174.029357910s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=61/62 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.995652199s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.029357910s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=61/62 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.995550156s) [2] async=[2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 174.029388428s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=61/62 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.991238594s) [2] async=[2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 174.025161743s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=61/62 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.995431900s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.029388428s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=61/62 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.991153717s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.025161743s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=61/62 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.996225357s) [2] async=[2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 174.030685425s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:49 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 63 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=61/62 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63 pruub=14.996026993s) [2] r=-1 lpr=63 pi=[47,63)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.030685425s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 07:15:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 62 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62 pruub=9.162977219s) [2] r=-1 lpr=62 pi=[55,62)/1 crt=44'385 mlcod 0'0 active pruub 174.005874634s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 63 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62 pruub=9.162899017s) [2] r=-1 lpr=62 pi=[55,62)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 174.005874634s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 62 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62 pruub=9.162454605s) [2] r=-1 lpr=62 pi=[55,62)/1 crt=44'385 mlcod 0'0 active pruub 174.005920410s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 63 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62 pruub=9.162220955s) [2] r=-1 lpr=62 pi=[55,62)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 174.005920410s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62) [2] r=0 lpr=63 pi=[55,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 62 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=8.152269363s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=44'385 mlcod 0'0 active pruub 172.996139526s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 63 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=8.152178764s) [2] r=-1 lpr=62 pi=[54,62)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 172.996139526s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 62 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62 pruub=9.161629677s) [2] r=-1 lpr=62 pi=[55,62)/1 crt=44'385 mlcod 0'0 active pruub 174.005950928s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62) [2] r=0 lpr=63 pi=[55,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:49 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 63 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62 pruub=9.161424637s) [2] r=-1 lpr=62 pi=[55,62)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 174.005950928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=62) [2] r=0 lpr=63 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=62) [2] r=0 lpr=63 pi=[55,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 07:15:50 compute-0 ceph-mon[75050]: osdmap e63: 3 total, 3 up, 3 in
Nov 29 07:15:50 compute-0 ceph-mon[75050]: pgmap v166: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 07:15:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 07:15:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 07:15:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=55/56 n=6 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=54/55 n=6 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=0 lpr=64 pi=[54,64)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 64 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] r=-1 lpr=64 pi=[54,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.6( v 44'385 (0'0,44'385] local-lis/les=63/64 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.e( v 44'385 (0'0,44'385] local-lis/les=63/64 n=6 ec=47/36 lis/c=61/47 les/c/f=62/48/0 sis=63) [2] r=0 lpr=63 pi=[47,63)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:50 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 64 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.837157249s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 172.181945801s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 64 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.837070465s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.181945801s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:50 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 64 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.836515427s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 172.182250977s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:50 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 64 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.836457253s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 172.182250977s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:50 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 64 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:50 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 29 07:15:50 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 29 07:15:50 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 29 07:15:50 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 29 07:15:51 compute-0 sudo[105293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:51 compute-0 sudo[105293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:51 compute-0 sudo[105293]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:51 compute-0 sudo[105318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:15:51 compute-0 sudo[105318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:51 compute-0 sudo[105318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:51 compute-0 sudo[105343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:51 compute-0 sudo[105343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:51 compute-0 sudo[105343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:51 compute-0 sudo[105370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
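This sudo sequence is cephadm's SSH orchestration at work: the ceph-admin user probes the host (/bin/true, which python3) and then runs the checksummed cephadm copy staged under /var/lib/ceph/<fsid>/ with gather-facts, which returns a JSON host inventory to the orchestrator. With the cephadm package installed, the same probe can be run directly; this assumes a system-wide cephadm rather than the staged copy the log shows:

    $ sudo cephadm gather-facts | head
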
Nov 29 07:15:51 compute-0 sshd-session[105351]: Accepted publickey for zuul from 192.168.122.30 port 58448 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:15:51 compute-0 sudo[105370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:51 compute-0 systemd-logind[807]: New session 33 of user zuul.
Nov 29 07:15:51 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 29 07:15:51 compute-0 sshd-session[105351]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 07:15:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 07:15:51 compute-0 ceph-mon[75050]: osdmap e64: 3 total, 3 up, 3 in
Nov 29 07:15:51 compute-0 ceph-mon[75050]: 5.15 scrub starts
Nov 29 07:15:51 compute-0 ceph-mon[75050]: 5.15 scrub ok
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 07:15:51 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:51 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:51 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:51 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:51 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 65 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:51 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 65 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:51 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 65 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:51 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 65 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:51 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 65 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=64/65 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:51 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 65 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=64/65 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:51 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 65 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=64/65 n=6 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[55,64)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:51 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 65 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=64/65 n=6 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=64) [2]/[0] async=[2] r=0 lpr=64 pi=[54,64)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 07:15:51 compute-0 sudo[105370]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 56338ded-804e-482e-979f-21ff2a94c408 does not exist
Nov 29 07:15:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9afb43ab-7083-4399-9ed8-882fb00409c7 does not exist
Nov 29 07:15:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 36fba985-91b0-483d-b4bf-1c25501b5e92 does not exist
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:15:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:15:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
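Interleaved with the pool changes, the mgr is pulling deployment material from the mon: a minimal ceph.conf plus the client.admin and client.bootstrap-osd keyrings, which cephadm distributes to the hosts it is about to configure. The same commands work interactively, assuming admin privileges:

    $ ceph config generate-minimal-conf
    $ ceph auth get client.bootstrap-osd
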
Nov 29 07:15:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 29 07:15:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 29 07:15:51 compute-0 sudo[105481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:51 compute-0 sudo[105481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:51 compute-0 sudo[105481]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:51 compute-0 sudo[105527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:15:51 compute-0 sudo[105527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:51 compute-0 sudo[105527]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:51 compute-0 sudo[105568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:51 compute-0 sudo[105568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:51 compute-0 sudo[105568]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:51 compute-0 sudo[105603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:15:51 compute-0 sudo[105603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
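This is the OSD-creation step: cephadm invokes ceph-volume inside the pinned reef image against three pre-created logical volumes; --no-auto disables batch's automatic device grouping and --no-systemd skips unit generation because cephadm manages the daemons itself. A simplified hand-run equivalent, dropping the orchestrator-only --env and --config-json plumbing and reusing the fsid and LVs from the log:

    $ sudo cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- \
        lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --yes --no-systemd
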
Nov 29 07:15:52 compute-0 podman[105719]: 2025-11-29 07:15:52.262109906 +0000 UTC m=+0.055762548 container create 3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:15:52 compute-0 python3.9[105678]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:15:52 compute-0 systemd[1]: Started libpod-conmon-3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722.scope.
Nov 29 07:15:52 compute-0 podman[105719]: 2025-11-29 07:15:52.229034186 +0000 UTC m=+0.022686928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:15:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:52 compute-0 podman[105719]: 2025-11-29 07:15:52.358281794 +0000 UTC m=+0.151934456 container init 3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:15:52 compute-0 podman[105719]: 2025-11-29 07:15:52.366082776 +0000 UTC m=+0.159735448 container start 3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:15:52 compute-0 podman[105719]: 2025-11-29 07:15:52.370799894 +0000 UTC m=+0.164452536 container attach 3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:15:52 compute-0 vigorous_haibt[105741]: 167 167
Nov 29 07:15:52 compute-0 systemd[1]: libpod-3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722.scope: Deactivated successfully.
Nov 29 07:15:52 compute-0 podman[105719]: 2025-11-29 07:15:52.372027108 +0000 UTC m=+0.165679780 container died 3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac6f45d42f38ad860d12dd589e8f61cd4769b9cf6b4b6778719bb5a156c037d7-merged.mount: Deactivated successfully.
Nov 29 07:15:52 compute-0 podman[105719]: 2025-11-29 07:15:52.40739572 +0000 UTC m=+0.201048362 container remove 3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_haibt, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:15:52 compute-0 systemd[1]: libpod-conmon-3c45abb92badf0046c95d3aeb172f018183b735b892aaf06aaf98b0f6f83b722.scope: Deactivated successfully.
Nov 29 07:15:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 07:15:52 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 07:15:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 07:15:52 compute-0 ceph-mon[75050]: 4.1c deep-scrub starts
Nov 29 07:15:52 compute-0 ceph-mon[75050]: 4.1c deep-scrub ok
Nov 29 07:15:52 compute-0 ceph-mon[75050]: osdmap e65: 3 total, 3 up, 3 in
Nov 29 07:15:52 compute-0 ceph-mon[75050]: pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:15:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 07:15:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:15:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:15:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:15:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:15:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:15:52 compute-0 ceph-mon[75050]: 4.d scrub starts
Nov 29 07:15:52 compute-0 ceph-mon[75050]: 4.d scrub ok
Nov 29 07:15:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=64/54 les/c/f=65/55/0 sis=66) [2] r=0 lpr=66 pi=[54,66)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=64/54 les/c/f=65/55/0 sis=66) [2] r=0 lpr=66 pi=[54,66)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 66 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=64/65 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=14.986852646s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 44'385 active pruub 182.383987427s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=64/65 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=14.986754417s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 182.383987427s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=64/65 n=6 ec=47/36 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.986733437s) [2] async=[2] r=-1 lpr=66 pi=[54,66)/1 crt=44'385 mlcod 44'385 active pruub 182.384078979s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=64/65 n=6 ec=47/36 lis/c=64/54 les/c/f=65/55/0 sis=66 pruub=14.986577034s) [2] r=-1 lpr=66 pi=[54,66)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 182.384078979s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=64/65 n=6 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=14.986386299s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 44'385 active pruub 182.384017944s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=64/65 n=6 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=14.986199379s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 182.384017944s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=64/65 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=14.982217789s) [2] async=[2] r=-1 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 44'385 active pruub 182.380096436s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 66 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=64/65 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66 pruub=14.982178688s) [2] r=-1 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 182.380096436s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:52 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 66 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=65/66 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:52 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 66 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=65/66 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 07:15:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 07:15:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 07:15:52 compute-0 podman[105766]: 2025-11-29 07:15:52.574167089 +0000 UTC m=+0.049304343 container create da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:15:52 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 67 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=65/66 n=6 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.902297974s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 178.107803345s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 67 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=65/66 n=6 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.902129173s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.107803345s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 67 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 67 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 67 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=66/67 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 67 pg[9.7( v 44'385 (0'0,44'385] local-lis/les=66/67 n=6 ec=47/36 lis/c=64/54 les/c/f=65/55/0 sis=66) [2] r=0 lpr=66 pi=[54,66)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 67 pg[9.17( v 44'385 (0'0,44'385] local-lis/les=66/67 n=5 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 67 pg[9.f( v 44'385 (0'0,44'385] local-lis/les=66/67 n=6 ec=47/36 lis/c=64/55 les/c/f=65/56/0 sis=66) [2] r=0 lpr=66 pi=[55,66)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:52 compute-0 systemd[1]: Started libpod-conmon-da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b.scope.
Nov 29 07:15:52 compute-0 podman[105766]: 2025-11-29 07:15:52.552574221 +0000 UTC m=+0.027711485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:15:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b733b695bf9614773e5a495c5ceb6df9e5265582280b06a554218391ebffc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b733b695bf9614773e5a495c5ceb6df9e5265582280b06a554218391ebffc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b733b695bf9614773e5a495c5ceb6df9e5265582280b06a554218391ebffc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b733b695bf9614773e5a495c5ceb6df9e5265582280b06a554218391ebffc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b733b695bf9614773e5a495c5ceb6df9e5265582280b06a554218391ebffc5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:52 compute-0 podman[105766]: 2025-11-29 07:15:52.677316586 +0000 UTC m=+0.152453860 container init da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcclintock, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:15:52 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 29 07:15:52 compute-0 podman[105766]: 2025-11-29 07:15:52.686100605 +0000 UTC m=+0.161237859 container start da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:15:52 compute-0 podman[105766]: 2025-11-29 07:15:52.692371035 +0000 UTC m=+0.167508289 container attach da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcclintock, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:15:52 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 29 07:15:53 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 07:15:53 compute-0 ceph-mon[75050]: osdmap e66: 3 total, 3 up, 3 in
Nov 29 07:15:53 compute-0 ceph-mon[75050]: osdmap e67: 3 total, 3 up, 3 in
Nov 29 07:15:53 compute-0 ceph-mon[75050]: 6.c scrub starts
Nov 29 07:15:53 compute-0 ceph-mon[75050]: 6.c scrub ok
Nov 29 07:15:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 12 objects/s recovering
Nov 29 07:15:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 07:15:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 07:15:53 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 07:15:53 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 68 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=65/66 n=5 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=68 pruub=14.894343376s) [2] async=[2] r=-1 lpr=68 pi=[47,68)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 178.108688354s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:53 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 68 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=65/66 n=5 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=68 pruub=14.894136429s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.108688354s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:15:53 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 68 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:15:53 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 68 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:15:53 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 68 pg[9.8( v 44'385 (0'0,44'385] local-lis/les=67/68 n=6 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:53 compute-0 gracious_mcclintock[105793]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:15:53 compute-0 gracious_mcclintock[105793]: --> relative data size: 1.0
Nov 29 07:15:53 compute-0 gracious_mcclintock[105793]: --> All data devices are unavailable
Nov 29 07:15:53 compute-0 systemd[1]: libpod-da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b.scope: Deactivated successfully.
Nov 29 07:15:53 compute-0 podman[105766]: 2025-11-29 07:15:53.82219987 +0000 UTC m=+1.297337124 container died da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcclintock, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:15:53 compute-0 systemd[1]: libpod-da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b.scope: Consumed 1.086s CPU time.
Nov 29 07:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b733b695bf9614773e5a495c5ceb6df9e5265582280b06a554218391ebffc5-merged.mount: Deactivated successfully.
Nov 29 07:15:53 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 29 07:15:53 compute-0 podman[105766]: 2025-11-29 07:15:53.985858884 +0000 UTC m=+1.460996138 container remove da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:15:53 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 29 07:15:54 compute-0 systemd[1]: libpod-conmon-da572cef9e37668c033334e0a202fb96cc1965659057bda74a32a5828aa6ad1b.scope: Deactivated successfully.
Nov 29 07:15:54 compute-0 sudo[105603]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:54 compute-0 sudo[105882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:54 compute-0 sudo[105882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:54 compute-0 sudo[105882]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:54 compute-0 sudo[105907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:15:54 compute-0 sudo[105907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:54 compute-0 sudo[105907]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:54 compute-0 sudo[105933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:54 compute-0 sudo[105933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:54 compute-0 sudo[105933]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:54 compute-0 sudo[105981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:15:54 compute-0 sudo[105981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 07:15:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 07:15:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 07:15:54 compute-0 ceph-mon[75050]: pgmap v172: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 12 objects/s recovering
Nov 29 07:15:54 compute-0 ceph-mon[75050]: osdmap e68: 3 total, 3 up, 3 in
Nov 29 07:15:54 compute-0 ceph-mon[75050]: 3.18 scrub starts
Nov 29 07:15:54 compute-0 ceph-mon[75050]: 3.18 scrub ok
Nov 29 07:15:54 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 69 pg[9.18( v 44'385 (0'0,44'385] local-lis/les=68/69 n=5 ec=47/36 lis/c=65/47 les/c/f=66/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:15:54 compute-0 podman[106100]: 2025-11-29 07:15:54.623562957 +0000 UTC m=+0.047838623 container create d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:15:54 compute-0 systemd[1]: Started libpod-conmon-d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1.scope.
Nov 29 07:15:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:54 compute-0 podman[106100]: 2025-11-29 07:15:54.603564513 +0000 UTC m=+0.027840209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:15:54 compute-0 podman[106100]: 2025-11-29 07:15:54.704324615 +0000 UTC m=+0.128600311 container init d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:15:54 compute-0 podman[106100]: 2025-11-29 07:15:54.711810779 +0000 UTC m=+0.136086445 container start d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:15:54 compute-0 tender_hamilton[106139]: 167 167
Nov 29 07:15:54 compute-0 systemd[1]: libpod-d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1.scope: Deactivated successfully.
Nov 29 07:15:54 compute-0 podman[106100]: 2025-11-29 07:15:54.720919946 +0000 UTC m=+0.145195632 container attach d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:15:54 compute-0 podman[106100]: 2025-11-29 07:15:54.721819771 +0000 UTC m=+0.146095437 container died d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0803a933490141864a5973e2eee7196af43eb9da2567f76359789aa5d1f0f2a-merged.mount: Deactivated successfully.
Nov 29 07:15:54 compute-0 podman[106100]: 2025-11-29 07:15:54.78278699 +0000 UTC m=+0.207062666 container remove d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:15:54 compute-0 systemd[1]: libpod-conmon-d999e13ba4cc42aef634f2f6370ff7e0c7b67062c16df6acd8c7036c6ef55bc1.scope: Deactivated successfully.
Nov 29 07:15:54 compute-0 sudo[106209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojevlxqeeubfhcgivgcdilinvjjcmrfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400554.3800743-32-57203209088962/AnsiballZ_command.py'
Nov 29 07:15:54 compute-0 sudo[106209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:15:54 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 29 07:15:54 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 29 07:15:54 compute-0 podman[106217]: 2025-11-29 07:15:54.967913077 +0000 UTC m=+0.046142946 container create a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brahmagupta, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:15:55 compute-0 systemd[1]: Started libpod-conmon-a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc.scope.
Nov 29 07:15:55 compute-0 python3.9[106211]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:15:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739f94fe81645a35c7d2b84960dba20d35a93c80d34a05ea1039ddeb43bb928a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739f94fe81645a35c7d2b84960dba20d35a93c80d34a05ea1039ddeb43bb928a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739f94fe81645a35c7d2b84960dba20d35a93c80d34a05ea1039ddeb43bb928a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739f94fe81645a35c7d2b84960dba20d35a93c80d34a05ea1039ddeb43bb928a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:55 compute-0 podman[106217]: 2025-11-29 07:15:54.948460778 +0000 UTC m=+0.026690667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:15:55 compute-0 podman[106217]: 2025-11-29 07:15:55.047506544 +0000 UTC m=+0.125736413 container init a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:15:55 compute-0 podman[106217]: 2025-11-29 07:15:55.056218361 +0000 UTC m=+0.134448240 container start a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brahmagupta, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:15:55 compute-0 podman[106217]: 2025-11-29 07:15:55.058980136 +0000 UTC m=+0.137210015 container attach a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:15:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 12 objects/s recovering
Nov 29 07:15:55 compute-0 ceph-mon[75050]: osdmap e69: 3 total, 3 up, 3 in
Nov 29 07:15:55 compute-0 ceph-mon[75050]: 2.b scrub starts
Nov 29 07:15:55 compute-0 ceph-mon[75050]: 2.b scrub ok
Nov 29 07:15:55 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 07:15:55 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]: {
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:     "0": [
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:         {
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "devices": [
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "/dev/loop3"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             ],
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_name": "ceph_lv0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_size": "21470642176",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "name": "ceph_lv0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "tags": {
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cluster_name": "ceph",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.crush_device_class": "",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.encrypted": "0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osd_id": "0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.type": "block",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.vdo": "0"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             },
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "type": "block",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "vg_name": "ceph_vg0"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:         }
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:     ],
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:     "1": [
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:         {
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "devices": [
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "/dev/loop4"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             ],
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_name": "ceph_lv1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_size": "21470642176",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "name": "ceph_lv1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "tags": {
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cluster_name": "ceph",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.crush_device_class": "",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.encrypted": "0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osd_id": "1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.type": "block",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.vdo": "0"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             },
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "type": "block",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "vg_name": "ceph_vg1"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:         }
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:     ],
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:     "2": [
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:         {
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "devices": [
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "/dev/loop5"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             ],
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_name": "ceph_lv2",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_size": "21470642176",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "name": "ceph_lv2",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "tags": {
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.cluster_name": "ceph",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.crush_device_class": "",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.encrypted": "0",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osd_id": "2",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.type": "block",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:                 "ceph.vdo": "0"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             },
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "type": "block",
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:             "vg_name": "ceph_vg2"
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:         }
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]:     ]
Nov 29 07:15:55 compute-0 naughty_brahmagupta[106233]: }
Nov 29 07:15:55 compute-0 systemd[1]: libpod-a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc.scope: Deactivated successfully.
Nov 29 07:15:55 compute-0 podman[106217]: 2025-11-29 07:15:55.903471466 +0000 UTC m=+0.981701435 container died a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brahmagupta, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-739f94fe81645a35c7d2b84960dba20d35a93c80d34a05ea1039ddeb43bb928a-merged.mount: Deactivated successfully.
Nov 29 07:15:55 compute-0 podman[106217]: 2025-11-29 07:15:55.993919237 +0000 UTC m=+1.072149136 container remove a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:15:56 compute-0 systemd[1]: libpod-conmon-a4fa9695fdac44cd39f419f4740e40f11054a3bc4fc9a2afadc977677a5424cc.scope: Deactivated successfully.
Nov 29 07:15:56 compute-0 sudo[105981]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:56 compute-0 sudo[106262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:56 compute-0 sudo[106262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:56 compute-0 sudo[106262]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:56 compute-0 sudo[106287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:15:56 compute-0 sudo[106287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:56 compute-0 sudo[106287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:56 compute-0 sudo[106312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:56 compute-0 sudo[106312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:56 compute-0 sudo[106312]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:56 compute-0 sudo[106337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:15:56 compute-0 sudo[106337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:56 compute-0 ceph-mon[75050]: pgmap v175: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 12 objects/s recovering
Nov 29 07:15:56 compute-0 ceph-mon[75050]: 4.f scrub starts
Nov 29 07:15:56 compute-0 ceph-mon[75050]: 4.f scrub ok
Nov 29 07:15:56 compute-0 podman[106403]: 2025-11-29 07:15:56.661662678 +0000 UTC m=+0.051121952 container create 453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:15:56 compute-0 systemd[1]: Started libpod-conmon-453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3.scope.
Nov 29 07:15:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:56 compute-0 podman[106403]: 2025-11-29 07:15:56.633171183 +0000 UTC m=+0.022630477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:15:56 compute-0 podman[106403]: 2025-11-29 07:15:56.74624939 +0000 UTC m=+0.135708684 container init 453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shirley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:15:56 compute-0 podman[106403]: 2025-11-29 07:15:56.755117801 +0000 UTC m=+0.144577075 container start 453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:15:56 compute-0 cool_shirley[106420]: 167 167
Nov 29 07:15:56 compute-0 podman[106403]: 2025-11-29 07:15:56.758832042 +0000 UTC m=+0.148291306 container attach 453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shirley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:15:56 compute-0 systemd[1]: libpod-453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3.scope: Deactivated successfully.
Nov 29 07:15:56 compute-0 podman[106403]: 2025-11-29 07:15:56.759517691 +0000 UTC m=+0.148976955 container died 453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-675a40c2a4079e29140e857a4a6806aacd4d45137e3768a912a4158bdfb5bdbd-merged.mount: Deactivated successfully.
Nov 29 07:15:56 compute-0 podman[106403]: 2025-11-29 07:15:56.821420146 +0000 UTC m=+0.210879420 container remove 453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:15:56 compute-0 systemd[1]: libpod-conmon-453ed3f1eb7770683fd6d82b9b15e4a0406885ebefe4a75877139aeb25758eb3.scope: Deactivated successfully.
Nov 29 07:15:56 compute-0 podman[106446]: 2025-11-29 07:15:56.964222212 +0000 UTC m=+0.041075689 container create 6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:15:57 compute-0 systemd[1]: Started libpod-conmon-6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd.scope.
Nov 29 07:15:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3eb5cc7d35a50f21427f9773b34afb6560425595385be89bef36748f3d5226/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3eb5cc7d35a50f21427f9773b34afb6560425595385be89bef36748f3d5226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3eb5cc7d35a50f21427f9773b34afb6560425595385be89bef36748f3d5226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3eb5cc7d35a50f21427f9773b34afb6560425595385be89bef36748f3d5226/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:15:57 compute-0 podman[106446]: 2025-11-29 07:15:56.943142758 +0000 UTC m=+0.019996265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:15:57 compute-0 podman[106446]: 2025-11-29 07:15:57.059511865 +0000 UTC m=+0.136365362 container init 6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:15:57 compute-0 podman[106446]: 2025-11-29 07:15:57.067869622 +0000 UTC m=+0.144723099 container start 6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:15:57 compute-0 podman[106446]: 2025-11-29 07:15:57.07443543 +0000 UTC m=+0.151288937 container attach 6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:15:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 196 B/s, 10 objects/s recovering
Nov 29 07:15:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:15:57 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 29 07:15:57 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 29 07:15:57 compute-0 ceph-mon[75050]: pgmap v176: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 196 B/s, 10 objects/s recovering
Nov 29 07:15:58 compute-0 recursing_noyce[106463]: {
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "osd_id": 2,
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "type": "bluestore"
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:     },
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "osd_id": 1,
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "type": "bluestore"
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:     },
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "osd_id": 0,
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:         "type": "bluestore"
Nov 29 07:15:58 compute-0 recursing_noyce[106463]:     }
Nov 29 07:15:58 compute-0 recursing_noyce[106463]: }
Nov 29 07:15:58 compute-0 systemd[1]: libpod-6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd.scope: Deactivated successfully.
Nov 29 07:15:58 compute-0 podman[106446]: 2025-11-29 07:15:58.098787436 +0000 UTC m=+1.175640913 container died 6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef3eb5cc7d35a50f21427f9773b34afb6560425595385be89bef36748f3d5226-merged.mount: Deactivated successfully.
Nov 29 07:15:58 compute-0 podman[106446]: 2025-11-29 07:15:58.158018777 +0000 UTC m=+1.234872254 container remove 6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:15:58 compute-0 systemd[1]: libpod-conmon-6dc7f8c1c3933c19b8206f6057a6323a9b8046b25a868571da9187be0f1e6cbd.scope: Deactivated successfully.
Nov 29 07:15:58 compute-0 sudo[106337]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:15:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:15:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:58 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4b7dc7da-7416-49b7-9913-4850a0be89fb does not exist
Nov 29 07:15:58 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7a808e3e-1896-4466-b89f-4507b77f9b03 does not exist
Nov 29 07:15:58 compute-0 sudo[106507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:15:58 compute-0 sudo[106507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:58 compute-0 sudo[106507]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:58 compute-0 sudo[106535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:15:58 compute-0 sudo[106535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:15:58 compute-0 sudo[106535]: pam_unix(sudo:session): session closed for user root
Nov 29 07:15:59 compute-0 ceph-mon[75050]: 6.2 scrub starts
Nov 29 07:15:59 compute-0 ceph-mon[75050]: 6.2 scrub ok
Nov 29 07:15:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:15:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 143 B/s, 7 objects/s recovering
Nov 29 07:15:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 07:15:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:15:59 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 07:15:59 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 07:16:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 07:16:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:16:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 07:16:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 29 07:16:00 compute-0 ceph-mon[75050]: pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 143 B/s, 7 objects/s recovering
Nov 29 07:16:00 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 07:16:00 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 29 07:16:00 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 29 07:16:01 compute-0 ceph-mon[75050]: 5.14 scrub starts
Nov 29 07:16:01 compute-0 ceph-mon[75050]: 5.14 scrub ok
Nov 29 07:16:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 07:16:01 compute-0 ceph-mon[75050]: osdmap e70: 3 total, 3 up, 3 in
Nov 29 07:16:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 07:16:01 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:16:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 07:16:02 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:16:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 07:16:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 07:16:02 compute-0 ceph-mon[75050]: 2.13 scrub starts
Nov 29 07:16:02 compute-0 ceph-mon[75050]: 2.13 scrub ok
Nov 29 07:16:02 compute-0 ceph-mon[75050]: pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:02 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 07:16:02 compute-0 sudo[106209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:02 compute-0 sshd-session[105396]: Connection closed by 192.168.122.30 port 58448
Nov 29 07:16:02 compute-0 sshd-session[105351]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:16:02 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 29 07:16:02 compute-0 systemd[1]: session-33.scope: Consumed 8.479s CPU time.
Nov 29 07:16:02 compute-0 systemd-logind[807]: Session 33 logged out. Waiting for processes to exit.
Nov 29 07:16:02 compute-0 systemd-logind[807]: Removed session 33.
Nov 29 07:16:03 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 29 07:16:03 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 29 07:16:03 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 07:16:03 compute-0 ceph-mon[75050]: osdmap e71: 3 total, 3 up, 3 in
Nov 29 07:16:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 07:16:03 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 07:16:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 29 07:16:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 29 07:16:03 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 29 07:16:03 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 29 07:16:04 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 29 07:16:04 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 29 07:16:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 07:16:04 compute-0 ceph-mon[75050]: 7.11 scrub starts
Nov 29 07:16:04 compute-0 ceph-mon[75050]: 7.11 scrub ok
Nov 29 07:16:04 compute-0 ceph-mon[75050]: pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 07:16:04 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 07:16:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 07:16:04 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 07:16:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 07:16:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 07:16:04 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 29 07:16:04 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 29 07:16:05 compute-0 ceph-mon[75050]: 5.f scrub starts
Nov 29 07:16:05 compute-0 ceph-mon[75050]: 5.f scrub ok
Nov 29 07:16:05 compute-0 ceph-mon[75050]: 2.8 scrub starts
Nov 29 07:16:05 compute-0 ceph-mon[75050]: 2.8 scrub ok
Nov 29 07:16:05 compute-0 ceph-mon[75050]: 3.16 scrub starts
Nov 29 07:16:05 compute-0 ceph-mon[75050]: 3.16 scrub ok
Nov 29 07:16:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 07:16:05 compute-0 ceph-mon[75050]: osdmap e72: 3 total, 3 up, 3 in
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:16:05
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'backups']
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 07:16:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:16:05 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 72 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=72 pruub=12.986458778s) [2] r=-1 lpr=72 pi=[47,72)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 188.181732178s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:05 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 72 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=72 pruub=12.986408234s) [2] r=-1 lpr=72 pi=[47,72)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.181732178s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:05 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 72 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=72 pruub=12.986708641s) [2] r=-1 lpr=72 pi=[47,72)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 188.182617188s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:05 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 72 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=72 pruub=12.986650467s) [2] r=-1 lpr=72 pi=[47,72)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.182617188s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:05 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 72 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=72) [2] r=0 lpr=72 pi=[47,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:16:05 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 72 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=72) [2] r=0 lpr=72 pi=[47,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:05 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 29 07:16:05 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 29 07:16:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 07:16:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 07:16:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 07:16:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 07:16:06 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:06 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:06 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:06 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:06 compute-0 ceph-mon[75050]: 2.9 scrub starts
Nov 29 07:16:06 compute-0 ceph-mon[75050]: 2.9 scrub ok
Nov 29 07:16:06 compute-0 ceph-mon[75050]: 2.11 scrub starts
Nov 29 07:16:06 compute-0 ceph-mon[75050]: 2.11 scrub ok
Nov 29 07:16:06 compute-0 ceph-mon[75050]: pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 07:16:06 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 73 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=0 lpr=73 pi=[47,73)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:06 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 73 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=0 lpr=73 pi=[47,73)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:06 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 73 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=0 lpr=73 pi=[47,73)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:06 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 73 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=47/48 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] r=0 lpr=73 pi=[47,73)/1 crt=44'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 29 07:16:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 29 07:16:07 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 07:16:07 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 07:16:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 07:16:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 07:16:07 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 07:16:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 07:16:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 07:16:07 compute-0 ceph-mon[75050]: 2.6 scrub starts
Nov 29 07:16:07 compute-0 ceph-mon[75050]: 2.6 scrub ok
Nov 29 07:16:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 07:16:07 compute-0 ceph-mon[75050]: osdmap e73: 3 total, 3 up, 3 in
Nov 29 07:16:07 compute-0 ceph-mon[75050]: 7.13 scrub starts
Nov 29 07:16:07 compute-0 ceph-mon[75050]: 7.13 scrub ok
Nov 29 07:16:07 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 74 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=73/74 n=5 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[47,73)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:07 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 74 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=73/74 n=6 ec=47/36 lis/c=47/47 les/c/f=48/48/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[47,73)/1 crt=44'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:07 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 07:16:07 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 07:16:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 07:16:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 07:16:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 07:16:08 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 07:16:08 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 75 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=73/74 n=6 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75 pruub=14.803012848s) [2] async=[2] r=-1 lpr=75 pi=[47,75)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 193.165130615s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:08 compute-0 ceph-mon[75050]: 5.19 deep-scrub starts
Nov 29 07:16:08 compute-0 ceph-mon[75050]: 5.19 deep-scrub ok
Nov 29 07:16:08 compute-0 ceph-mon[75050]: osdmap e74: 3 total, 3 up, 3 in
Nov 29 07:16:08 compute-0 ceph-mon[75050]: pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 07:16:08 compute-0 ceph-mon[75050]: 3.17 scrub starts
Nov 29 07:16:08 compute-0 ceph-mon[75050]: 3.17 scrub ok
Nov 29 07:16:08 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 75 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=73/74 n=5 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75 pruub=14.801970482s) [2] async=[2] r=-1 lpr=75 pi=[47,75)/1 crt=44'385 lcod 0'0 mlcod 0'0 active pruub 193.164443970s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:08 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 75 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=73/74 n=5 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75 pruub=14.801815987s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.164443970s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:08 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 75 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=73/74 n=6 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75 pruub=14.802288055s) [2] r=-1 lpr=75 pi=[47,75)/1 crt=44'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.165130615s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:08 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 29 07:16:08 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 75 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:08 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 75 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:08 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 75 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:08 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 75 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=0/0 n=6 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:08 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 29 07:16:08 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 29 07:16:08 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 29 07:16:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7/215 objects misplaced (3.256%); 21 B/s, 0 objects/s recovering
Nov 29 07:16:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 07:16:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:16:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 07:16:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:16:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 07:16:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 07:16:09 compute-0 ceph-mon[75050]: osdmap e75: 3 total, 3 up, 3 in
Nov 29 07:16:09 compute-0 ceph-mon[75050]: 2.7 scrub starts
Nov 29 07:16:09 compute-0 ceph-mon[75050]: 2.7 scrub ok
Nov 29 07:16:09 compute-0 ceph-mon[75050]: 3.15 scrub starts
Nov 29 07:16:09 compute-0 ceph-mon[75050]: 3.15 scrub ok
Nov 29 07:16:09 compute-0 ceph-mon[75050]: pgmap v188: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7/215 objects misplaced (3.256%); 21 B/s, 0 objects/s recovering
Nov 29 07:16:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 07:16:09 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 29 07:16:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 07:16:09 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 29 07:16:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 76 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=75/76 n=5 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:09 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 76 pg[9.c( v 44'385 (0'0,44'385] local-lis/les=75/76 n=6 ec=47/36 lis/c=73/47 les/c/f=74/48/0 sis=75) [2] r=0 lpr=75 pi=[47,75)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 07:16:10 compute-0 ceph-mon[75050]: 6.d scrub starts
Nov 29 07:16:10 compute-0 ceph-mon[75050]: osdmap e76: 3 total, 3 up, 3 in
Nov 29 07:16:10 compute-0 ceph-mon[75050]: 6.d scrub ok
Nov 29 07:16:10 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 29 07:16:10 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 29 07:16:11 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 29 07:16:11 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 29 07:16:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7/215 objects misplaced (3.256%); 21 B/s, 0 objects/s recovering
Nov 29 07:16:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 07:16:11 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 07:16:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 07:16:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 07:16:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 07:16:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 07:16:12 compute-0 ceph-mon[75050]: 3.f scrub starts
Nov 29 07:16:12 compute-0 ceph-mon[75050]: 3.f scrub ok
Nov 29 07:16:12 compute-0 ceph-mon[75050]: 3.11 scrub starts
Nov 29 07:16:12 compute-0 ceph-mon[75050]: 3.11 scrub ok
Nov 29 07:16:12 compute-0 ceph-mon[75050]: pgmap v190: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7/215 objects misplaced (3.256%); 21 B/s, 0 objects/s recovering
Nov 29 07:16:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 07:16:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:13 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 29 07:16:13 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 29 07:16:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 07:16:13 compute-0 ceph-mon[75050]: osdmap e77: 3 total, 3 up, 3 in
Nov 29 07:16:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 07:16:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 07:16:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 07:16:13 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 07:16:13 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 07:16:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 29 07:16:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 29 07:16:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 07:16:14 compute-0 ceph-mon[75050]: 7.1c deep-scrub starts
Nov 29 07:16:14 compute-0 ceph-mon[75050]: 7.1c deep-scrub ok
Nov 29 07:16:14 compute-0 ceph-mon[75050]: pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 07:16:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 07:16:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 07:16:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 07:16:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:16:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:16:14 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 07:16:14 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 07:16:15 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 29 07:16:15 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 29 07:16:15 compute-0 ceph-mon[75050]: 2.a scrub starts
Nov 29 07:16:15 compute-0 ceph-mon[75050]: 2.a scrub ok
Nov 29 07:16:15 compute-0 ceph-mon[75050]: 7.15 scrub starts
Nov 29 07:16:15 compute-0 ceph-mon[75050]: 7.15 scrub ok
Nov 29 07:16:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 07:16:15 compute-0 ceph-mon[75050]: osdmap e78: 3 total, 3 up, 3 in
Nov 29 07:16:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 29 07:16:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 07:16:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 07:16:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 07:16:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 07:16:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 07:16:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 07:16:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 07:16:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
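
Underneath the scrub noise, one slow-motion change runs through this whole window: the autoscaler has raised pgp_num for the default.rgw.log pool, and the mgr walks pgp_num_actual toward it one step per cycle (18 at the top of this excerpt, 26 near the bottom). Each step is an audited mon_command that appears twice in the mon log (dispatch, then finished) and produces a new osdmap epoch. A sketch of issuing the same command through the python-rados binding; the conffile path and an available admin keyring are assumptions:

    import json
    import rados

    # Send the same mon_command the mgr issues above; requires a client
    # with sufficient caps (e.g. client.admin) -- an assumption here.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd pool set", "pool": "default.rgw.log",
                      "var": "pgp_num_actual", "val": "19"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)  # ret 0 on success; the mon then logs dispatch/finished
    cluster.shutdown()
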
Nov 29 07:16:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 29 07:16:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 29 07:16:16 compute-0 ceph-mon[75050]: 2.d scrub starts
Nov 29 07:16:16 compute-0 ceph-mon[75050]: 2.d scrub ok
Nov 29 07:16:16 compute-0 ceph-mon[75050]: 7.a scrub starts
Nov 29 07:16:16 compute-0 ceph-mon[75050]: 7.a scrub ok
Nov 29 07:16:16 compute-0 ceph-mon[75050]: pgmap v194: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 29 07:16:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 07:16:16 compute-0 ceph-mon[75050]: 3.12 scrub starts
Nov 29 07:16:16 compute-0 ceph-mon[75050]: 3.12 scrub ok
Nov 29 07:16:17 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 07:16:17 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 07:16:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 29 07:16:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 07:16:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 07:16:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:17 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 07:16:17 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 29 07:16:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 07:16:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 07:16:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 07:16:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 07:16:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 07:16:17 compute-0 ceph-mon[75050]: osdmap e79: 3 total, 3 up, 3 in
Nov 29 07:16:17 compute-0 ceph-mon[75050]: 5.16 scrub starts
Nov 29 07:16:17 compute-0 ceph-mon[75050]: 5.16 scrub ok
Nov 29 07:16:17 compute-0 ceph-mon[75050]: 7.1 scrub starts
Nov 29 07:16:17 compute-0 ceph-mon[75050]: 7.1 scrub ok
Nov 29 07:16:17 compute-0 ceph-mon[75050]: pgmap v196: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 29 07:16:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 07:16:17 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 80 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=54/55 n=5 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=12.076891899s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=44'385 mlcod 0'0 active pruub 205.001556396s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:17 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 80 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=54/55 n=5 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=80 pruub=12.076834679s) [2] r=-1 lpr=80 pi=[54,80)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 205.001556396s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:17 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 80 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=80) [2] r=0 lpr=80 pi=[54,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
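
Each pgp_num_actual step remaps a few placement groups, and the osd lines above show the handoff for pg 9.13: start_peering_interval records up/acting moving from [0] to [2], the outgoing osd.0 transitions to Stray, osd.2 takes over as Primary, and once peering completes the PG logs "AllReplicasActivated ... Activating complete" (as at epoch 82 further down). A minimal parser for these lines, with the pattern inferred only from the log format shown here:

    import re

    # Field names are descriptive, not ceph's own; the pattern is inferred
    # from the start_peering_interval lines in this log.
    PEERING = re.compile(
        r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*start_peering_interval"
        r" up \[(?P<up_old>[^\]]*)\] -> \[(?P<up_new>[^\]]*)\],"
        r" acting \[(?P<acting_old>[^\]]*)\] -> \[(?P<acting_new>[^\]]*)\]"
    )

    line = ("osd.0 pg_epoch: 80 pg[9.13( v 44'385 ... mbc={}]"
            " start_peering_interval up [0] -> [2], acting [0] -> [2],"
            " acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1")
    m = PEERING.search(line)
    print(m.group("pgid"), "acting", m.group("acting_old"), "->",
          m.group("acting_new"))   # 9.13 acting 0 -> 2
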
Nov 29 07:16:18 compute-0 sshd-session[106600]: Accepted publickey for zuul from 192.168.122.30 port 46094 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:16:18 compute-0 systemd-logind[807]: New session 34 of user zuul.
Nov 29 07:16:18 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 29 07:16:18 compute-0 sshd-session[106600]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:16:18 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 07:16:18 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 07:16:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 07:16:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 07:16:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 07:16:19 compute-0 ceph-mon[75050]: 7.4 scrub starts
Nov 29 07:16:19 compute-0 ceph-mon[75050]: 7.4 scrub ok
Nov 29 07:16:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 07:16:19 compute-0 ceph-mon[75050]: osdmap e80: 3 total, 3 up, 3 in
Nov 29 07:16:19 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 81 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=54/55 n=5 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[0] r=0 lpr=81 pi=[54,81)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:19 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 81 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=54/55 n=5 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[0] r=0 lpr=81 pi=[54,81)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:19 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 81 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[0] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:19 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 81 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[0] r=-1 lpr=81 pi=[54,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:19 compute-0 python3.9[106753]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 07:16:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 07:16:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 07:16:19 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Nov 29 07:16:19 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Nov 29 07:16:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 29 07:16:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 29 07:16:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 07:16:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 07:16:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 07:16:20 compute-0 python3.9[106927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 07:16:20 compute-0 ceph-mon[75050]: 2.15 scrub starts
Nov 29 07:16:20 compute-0 ceph-mon[75050]: 2.15 scrub ok
Nov 29 07:16:20 compute-0 ceph-mon[75050]: osdmap e81: 3 total, 3 up, 3 in
Nov 29 07:16:20 compute-0 ceph-mon[75050]: pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 07:16:20 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 29 07:16:20 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 29 07:16:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 07:16:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 07:16:21 compute-0 sudo[107081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrcziuccwtwfnvresrnikppwtatnvpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400581.0310745-45-239259477954676/AnsiballZ_command.py'
Nov 29 07:16:21 compute-0 sudo[107081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 07:16:21 compute-0 python3.9[107083]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:16:21 compute-0 sudo[107081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:21 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 07:16:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 82 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=81/82 n=5 ec=47/36 lis/c=54/54 les/c/f=55/55/0 sis=81) [2]/[0] async=[2] r=0 lpr=81 pi=[54,81)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:22 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 07:16:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 07:16:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 07:16:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 07:16:22 compute-0 ceph-mon[75050]: 5.9 deep-scrub starts
Nov 29 07:16:22 compute-0 ceph-mon[75050]: 5.9 deep-scrub ok
Nov 29 07:16:22 compute-0 ceph-mon[75050]: 3.e scrub starts
Nov 29 07:16:22 compute-0 ceph-mon[75050]: 3.e scrub ok
Nov 29 07:16:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 07:16:22 compute-0 ceph-mon[75050]: osdmap e82: 3 total, 3 up, 3 in
Nov 29 07:16:22 compute-0 ceph-mon[75050]: 5.13 scrub starts
Nov 29 07:16:22 compute-0 ceph-mon[75050]: 5.13 scrub ok
Nov 29 07:16:22 compute-0 ceph-mon[75050]: pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 07:16:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 83 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=83 pruub=8.614912987s) [1] r=-1 lpr=83 pi=[55,83)/1 crt=44'385 mlcod 0'0 active pruub 206.006668091s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 83 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=83 pruub=8.614855766s) [1] r=-1 lpr=83 pi=[55,83)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 206.006668091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:22 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 83 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=83) [1] r=0 lpr=83 pi=[55,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:22 compute-0 sudo[107234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdufhdkgskpwzygowhutfktjubksgfaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400582.0490894-57-108563971578908/AnsiballZ_stat.py'
Nov 29 07:16:22 compute-0 sudo[107234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 07:16:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 07:16:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 07:16:22 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 84 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=-1 lpr=84 pi=[55,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:22 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 84 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=-1 lpr=84 pi=[55,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 84 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=0 lpr=84 pi=[55,84)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 84 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] r=0 lpr=84 pi=[55,84)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 84 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=81/82 n=5 ec=47/36 lis/c=81/54 les/c/f=82/55/0 sis=84 pruub=15.595026016s) [2] async=[2] r=-1 lpr=84 pi=[54,84)/1 crt=44'385 mlcod 44'385 active pruub 213.128540039s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:22 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 84 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=81/82 n=5 ec=47/36 lis/c=81/54 les/c/f=82/55/0 sis=84 pruub=15.594947815s) [2] r=-1 lpr=84 pi=[54,84)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 213.128540039s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:22 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 84 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=81/54 les/c/f=82/55/0 sis=84) [2] r=0 lpr=84 pi=[54,84)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:22 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 84 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=81/54 les/c/f=82/55/0 sis=84) [2] r=0 lpr=84 pi=[54,84)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:22 compute-0 python3.9[107236]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:16:22 compute-0 sudo[107234]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:22 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 07:16:22 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 07:16:23 compute-0 sudo[107388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbtualakodbfdjvxgxaozobzrhxzmnsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400582.9944448-68-34759274654038/AnsiballZ_file.py'
Nov 29 07:16:23 compute-0 sudo[107388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Nov 29 07:16:23 compute-0 python3.9[107390]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:16:23 compute-0 sudo[107388]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:23 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 29 07:16:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 07:16:24 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 29 07:16:24 compute-0 sudo[107540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjugrwymzpxklcybegtnnfhtfeapeypq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400584.07798-77-261062889823316/AnsiballZ_file.py'
Nov 29 07:16:24 compute-0 sudo[107540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:24 compute-0 ceph-mon[75050]: 7.2 scrub starts
Nov 29 07:16:24 compute-0 ceph-mon[75050]: 7.2 scrub ok
Nov 29 07:16:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 07:16:24 compute-0 ceph-mon[75050]: osdmap e83: 3 total, 3 up, 3 in
Nov 29 07:16:24 compute-0 ceph-mon[75050]: osdmap e84: 3 total, 3 up, 3 in
Nov 29 07:16:24 compute-0 python3.9[107542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:16:24 compute-0 sudo[107540]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:24 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 29 07:16:24 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 29 07:16:25 compute-0 python3.9[107692]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:16:25 compute-0 network[107709]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:16:25 compute-0 network[107710]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:16:25 compute-0 network[107711]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:16:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:16:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 07:16:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 07:16:25 compute-0 ceph-mon[75050]: 7.f scrub starts
Nov 29 07:16:25 compute-0 ceph-mon[75050]: 7.f scrub ok
Nov 29 07:16:25 compute-0 ceph-mon[75050]: pgmap v204: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Nov 29 07:16:25 compute-0 ceph-mon[75050]: 3.c scrub starts
Nov 29 07:16:25 compute-0 ceph-mon[75050]: 3.c scrub ok
Nov 29 07:16:25 compute-0 ceph-mon[75050]: 5.1 scrub starts
Nov 29 07:16:25 compute-0 ceph-mon[75050]: 5.1 scrub ok
Nov 29 07:16:25 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 85 pg[9.13( v 44'385 (0'0,44'385] local-lis/les=84/85 n=5 ec=47/36 lis/c=81/54 les/c/f=82/55/0 sis=84) [2] r=0 lpr=84 pi=[54,84)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:25 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 85 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=84/85 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[55,84)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 07:16:26 compute-0 ceph-mon[75050]: pgmap v205: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:16:26 compute-0 ceph-mon[75050]: osdmap e85: 3 total, 3 up, 3 in
Nov 29 07:16:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 07:16:26 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 07:16:26 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 86 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=84/85 n=5 ec=47/36 lis/c=84/55 les/c/f=85/56/0 sis=86 pruub=15.077945709s) [1] async=[1] r=-1 lpr=86 pi=[55,86)/1 crt=44'385 mlcod 44'385 active pruub 216.621963501s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:26 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 86 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=84/85 n=5 ec=47/36 lis/c=84/55 les/c/f=85/56/0 sis=86 pruub=15.077850342s) [1] r=-1 lpr=86 pi=[55,86)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 216.621963501s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:26 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 86 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=84/55 les/c/f=85/56/0 sis=86) [1] r=0 lpr=86 pi=[55,86)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:26 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 86 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=84/55 les/c/f=85/56/0 sis=86) [1] r=0 lpr=86 pi=[55,86)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:26 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 29 07:16:26 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 29 07:16:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Nov 29 07:16:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 07:16:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 07:16:27 compute-0 ceph-mon[75050]: osdmap e86: 3 total, 3 up, 3 in
Nov 29 07:16:27 compute-0 ceph-mon[75050]: 2.17 scrub starts
Nov 29 07:16:27 compute-0 ceph-mon[75050]: 2.17 scrub ok
Nov 29 07:16:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 07:16:27 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 87 pg[9.15( v 44'385 (0'0,44'385] local-lis/les=86/87 n=5 ec=47/36 lis/c=84/55 les/c/f=85/56/0 sis=86) [1] r=0 lpr=86 pi=[55,86)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:28 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 07:16:28 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 07:16:29 compute-0 ceph-mon[75050]: pgmap v208: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Nov 29 07:16:29 compute-0 ceph-mon[75050]: osdmap e87: 3 total, 3 up, 3 in
Nov 29 07:16:29 compute-0 ceph-mon[75050]: 7.8 scrub starts
Nov 29 07:16:29 compute-0 ceph-mon[75050]: 7.8 scrub ok
Nov 29 07:16:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:16:30 compute-0 python3.9[107971]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
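
The lineinfile task above points at /proc/cmdline, which is not writable; used with state=present it serves (presumably under check mode) as an assertion that cloud-init=disabled is already on the kernel command line rather than as an edit. The equivalent check in plain Python, illustrative only:

    # Read the kernel command line and test for the cloud-init kill switch.
    with open("/proc/cmdline") as f:
        args = f.read().split()
    print("cloud-init=disabled" in args)
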
Nov 29 07:16:30 compute-0 ceph-mon[75050]: pgmap v210: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:16:31 compute-0 python3.9[108121]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:16:32 compute-0 python3.9[108275]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:16:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:32 compute-0 ceph-mon[75050]: pgmap v211: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 07:16:33 compute-0 sudo[108431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrmplsjshfntbrlvdzjkmqfccxhtmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400592.7745736-125-111149467516454/AnsiballZ_setup.py'
Nov 29 07:16:33 compute-0 sudo[108431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:33 compute-0 python3.9[108433]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:16:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:16:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 07:16:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:33 compute-0 sudo[108431]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:34 compute-0 sudo[108515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seytvdzaviugujksidiaxklpnxsxylvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400592.7745736-125-111149467516454/AnsiballZ_dnf.py'
Nov 29 07:16:34 compute-0 sudo[108515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:16:34 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 07:16:34 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 07:16:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 29 07:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:16:36 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 07:16:36 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 07:16:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 07:16:37 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 29 07:16:38 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 07:16:38 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 07:16:38 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 29 07:16:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 07:16:39 compute-0 python3.9[108517]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
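
The dnf task above installs a fixed package set with the module defaults (state=present, weak dependencies enabled, GPG checks on). Outside Ansible the same result could be had with a single dnf call; the package list below is copied verbatim from the invocation above (illustrative, not the role's code):

    import subprocess

    packages = [
        "driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager",
        "openstack-selinux", "python3-libselinux", "python3-pyyaml", "rsync",
        "tmpwatch", "sysstat", "iproute-tc", "ksmtuned", "systemd-container",
        "crypto-policies-scripts", "grubby", "sos",
    ]
    # state=present means "install if missing"; -y mirrors the
    # non-interactive module behaviour.
    subprocess.run(["dnf", "install", "-y", *packages], check=True)
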
Nov 29 07:16:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 07:16:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 07:16:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 07:16:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 07:16:39 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 29 07:16:39 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 29 07:16:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 07:16:39 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 29 07:16:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 07:16:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:39 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 07:16:39 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 29 07:16:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 07:16:39 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 88 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=88 pruub=14.651585579s) [0] r=-1 lpr=88 pi=[63,88)/1 crt=44'385 mlcod 0'0 active pruub 217.958160400s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:39 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 88 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=88 pruub=14.651316643s) [0] r=-1 lpr=88 pi=[63,88)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 217.958160400s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:39 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 88 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=88) [0] r=0 lpr=88 pi=[63,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:40 compute-0 ceph-mon[75050]: pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 6.6 scrub starts
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 6.6 scrub ok
Nov 29 07:16:40 compute-0 ceph-mon[75050]: pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 7.3 scrub starts
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 7.3 scrub ok
Nov 29 07:16:40 compute-0 ceph-mon[75050]: pgmap v214: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 3.9 scrub starts
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 7.c scrub starts
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 7.c scrub ok
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 6.4 scrub starts
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 7.e scrub starts
Nov 29 07:16:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 7.e scrub ok
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 6.4 scrub ok
Nov 29 07:16:40 compute-0 ceph-mon[75050]: 3.9 scrub ok
Nov 29 07:16:40 compute-0 ceph-mon[75050]: pgmap v215: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 07:16:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 07:16:40 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:40 compute-0 ceph-mon[75050]: osdmap e88: 3 total, 3 up, 3 in
Nov 29 07:16:40 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 29 07:16:40 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 29 07:16:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 07:16:41 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 29 07:16:41 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 29 07:16:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 07:16:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 07:16:41 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 29 07:16:41 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 29 07:16:41 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 29 07:16:41 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 29 07:16:42 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:42 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:42 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 07:16:42 compute-0 ceph-mon[75050]: 4.7 scrub starts
Nov 29 07:16:42 compute-0 ceph-mon[75050]: 4.7 scrub ok
Nov 29 07:16:42 compute-0 ceph-mon[75050]: 3.a scrub starts
Nov 29 07:16:42 compute-0 ceph-mon[75050]: 3.a scrub ok
Nov 29 07:16:42 compute-0 ceph-mon[75050]: 3.1d scrub starts
Nov 29 07:16:42 compute-0 ceph-mon[75050]: 3.1d scrub ok
Nov 29 07:16:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 07:16:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 07:16:42 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 89 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=89) [0]/[2] r=-1 lpr=89 pi=[63,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:42 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 89 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=89) [0]/[2] r=-1 lpr=89 pi=[63,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:42 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 29 07:16:42 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 89 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=89) [0]/[2] r=0 lpr=89 pi=[63,89)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:42 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 89 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=89) [0]/[2] r=0 lpr=89 pi=[63,89)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:43 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 29 07:16:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 07:16:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 07:16:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 07:16:44 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.1e deep-scrub starts
Nov 29 07:16:44 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 3.1e deep-scrub ok
Nov 29 07:16:44 compute-0 ceph-mon[75050]: pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:44 compute-0 ceph-mon[75050]: 4.9 scrub starts
Nov 29 07:16:44 compute-0 ceph-mon[75050]: 4.9 scrub ok
Nov 29 07:16:44 compute-0 ceph-mon[75050]: 3.1b scrub starts
Nov 29 07:16:44 compute-0 ceph-mon[75050]: 3.1b scrub ok
Nov 29 07:16:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 07:16:44 compute-0 ceph-mon[75050]: osdmap e89: 3 total, 3 up, 3 in
Nov 29 07:16:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 07:16:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:45 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 07:16:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:45 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 07:16:45 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 29 07:16:45 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 07:16:46 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 07:16:46 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 29 07:16:46 compute-0 ceph-mon[75050]: 7.1f scrub starts
Nov 29 07:16:46 compute-0 ceph-mon[75050]: 7.1f scrub ok
Nov 29 07:16:46 compute-0 ceph-mon[75050]: pgmap v219: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:46 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 07:16:46 compute-0 ceph-mon[75050]: 3.1e deep-scrub starts
Nov 29 07:16:46 compute-0 ceph-mon[75050]: 3.1e deep-scrub ok
Nov 29 07:16:46 compute-0 ceph-mon[75050]: osdmap e90: 3 total, 3 up, 3 in
Nov 29 07:16:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 90 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=89/90 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=89) [0]/[2] async=[0] r=0 lpr=89 pi=[63,89)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:47 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 29 07:16:47 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 29 07:16:48 compute-0 ceph-mon[75050]: 7.5 scrub starts
Nov 29 07:16:48 compute-0 ceph-mon[75050]: pgmap v221: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:48 compute-0 ceph-mon[75050]: 7.5 scrub ok
Nov 29 07:16:48 compute-0 ceph-mon[75050]: 6.e scrub starts
Nov 29 07:16:48 compute-0 ceph-mon[75050]: 7.9 scrub starts
Nov 29 07:16:48 compute-0 ceph-mon[75050]: 7.9 scrub ok
Nov 29 07:16:48 compute-0 ceph-mon[75050]: 6.e scrub ok
Nov 29 07:16:48 compute-0 ceph-mon[75050]: pgmap v222: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 07:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 07:16:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2/215 objects misplaced (0.930%); 0 B/s, 0 objects/s recovering
Nov 29 07:16:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 07:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 07:16:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 07:16:49 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 07:16:49 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 07:16:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 91 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=89/90 n=5 ec=47/36 lis/c=89/63 les/c/f=90/64/0 sis=91 pruub=13.220364571s) [0] async=[0] r=-1 lpr=91 pi=[63,91)/1 crt=44'385 mlcod 44'385 active pruub 226.572723389s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:49 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 91 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=89/90 n=5 ec=47/36 lis/c=89/63 les/c/f=90/64/0 sis=91 pruub=13.220275879s) [0] r=-1 lpr=91 pi=[63,91)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 226.572723389s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:50 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Nov 29 07:16:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 07:16:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 91 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=89/63 les/c/f=90/64/0 sis=91) [0] r=0 lpr=91 pi=[63,91)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:50 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 91 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=89/63 les/c/f=90/64/0 sis=91) [0] r=0 lpr=91 pi=[63,91)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:50 compute-0 ceph-mon[75050]: 4.8 scrub starts
Nov 29 07:16:50 compute-0 ceph-mon[75050]: 4.8 scrub ok
Nov 29 07:16:50 compute-0 ceph-mon[75050]: pgmap v223: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2/215 objects misplaced (0.930%); 0 B/s, 0 objects/s recovering
Nov 29 07:16:50 compute-0 ceph-mon[75050]: osdmap e91: 3 total, 3 up, 3 in
Nov 29 07:16:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 07:16:50 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Nov 29 07:16:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 07:16:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 07:16:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 07:16:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2/215 objects misplaced (0.930%); 0 B/s, 0 objects/s recovering
Nov 29 07:16:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 07:16:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 07:16:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 07:16:52 compute-0 ceph-mon[75050]: 4.12 scrub starts
Nov 29 07:16:52 compute-0 ceph-mon[75050]: 4.12 scrub ok
Nov 29 07:16:52 compute-0 ceph-mon[75050]: 7.1a deep-scrub starts
Nov 29 07:16:52 compute-0 ceph-mon[75050]: 7.1a deep-scrub ok
Nov 29 07:16:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 07:16:52 compute-0 ceph-mon[75050]: osdmap e92: 3 total, 3 up, 3 in
Nov 29 07:16:52 compute-0 ceph-mon[75050]: pgmap v226: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2/215 objects misplaced (0.930%); 0 B/s, 0 objects/s recovering
Nov 29 07:16:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 07:16:52 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 07:16:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 07:16:52 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Nov 29 07:16:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 07:16:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 93 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=93 pruub=10.831590652s) [2] r=-1 lpr=93 pi=[55,93)/1 crt=44'385 mlcod 0'0 active pruub 238.007217407s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 93 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=93 pruub=10.831338882s) [2] r=-1 lpr=93 pi=[55,93)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 238.007217407s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:52 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 93 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=93) [2] r=0 lpr=93 pi=[55,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:52 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Nov 29 07:16:52 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 93 pg[9.16( v 44'385 (0'0,44'385] local-lis/les=91/93 n=5 ec=47/36 lis/c=89/63 les/c/f=90/64/0 sis=91) [0] r=0 lpr=91 pi=[63,91)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:52 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.14 deep-scrub starts
Nov 29 07:16:52 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.14 deep-scrub ok
Nov 29 07:16:52 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 07:16:52 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 07:16:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 07:16:53 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 07:16:53 compute-0 ceph-mon[75050]: 10.3 deep-scrub starts
Nov 29 07:16:53 compute-0 ceph-mon[75050]: osdmap e93: 3 total, 3 up, 3 in
Nov 29 07:16:53 compute-0 ceph-mon[75050]: 10.3 deep-scrub ok
Nov 29 07:16:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 07:16:53 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 29 07:16:53 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 94 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=94) [2]/[0] r=0 lpr=94 pi=[55,94)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:53 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 94 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=55/56 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=94) [2]/[0] r=0 lpr=94 pi=[55,94)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:53 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 94 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[55,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:53 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 94 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[55,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:54 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 07:16:54 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 07:16:54 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Nov 29 07:16:54 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Nov 29 07:16:55 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 29 07:16:55 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 29 07:16:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:55 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 29 07:16:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 07:16:55 compute-0 ceph-mon[75050]: 4.14 deep-scrub starts
Nov 29 07:16:55 compute-0 ceph-mon[75050]: 4.14 deep-scrub ok
Nov 29 07:16:55 compute-0 ceph-mon[75050]: 7.18 scrub starts
Nov 29 07:16:55 compute-0 ceph-mon[75050]: 7.18 scrub ok
Nov 29 07:16:55 compute-0 ceph-mon[75050]: osdmap e94: 3 total, 3 up, 3 in
Nov 29 07:16:55 compute-0 ceph-mon[75050]: pgmap v229: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:55 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 29 07:16:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 07:16:55 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 07:16:56 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 29 07:16:56 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 4.10 scrub starts
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 4.10 scrub ok
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 7.6 deep-scrub starts
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 7.6 deep-scrub ok
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 10.5 scrub starts
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 10.5 scrub ok
Nov 29 07:16:56 compute-0 ceph-mon[75050]: pgmap v230: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 6.1 scrub starts
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 6.1 scrub ok
Nov 29 07:16:56 compute-0 ceph-mon[75050]: osdmap e95: 3 total, 3 up, 3 in
Nov 29 07:16:56 compute-0 ceph-mon[75050]: 10.a scrub starts
Nov 29 07:16:56 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 95 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=94/95 n=5 ec=47/36 lis/c=55/55 les/c/f=56/56/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[55,94)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:16:57 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 29 07:16:57 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 29 07:16:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 07:16:58 compute-0 sudo[108587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:58 compute-0 sudo[108587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:58 compute-0 sudo[108587]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:58 compute-0 sudo[108612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:16:58 compute-0 sudo[108612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:58 compute-0 sudo[108612]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:58 compute-0 sudo[108637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:58 compute-0 sudo[108637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:58 compute-0 sudo[108637]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:58 compute-0 sudo[108662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:16:58 compute-0 sudo[108662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:59 compute-0 sudo[108662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 07:16:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Nov 29 07:16:59 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 07:16:59 compute-0 ceph-mon[75050]: 10.a scrub ok
Nov 29 07:16:59 compute-0 ceph-mon[75050]: 10.c scrub starts
Nov 29 07:16:59 compute-0 ceph-mon[75050]: 10.c scrub ok
Nov 29 07:16:59 compute-0 ceph-mon[75050]: pgmap v232: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:16:59 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 07:16:59 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 96 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=94/55 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[55,96)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:59 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 96 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=94/55 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[55,96)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:16:59 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 96 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=94/95 n=5 ec=47/36 lis/c=94/55 les/c/f=95/56/0 sis=96 pruub=13.213771820s) [2] async=[2] r=-1 lpr=96 pi=[55,96)/1 crt=44'385 mlcod 44'385 active pruub 247.928024292s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:16:59 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 96 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=94/95 n=5 ec=47/36 lis/c=94/55 les/c/f=95/56/0 sis=96 pruub=13.213641167s) [2] r=-1 lpr=96 pi=[55,96)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 247.928024292s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:16:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 485c6045-684b-4413-a78d-361e6020c037 does not exist
Nov 29 07:16:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b9e5e4e3-66f4-461c-aef8-a6b377948590 does not exist
Nov 29 07:16:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7939ee84-4394-4819-9a10-79a7aeaec35e does not exist
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:16:59 compute-0 sudo[108720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:59 compute-0 sudo[108720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:59 compute-0 sudo[108720]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:59 compute-0 sudo[108745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:16:59 compute-0 sudo[108745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:59 compute-0 sudo[108745]: pam_unix(sudo:session): session closed for user root
Nov 29 07:16:59 compute-0 sudo[108770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:16:59 compute-0 sudo[108770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:16:59 compute-0 sudo[108770]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:00 compute-0 sudo[108795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:17:00 compute-0 sudo[108795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:00 compute-0 podman[108858]: 2025-11-29 07:17:00.353840494 +0000 UTC m=+0.027582411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:17:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 07:17:00 compute-0 podman[108858]: 2025-11-29 07:17:00.734032206 +0000 UTC m=+0.407774073 container create e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:17:01 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:17:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 07:17:01 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 07:17:01 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 07:17:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 07:17:01 compute-0 systemd[1]: Started libpod-conmon-e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5.scope.
Nov 29 07:17:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:17:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 2 objects/s recovering
Nov 29 07:17:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 07:17:01 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:17:01 compute-0 podman[108858]: 2025-11-29 07:17:01.873811983 +0000 UTC m=+1.547553940 container init e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:17:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:17:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:17:01 compute-0 ceph-mon[75050]: pgmap v233: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Nov 29 07:17:01 compute-0 ceph-mon[75050]: 6.b scrub starts
Nov 29 07:17:01 compute-0 ceph-mon[75050]: 6.b scrub ok
Nov 29 07:17:01 compute-0 ceph-mon[75050]: osdmap e96: 3 total, 3 up, 3 in
Nov 29 07:17:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 07:17:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:17:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:17:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:17:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:17:01 compute-0 podman[108858]: 2025-11-29 07:17:01.881631495 +0000 UTC m=+1.555373402 container start e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:17:01 compute-0 hungry_leakey[108875]: 167 167
Nov 29 07:17:01 compute-0 systemd[1]: libpod-e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5.scope: Deactivated successfully.
Nov 29 07:17:01 compute-0 podman[108858]: 2025-11-29 07:17:01.903765711 +0000 UTC m=+1.577507658 container attach e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:17:01 compute-0 podman[108858]: 2025-11-29 07:17:01.904995875 +0000 UTC m=+1.578737772 container died e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:17:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 07:17:02 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 07:17:02 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 97 pg[9.19( v 44'385 (0'0,44'385] local-lis/les=96/97 n=5 ec=47/36 lis/c=94/55 les/c/f=95/56/0 sis=96) [2] r=0 lpr=96 pi=[55,96)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:17:02 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 29 07:17:03 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 29 07:17:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 29 07:17:03 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:17:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 07:17:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 134 B/s wr, 4 op/s; 28 B/s, 1 objects/s recovering
Nov 29 07:17:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 29 07:17:03 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 07:17:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:17:05
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.log', 'images', '.rgw.root']
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 127 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:17:05 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:17:05 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 29 07:17:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 07:17:06 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 29 07:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe6b9d73a80e32aaf41bb4019a694a2364bae1bd09045f4f080c35145ed2d836-merged.mount: Deactivated successfully.
Nov 29 07:17:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:17:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 29 07:17:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 29 07:17:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 07:17:06 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 07:17:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:17:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 07:17:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:17:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 07:17:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:17:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 07:17:06 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 29 07:17:07 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 29 07:17:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 07:17:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:17:07 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 07:17:07 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 29 07:17:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 07:17:08 compute-0 ceph-mon[75050]: 10.18 scrub starts
Nov 29 07:17:08 compute-0 ceph-mon[75050]: 10.18 scrub ok
Nov 29 07:17:08 compute-0 ceph-mon[75050]: osdmap e97: 3 total, 3 up, 3 in
Nov 29 07:17:08 compute-0 ceph-mon[75050]: pgmap v236: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 2 objects/s recovering
Nov 29 07:17:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:17:08 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 07:17:08 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.078123569s, txc = 0x55819c373500
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.078113079s, txc = 0x55819c246f00
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.077998161s, txc = 0x55819c3ce600
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.078094959s, txc = 0x55819c89c900
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.077983856s, txc = 0x55819c877b00
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.078019142s, txc = 0x55819c89f200
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.077882290s, txc = 0x55819c89cc00
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.077585697s, txc = 0x55819c89f500
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.077484608s, txc = 0x55819c44c300
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.077333450s, txc = 0x55819c318c00
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.077291012s, txc = 0x55819c89cf00
Nov 29 07:17:08 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.075304508s, txc = 0x55819c89f800
Nov 29 07:17:08 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 29 07:17:08 compute-0 podman[108858]: 2025-11-29 07:17:08.930518428 +0000 UTC m=+8.604260295 container remove e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:17:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:17:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:17:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 07:17:08 compute-0 systemd[1]: libpod-conmon-e606984d845b9e68e43fec4b65b5eaffff2155f37f134057a4497b74f82b68e5.scope: Deactivated successfully.
Nov 29 07:17:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 07:17:09 compute-0 podman[108909]: 2025-11-29 07:17:09.117216219 +0000 UTC m=+0.030980747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:17:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 07:17:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:17:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 4.5 scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 3.1f scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 3.1f scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 4.5 scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:17:10 compute-0 ceph-mon[75050]: pgmap v237: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 134 B/s wr, 4 op/s; 28 B/s, 1 objects/s recovering
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 5.11 scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 7.1b scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 2.1b scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 127 B/s wr, 4 op/s; 27 B/s, 1 objects/s recovering
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 5.12 scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: osdmap e98: 3 total, 3 up, 3 in
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 10.1b scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 5.11 scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 5.12 scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 2.1b scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 7.1b scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:17:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 10.1b scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 10.1c scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 8.1 scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 8.10 scrub starts
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 8.1 scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: 10.1c scrub ok
Nov 29 07:17:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:17:10 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 07:17:10 compute-0 ceph-mon[75050]: osdmap e99: 3 total, 3 up, 3 in
Nov 29 07:17:10 compute-0 podman[108909]: 2025-11-29 07:17:10.157855401 +0000 UTC m=+1.071619939 container create 5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:17:10 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 29 07:17:10 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 29 07:17:10 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 29 07:17:10 compute-0 systemd[1]: Started libpod-conmon-5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d.scope.
Nov 29 07:17:10 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 29 07:17:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c752ed765c638820ce037e01b3a8ef10f1c63918b8ec124bb480407e2991a002/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c752ed765c638820ce037e01b3a8ef10f1c63918b8ec124bb480407e2991a002/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c752ed765c638820ce037e01b3a8ef10f1c63918b8ec124bb480407e2991a002/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c752ed765c638820ce037e01b3a8ef10f1c63918b8ec124bb480407e2991a002/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c752ed765c638820ce037e01b3a8ef10f1c63918b8ec124bb480407e2991a002/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:11 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 29 07:17:11 compute-0 podman[108909]: 2025-11-29 07:17:11.946183431 +0000 UTC m=+2.859947979 container init 5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:17:11 compute-0 podman[108909]: 2025-11-29 07:17:11.954600979 +0000 UTC m=+2.868365537 container start 5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:17:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 07:17:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:17:12 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 29 07:17:12 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 29 07:17:12 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 29 07:17:12 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 29 07:17:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 07:17:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 07:17:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 07:17:12 compute-0 podman[108909]: 2025-11-29 07:17:12.848176043 +0000 UTC m=+3.761940591 container attach 5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:17:13 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 29 07:17:13 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 29 07:17:13 compute-0 pensive_mayer[108930]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:17:13 compute-0 pensive_mayer[108930]: --> relative data size: 1.0
Nov 29 07:17:13 compute-0 pensive_mayer[108930]: --> All data devices are unavailable
Nov 29 07:17:13 compute-0 systemd[1]: libpod-5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d.scope: Deactivated successfully.
Nov 29 07:17:13 compute-0 podman[108909]: 2025-11-29 07:17:13.108486315 +0000 UTC m=+4.022250853 container died 5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:17:13 compute-0 systemd[1]: libpod-5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d.scope: Consumed 1.099s CPU time.
Nov 29 07:17:13 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 29 07:17:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 07:17:13 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 29 07:17:13 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 29 07:17:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 07:17:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 07:17:13 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 29 07:17:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 07:17:13 compute-0 ceph-mon[75050]: 8.10 scrub ok
Nov 29 07:17:13 compute-0 ceph-mon[75050]: pgmap v242: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:17:13 compute-0 ceph-mon[75050]: 10.1d scrub starts
Nov 29 07:17:13 compute-0 ceph-mon[75050]: 10.1d scrub ok
Nov 29 07:17:13 compute-0 ceph-mon[75050]: 8.3 scrub starts
Nov 29 07:17:13 compute-0 ceph-mon[75050]: 8.3 scrub ok
Nov 29 07:17:13 compute-0 ceph-mon[75050]: pgmap v243: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 07:17:13 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 29 07:17:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:17:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:17:14 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 29 07:17:14 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 29 07:17:15 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 29 07:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c752ed765c638820ce037e01b3a8ef10f1c63918b8ec124bb480407e2991a002-merged.mount: Deactivated successfully.
Nov 29 07:17:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:15 compute-0 systemd[76617]: Created slice User Background Tasks Slice.
Nov 29 07:17:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 07:17:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 07:17:15 compute-0 systemd[76617]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 07:17:15 compute-0 systemd[76617]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 07:17:15 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 29 07:17:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 07:17:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 07:17:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 07:17:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 29 07:17:16 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 07:17:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 29 07:17:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 29 07:17:16 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 29 07:17:17 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 29 07:17:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 07:17:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 07:17:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 07:17:19 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 100 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=75/76 n=5 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=100 pruub=10.469964027s) [0] r=-1 lpr=100 pi=[75,100)/1 crt=44'385 mlcod 0'0 active pruub 253.376434326s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.14 scrub starts
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.14 scrub ok
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 10.1f scrub starts
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.5 scrub starts
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 11.1 scrub starts
Nov 29 07:17:19 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 100 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=75/76 n=5 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=100 pruub=10.469626427s) [0] r=-1 lpr=100 pi=[75,100)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 253.376434326s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.5 scrub ok
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 10.1f scrub ok
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 11.1 scrub ok
Nov 29 07:17:19 compute-0 ceph-mon[75050]: osdmap e100: 3 total, 3 up, 3 in
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.15 scrub starts
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.15 scrub ok
Nov 29 07:17:19 compute-0 ceph-mon[75050]: pgmap v245: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.c scrub starts
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 8.c scrub ok
Nov 29 07:17:19 compute-0 ceph-mon[75050]: 11.15 scrub starts
Nov 29 07:17:19 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 100 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=100) [0] r=0 lpr=100 pi=[75,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 07:17:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 29 07:17:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:20 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 07:17:20 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 07:17:20 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 07:17:21 compute-0 podman[108909]: 2025-11-29 07:17:21.110163178 +0000 UTC m=+12.023927706 container remove 5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:17:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 07:17:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 07:17:21 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 07:17:21 compute-0 sudo[108795]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:21 compute-0 systemd[1]: libpod-conmon-5f7ec4db56ed153b6720aab7b4a7b00d00197a675e3f70f48c2e62950b41633d.scope: Deactivated successfully.
Nov 29 07:17:21 compute-0 sudo[109005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:21 compute-0 sudo[109005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:21 compute-0 sudo[109005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:21 compute-0 sudo[109030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:17:21 compute-0 sudo[109030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:21 compute-0 sudo[109030]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 07:17:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 07:17:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 07:17:21 compute-0 sudo[109055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:21 compute-0 sudo[109055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:21 compute-0 sudo[109055]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:21 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 07:17:21 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 29 07:17:21 compute-0 sudo[109080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:17:21 compute-0 sudo[109080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:21 compute-0 podman[109145]: 2025-11-29 07:17:21.804363953 +0000 UTC m=+0.026991605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:17:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 07:17:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:17:22 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 29 07:17:22 compute-0 podman[109145]: 2025-11-29 07:17:22.156803631 +0000 UTC m=+0.379431263 container create 665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 11.15 scrub ok
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 8.7 scrub starts
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 8.7 scrub ok
Nov 29 07:17:22 compute-0 ceph-mon[75050]: pgmap v246: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 8.8 scrub starts
Nov 29 07:17:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 07:17:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 07:17:22 compute-0 ceph-mon[75050]: osdmap e101: 3 total, 3 up, 3 in
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 11.2 scrub starts
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 8.a scrub starts
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 8.8 scrub ok
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 8.a scrub ok
Nov 29 07:17:22 compute-0 ceph-mon[75050]: 8.2 scrub starts
Nov 29 07:17:22 compute-0 ceph-mon[75050]: pgmap v248: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 07:17:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 07:17:22 compute-0 systemd[1]: Started libpod-conmon-665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080.scope.
Nov 29 07:17:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:17:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 07:17:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:17:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 07:17:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 07:17:22 compute-0 podman[109145]: 2025-11-29 07:17:22.556827395 +0000 UTC m=+0.779455047 container init 665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:17:22 compute-0 podman[109145]: 2025-11-29 07:17:22.567088925 +0000 UTC m=+0.789716557 container start 665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:17:22 compute-0 wizardly_shannon[109163]: 167 167
Nov 29 07:17:22 compute-0 systemd[1]: libpod-665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080.scope: Deactivated successfully.
Nov 29 07:17:22 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 07:17:23 compute-0 podman[109145]: 2025-11-29 07:17:23.138838906 +0000 UTC m=+1.361466538 container attach 665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:17:23 compute-0 podman[109145]: 2025-11-29 07:17:23.139526765 +0000 UTC m=+1.362154397 container died 665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:17:23 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 07:17:23 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:23 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 103 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 1 unknown, 2 active+clean+scrubbing, 302 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:23 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 103 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=75/76 n=5 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:23 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 103 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=75/76 n=5 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:23 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 103 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=66/67 n=5 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=103 pruub=12.992110252s) [1] r=-1 lpr=103 pi=[66,103)/1 crt=44'385 mlcod 0'0 active pruub 260.119201660s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:23 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 103 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=66/67 n=5 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=103 pruub=12.991909027s) [1] r=-1 lpr=103 pi=[66,103)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 260.119201660s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:23 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 102 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=102 pruub=10.831347466s) [0] r=-1 lpr=102 pi=[63,102)/1 crt=44'385 mlcod 0'0 active pruub 257.959167480s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:23 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 103 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=102 pruub=10.831264496s) [0] r=-1 lpr=102 pi=[63,102)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 257.959167480s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:23 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 103 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=102) [0] r=0 lpr=103 pi=[63,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:23 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 103 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=103) [1] r=0 lpr=103 pi=[66,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 11.2 scrub ok
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 8.2 scrub ok
Nov 29 07:17:23 compute-0 ceph-mon[75050]: pgmap v249: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 11.d scrub starts
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 9.2 scrub starts
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 9.2 scrub ok
Nov 29 07:17:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 11.d scrub ok
Nov 29 07:17:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 07:17:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 07:17:23 compute-0 ceph-mon[75050]: osdmap e102: 3 total, 3 up, 3 in
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 11.8 scrub starts
Nov 29 07:17:23 compute-0 ceph-mon[75050]: pgmap v251: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 07:17:23 compute-0 ceph-mon[75050]: 11.8 scrub ok
Nov 29 07:17:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 07:17:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 07:17:23 compute-0 ceph-mon[75050]: osdmap e103: 3 total, 3 up, 3 in
Nov 29 07:17:24 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 29 07:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bfd3e3f00d08babe2ad6ec700cc08e80624cf324118b36b600e0fbc41a53e1d-merged.mount: Deactivated successfully.
Nov 29 07:17:25 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 07:17:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:17:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 07:17:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:26 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 29 07:17:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:28 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 29 07:17:29 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 29 07:17:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:30 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 07:17:30 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 29 07:17:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:32 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:17:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:17:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:17:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:17:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:17:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:17:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:17:36 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:17:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:40 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 16.805997849s
Nov 29 07:17:40 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 16.805999756s
Nov 29 07:17:40 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 16.922693253s, txc = 0x55819c89d800
Nov 29 07:17:40 compute-0 ceph-osd[88831]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7f50fcef7640' had timed out after 15.000000954s
Nov 29 07:17:40 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 29 07:17:40 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 29 07:17:40 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 29 07:17:40 compute-0 ceph-osd[91083]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fe143b17640' had timed out after 15.000000954s
Nov 29 07:17:40 compute-0 ceph-osd[91083]: heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fe143b17640' had timed out after 15.000000954s
Nov 29 07:17:40 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:17:41 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf MDS connection to Monitors appears to be laggy; 16.2095s since last acked beacon
Nov 29 07:17:41 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:17:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 3 active+clean+scrubbing, 3 unknown, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:41 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 07:17:42 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 17.002120972s
Nov 29 07:17:42 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 17.002120972s
Nov 29 07:17:42 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.529909134s, txc = 0x55f94cf66000
Nov 29 07:17:42 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf  MDS is no longer laggy
Nov 29 07:17:42 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 29 07:17:42 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 17.601282120s
Nov 29 07:17:42 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 17.601282120s
Nov 29 07:17:42 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 19.182706833s, txc = 0x560f41133b00
Nov 29 07:17:42 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 19.182376862s, txc = 0x560f414d2300
Nov 29 07:17:42 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 19.180494308s, txc = 0x560f41889b00
Nov 29 07:17:42 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 19.179691315s, txc = 0x560f411f2c00
Nov 29 07:17:42 compute-0 ceph-osd[91083]: heartbeat_map reset_timeout 'OSD::osd_op_tp thread 0x7fe143b17640' had timed out after 15.000000954s
Nov 29 07:17:42 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 29 07:17:42 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 07:17:42 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 07:17:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 07:17:43 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.373590469s, txc = 0x55819c863500
Nov 29 07:17:43 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 16.381536484s, txc = 0x55819c352600
Nov 29 07:17:43 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.319671631s, txc = 0x55819c887b00
Nov 29 07:17:43 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 07:17:43 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 12.413272858s, txc = 0x55f94ceccc00
Nov 29 07:17:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 07:17:43 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 29 07:17:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 7 active+clean+scrubbing, 3 unknown, 295 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:43 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 29 07:17:43 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.472820282s, txc = 0x560f4187e600
Nov 29 07:17:43 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 14.467274666s, txc = 0x560f41495800
Nov 29 07:17:43 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.452898979s, txc = 0x560f41639200
Nov 29 07:17:43 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 29 07:17:44 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 29 07:17:44 compute-0 ceph-mon[75050]: 8.13 scrub starts
Nov 29 07:17:44 compute-0 ceph-mon[75050]: 8.13 scrub ok
Nov 29 07:17:44 compute-0 ceph-mon[75050]: pgmap v253: 305 pgs: 1 unknown, 2 active+clean+scrubbing, 302 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:44 compute-0 podman[109145]: 2025-11-29 07:17:44.088006775 +0000 UTC m=+22.310634437 container remove 665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:17:44 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 104 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=104) [0]/[2] r=0 lpr=104 pi=[63,104)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:44 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 104 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=66/67 n=5 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=104) [1]/[2] r=0 lpr=104 pi=[66,104)/1 crt=44'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:44 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 104 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=63/64 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=104) [0]/[2] r=0 lpr=104 pi=[63,104)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:44 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 104 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=66/67 n=5 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=104) [1]/[2] r=0 lpr=104 pi=[66,104)/1 crt=44'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:44 compute-0 systemd[1]: libpod-conmon-665c81507f3accdb1ebf594be822e36e4c24f35cdcac5f59ba950b38860b9080.scope: Deactivated successfully.
Nov 29 07:17:44 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 104 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=104) [1]/[2] r=-1 lpr=104 pi=[66,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:44 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 104 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=104) [1]/[2] r=-1 lpr=104 pi=[66,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:44 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 104 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[63,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:44 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 104 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=104) [0]/[2] r=-1 lpr=104 pi=[63,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:44 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 29 07:17:44 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 29 07:17:44 compute-0 podman[109195]: 2025-11-29 07:17:44.254064373 +0000 UTC m=+0.027799778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:17:44 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 104 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=103/104 n=5 ec=47/36 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:17:44 compute-0 podman[109195]: 2025-11-29 07:17:44.47247982 +0000 UTC m=+0.246215195 container create 4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:17:44 compute-0 systemd[1]: Started libpod-conmon-4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209.scope.
Nov 29 07:17:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e086c1eb9f848ac74c1f4f3778090406388af1377310739ba1705c46eb30b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e086c1eb9f848ac74c1f4f3778090406388af1377310739ba1705c46eb30b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e086c1eb9f848ac74c1f4f3778090406388af1377310739ba1705c46eb30b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9e086c1eb9f848ac74c1f4f3778090406388af1377310739ba1705c46eb30b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:44 compute-0 podman[109195]: 2025-11-29 07:17:44.676908482 +0000 UTC m=+0.450643887 container init 4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:17:44 compute-0 podman[109195]: 2025-11-29 07:17:44.686695738 +0000 UTC m=+0.460431113 container start 4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:17:44 compute-0 podman[109195]: 2025-11-29 07:17:44.733325578 +0000 UTC m=+0.507060973 container attach 4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:17:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 07:17:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 07:17:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 07:17:45 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 29 07:17:45 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.e scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.9 scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v254: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.f scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v255: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.14 scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.3 scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v256: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.d scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.16 scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v257: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v258: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v259: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v260: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v261: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.14 scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.e scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.f scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v262: 305 pgs: 3 active+clean+scrubbing, 3 unknown, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.e scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.16 scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.3 scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.d scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.9 scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.e scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: osdmap e104: 3 total, 3 up, 3 in
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.17 scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: pgmap v264: 305 pgs: 7 active+clean+scrubbing, 3 unknown, 295 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.17 scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 11.17 scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.17 scrub ok
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.19 deep-scrub starts
Nov 29 07:17:45 compute-0 ceph-mon[75050]: 8.19 deep-scrub ok
Nov 29 07:17:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 105 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=103/104 n=5 ec=47/36 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.167807579s) [0] async=[0] r=-1 lpr=105 pi=[75,105)/1 crt=44'385 mlcod 44'385 active pruub 283.941375732s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 105 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=103/104 n=5 ec=47/36 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.167550087s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 283.941375732s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:45 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 105 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:45 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 105 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 105 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=104/105 n=5 ec=47/36 lis/c=66/66 les/c/f=67/67/0 sis=104) [1]/[2] async=[1] r=0 lpr=104 pi=[66,104)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:17:45 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 105 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=104/105 n=5 ec=47/36 lis/c=63/63 les/c/f=64/64/0 sis=104) [0]/[2] async=[0] r=0 lpr=104 pi=[63,104)/1 crt=44'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:17:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 1 peering, 4 active+clean+scrubbing, 2 unknown, 298 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:17:45 compute-0 nifty_allen[109212]: {
Nov 29 07:17:45 compute-0 nifty_allen[109212]:     "0": [
Nov 29 07:17:45 compute-0 nifty_allen[109212]:         {
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "devices": [
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "/dev/loop3"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             ],
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_name": "ceph_lv0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_size": "21470642176",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "name": "ceph_lv0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "tags": {
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cluster_name": "ceph",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.crush_device_class": "",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.encrypted": "0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osd_id": "0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.type": "block",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.vdo": "0"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             },
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "type": "block",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "vg_name": "ceph_vg0"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:         }
Nov 29 07:17:45 compute-0 nifty_allen[109212]:     ],
Nov 29 07:17:45 compute-0 nifty_allen[109212]:     "1": [
Nov 29 07:17:45 compute-0 nifty_allen[109212]:         {
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "devices": [
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "/dev/loop4"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             ],
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_name": "ceph_lv1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_size": "21470642176",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "name": "ceph_lv1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "tags": {
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cluster_name": "ceph",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.crush_device_class": "",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.encrypted": "0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osd_id": "1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.type": "block",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.vdo": "0"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             },
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "type": "block",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "vg_name": "ceph_vg1"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:         }
Nov 29 07:17:45 compute-0 nifty_allen[109212]:     ],
Nov 29 07:17:45 compute-0 nifty_allen[109212]:     "2": [
Nov 29 07:17:45 compute-0 nifty_allen[109212]:         {
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "devices": [
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "/dev/loop5"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             ],
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_name": "ceph_lv2",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_size": "21470642176",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "name": "ceph_lv2",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "tags": {
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.cluster_name": "ceph",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.crush_device_class": "",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.encrypted": "0",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osd_id": "2",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.type": "block",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:                 "ceph.vdo": "0"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             },
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "type": "block",
Nov 29 07:17:45 compute-0 nifty_allen[109212]:             "vg_name": "ceph_vg2"
Nov 29 07:17:45 compute-0 nifty_allen[109212]:         }
Nov 29 07:17:45 compute-0 nifty_allen[109212]:     ]
Nov 29 07:17:45 compute-0 nifty_allen[109212]: }
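
[Annotation] The JSON above is the per-OSD LVM report printed by the nifty_allen helper container; the shape (keyed by OSD id, one LV record per entry, with the flattened lv_tags string repeated as the tags dict) matches `ceph-volume lvm list --format json`. A minimal Python sketch of consuming it; the literal reproduces only a subset of the OSD "1" entry from the log, and real code would load the full captured JSON instead:

    import json

    # Hedged sketch: map OSD ids to their logical volumes and backing devices.
    # The payload below reproduces part of the OSD "1" entry from the listing
    # above; it is illustrative, not the full report.
    payload = json.loads("""
    {
      "1": [
        {
          "devices": ["/dev/loop4"],
          "lv_path": "/dev/ceph_vg1/ceph_lv1",
          "lv_tags": "ceph.osd_id=1,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.type=block"
        }
      ]
    }
    """)

    for osd_id, lvs in sorted(payload.items()):
        for lv in lvs:
            # lv_tags is a comma-separated key=value string; the tags dict in
            # the report carries the same pairs already split out.
            tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} ({tags['ceph.type']})")
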
Nov 29 07:17:45 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 29 07:17:45 compute-0 systemd[1]: libpod-4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209.scope: Deactivated successfully.
Nov 29 07:17:45 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 29 07:17:45 compute-0 podman[109195]: 2025-11-29 07:17:45.608040607 +0000 UTC m=+1.381776002 container died 4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:17:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9e086c1eb9f848ac74c1f4f3778090406388af1377310739ba1705c46eb30b4-merged.mount: Deactivated successfully.
Nov 29 07:17:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 07:17:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 07:17:47 compute-0 podman[109195]: 2025-11-29 07:17:47.079632007 +0000 UTC m=+2.853367422 container remove 4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:17:47 compute-0 ceph-mon[75050]: osdmap e105: 3 total, 3 up, 3 in
Nov 29 07:17:47 compute-0 ceph-mon[75050]: 9.4 scrub starts
Nov 29 07:17:47 compute-0 ceph-mon[75050]: 9.4 scrub ok
Nov 29 07:17:47 compute-0 ceph-mon[75050]: pgmap v266: 305 pgs: 1 peering, 4 active+clean+scrubbing, 2 unknown, 298 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 07:17:47 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 07:17:47 compute-0 sudo[109080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 21.6997 seconds
Nov 29 07:17:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:17:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 07:17:47 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 106 pg[9.1c( v 44'385 (0'0,44'385] local-lis/les=105/106 n=5 ec=47/36 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:17:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 106 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=104/105 n=5 ec=47/36 lis/c=104/66 les/c/f=105/67/0 sis=106 pruub=14.195176125s) [1] async=[1] r=-1 lpr=106 pi=[66,106)/1 crt=44'385 mlcod 44'385 active pruub 284.867614746s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 106 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=104/105 n=5 ec=47/36 lis/c=104/66 les/c/f=105/67/0 sis=106 pruub=14.194893837s) [1] r=-1 lpr=106 pi=[66,106)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 284.867614746s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:47 compute-0 sudo[109250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:47 compute-0 sudo[109250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:47 compute-0 sudo[109250]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:47 compute-0 systemd[1]: libpod-conmon-4c803604f46b1e0cdad0fbdd9d0d233570c9553cda02d0bc0e7ec690c7991209.scope: Deactivated successfully.
Nov 29 07:17:47 compute-0 sudo[109275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:17:47 compute-0 sudo[109275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:47 compute-0 sudo[109275]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:47 compute-0 sudo[109300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:47 compute-0 sudo[109300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:47 compute-0 sudo[109300]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:47 compute-0 sudo[109325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:17:47 compute-0 sudo[109325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 07:17:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 106 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=104/66 les/c/f=105/67/0 sis=106) [1] r=0 lpr=106 pi=[66,106)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:47 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 106 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=104/66 les/c/f=105/67/0 sis=106) [1] r=0 lpr=106 pi=[66,106)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:47 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 07:17:47 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 107 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=104/63 les/c/f=105/64/0 sis=107) [0] r=0 lpr=107 pi=[63,107)/1 luod=0'0 crt=44'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:47 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 107 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=0/0 n=5 ec=47/36 lis/c=104/63 les/c/f=105/64/0 sis=107) [0] r=0 lpr=107 pi=[63,107)/1 crt=44'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 07:17:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 1 peering, 3 active+clean+scrubbing, 2 unknown, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Nov 29 07:17:47 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 29 07:17:47 compute-0 podman[109390]: 2025-11-29 07:17:47.65062764 +0000 UTC m=+0.023375406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:17:47 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 29 07:17:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 107 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=104/105 n=5 ec=47/36 lis/c=104/63 les/c/f=105/64/0 sis=107 pruub=13.474637032s) [0] async=[0] r=-1 lpr=107 pi=[63,107)/1 crt=44'385 mlcod 44'385 active pruub 284.871185303s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 07:17:47 compute-0 podman[109390]: 2025-11-29 07:17:47.87028942 +0000 UTC m=+0.243037156 container create 24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:17:47 compute-0 ceph-osd[91083]: osd.2 pg_epoch: 107 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=104/105 n=5 ec=47/36 lis/c=104/63 les/c/f=105/64/0 sis=107 pruub=13.474039078s) [0] r=-1 lpr=107 pi=[63,107)/1 crt=44'385 mlcod 0'0 unknown NOTIFY pruub 284.871185303s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 07:17:47 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 29 07:17:47 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 29 07:17:47 compute-0 systemd[1]: Started libpod-conmon-24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e.scope.
Nov 29 07:17:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:17:48 compute-0 podman[109390]: 2025-11-29 07:17:48.021273998 +0000 UTC m=+0.394021764 container init 24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:17:48 compute-0 ceph-mon[75050]: 11.4 scrub starts
Nov 29 07:17:48 compute-0 ceph-mon[75050]: 11.4 scrub ok
Nov 29 07:17:48 compute-0 ceph-mon[75050]: osdmap e106: 3 total, 3 up, 3 in
Nov 29 07:17:48 compute-0 ceph-mon[75050]: osdmap e107: 3 total, 3 up, 3 in
Nov 29 07:17:48 compute-0 ceph-mon[75050]: pgmap v269: 305 pgs: 1 peering, 3 active+clean+scrubbing, 2 unknown, 299 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Nov 29 07:17:48 compute-0 podman[109390]: 2025-11-29 07:17:48.030346344 +0000 UTC m=+0.403094080 container start 24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:17:48 compute-0 vigilant_chebyshev[109407]: 167 167
Nov 29 07:17:48 compute-0 systemd[1]: libpod-24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e.scope: Deactivated successfully.
Nov 29 07:17:48 compute-0 podman[109390]: 2025-11-29 07:17:48.049151885 +0000 UTC m=+0.421899621 container attach 24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:17:48 compute-0 podman[109390]: 2025-11-29 07:17:48.049684631 +0000 UTC m=+0.422432367 container died 24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-108cc76f2ff53b6f9f31d9c669c2c4a612415ede5754cc3f2c8cfaf5748cf02d-merged.mount: Deactivated successfully.
Nov 29 07:17:48 compute-0 podman[109390]: 2025-11-29 07:17:48.158707267 +0000 UTC m=+0.531455003 container remove 24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chebyshev, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:17:48 compute-0 systemd[1]: libpod-conmon-24103b0ff1c3dc72ee99932cc84a589aa12633f8b4b72a3ccf1019f9a525a43e.scope: Deactivated successfully.
Nov 29 07:17:48 compute-0 podman[109432]: 2025-11-29 07:17:48.300617439 +0000 UTC m=+0.028503647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:17:48 compute-0 podman[109432]: 2025-11-29 07:17:48.398688324 +0000 UTC m=+0.126574482 container create 8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:17:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 07:17:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 07:17:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 07:17:48 compute-0 ceph-osd[89840]: osd.1 pg_epoch: 108 pg[9.1f( v 44'385 (0'0,44'385] local-lis/les=106/108 n=5 ec=47/36 lis/c=104/66 les/c/f=105/67/0 sis=106) [1] r=0 lpr=106 pi=[66,106)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:17:48 compute-0 ceph-osd[88831]: osd.0 pg_epoch: 108 pg[9.1e( v 44'385 (0'0,44'385] local-lis/les=107/108 n=5 ec=47/36 lis/c=104/63 les/c/f=105/64/0 sis=107) [0] r=0 lpr=107 pi=[63,107)/1 crt=44'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 07:17:48 compute-0 systemd[1]: Started libpod-conmon-8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68.scope.
Nov 29 07:17:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83761796bfd1c9659b5b3fe3335fb7d44b5b06e18a3c523ad0a98f427a2da0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83761796bfd1c9659b5b3fe3335fb7d44b5b06e18a3c523ad0a98f427a2da0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83761796bfd1c9659b5b3fe3335fb7d44b5b06e18a3c523ad0a98f427a2da0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83761796bfd1c9659b5b3fe3335fb7d44b5b06e18a3c523ad0a98f427a2da0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:17:49 compute-0 ceph-mon[75050]: 8.b scrub starts
Nov 29 07:17:49 compute-0 ceph-mon[75050]: 8.4 scrub starts
Nov 29 07:17:49 compute-0 ceph-mon[75050]: 8.b scrub ok
Nov 29 07:17:49 compute-0 ceph-mon[75050]: 8.4 scrub ok
Nov 29 07:17:49 compute-0 ceph-mon[75050]: osdmap e108: 3 total, 3 up, 3 in
Nov 29 07:17:49 compute-0 podman[109432]: 2025-11-29 07:17:49.077677903 +0000 UTC m=+0.805564021 container init 8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:17:49 compute-0 podman[109432]: 2025-11-29 07:17:49.086470742 +0000 UTC m=+0.814356860 container start 8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:17:49 compute-0 podman[109432]: 2025-11-29 07:17:49.099146823 +0000 UTC m=+0.827032951 container attach 8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:17:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 1 peering, 1 active+clean+scrubbing, 2 unknown, 301 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Nov 29 07:17:49 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 07:17:49 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 07:17:49 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 29 07:17:49 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 29 07:17:50 compute-0 ceph-mon[75050]: pgmap v271: 305 pgs: 1 peering, 1 active+clean+scrubbing, 2 unknown, 301 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Nov 29 07:17:50 compute-0 hungry_payne[109449]: {
Nov 29 07:17:50 compute-0 hungry_payne[109449]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "osd_id": 2,
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "type": "bluestore"
Nov 29 07:17:50 compute-0 hungry_payne[109449]:     },
Nov 29 07:17:50 compute-0 hungry_payne[109449]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "osd_id": 1,
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "type": "bluestore"
Nov 29 07:17:50 compute-0 hungry_payne[109449]:     },
Nov 29 07:17:50 compute-0 hungry_payne[109449]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "osd_id": 0,
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:17:50 compute-0 hungry_payne[109449]:         "type": "bluestore"
Nov 29 07:17:50 compute-0 hungry_payne[109449]:     }
Nov 29 07:17:50 compute-0 hungry_payne[109449]: }
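
[Annotation] This block is the output of the `ceph-volume ... raw list --format json` call that cephadm issued via sudo at 07:17:47: unlike the LVM listing, it is keyed by OSD fsid, and it reports the device-mapper path and store type per OSD. A hedged Python sketch that reorders it by osd_id and cross-checks each entry against the cluster fsid seen throughout this log; the literal reproduces only the osd.0 entry:

    import json

    # Hedged sketch: invert the 'raw list' report into osd_id order and verify
    # every entry belongs to this cluster. The literal below reproduces only
    # the osd.0 entry from the log.
    CLUSTER_FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"  # cluster fsid from the log above

    raw = json.loads("""
    {
      "8cd0a453-4c8d-429b-b547-2404357db43c": {
        "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
        "type": "bluestore"
      }
    }
    """)

    for osd_uuid, osd in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        assert osd["ceph_fsid"] == CLUSTER_FSID, f"foreign OSD {osd_uuid}"
        print(f"osd.{osd['osd_id']} ({osd['type']}): {osd['device']}")
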
Nov 29 07:17:50 compute-0 systemd[1]: libpod-8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68.scope: Deactivated successfully.
Nov 29 07:17:50 compute-0 podman[109432]: 2025-11-29 07:17:50.18391058 +0000 UTC m=+1.911796688 container died 8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:17:50 compute-0 systemd[1]: libpod-8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68.scope: Consumed 1.106s CPU time.
Nov 29 07:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c83761796bfd1c9659b5b3fe3335fb7d44b5b06e18a3c523ad0a98f427a2da0b-merged.mount: Deactivated successfully.
Nov 29 07:17:50 compute-0 podman[109432]: 2025-11-29 07:17:50.289667601 +0000 UTC m=+2.017553719 container remove 8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:17:50 compute-0 systemd[1]: libpod-conmon-8932b91bfa7039f48e7682ceb0e59e24b59c1a196dfcd6e1191d7003964d4d68.scope: Deactivated successfully.
Nov 29 07:17:50 compute-0 sudo[109325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:17:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:17:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:17:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:17:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f09fee76-717d-47e9-bf18-b10d24362ce8 does not exist
Nov 29 07:17:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5ffbd8e0-fdc7-4a7c-87ed-fe2a05136778 does not exist
Nov 29 07:17:50 compute-0 sudo[109500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:17:50 compute-0 sudo[109500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:50 compute-0 sudo[109500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:50 compute-0 sudo[109525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:17:50 compute-0 sudo[109525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:17:50 compute-0 sudo[109525]: pam_unix(sudo:session): session closed for user root
Nov 29 07:17:51 compute-0 ceph-mon[75050]: 11.6 scrub starts
Nov 29 07:17:51 compute-0 ceph-mon[75050]: 11.6 scrub ok
Nov 29 07:17:51 compute-0 ceph-mon[75050]: 11.1b scrub starts
Nov 29 07:17:51 compute-0 ceph-mon[75050]: 11.1b scrub ok
Nov 29 07:17:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:17:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:17:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 07:17:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 07:17:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Nov 29 07:17:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:17:52 compute-0 ceph-mon[75050]: 8.1e scrub starts
Nov 29 07:17:52 compute-0 ceph-mon[75050]: 8.1e scrub ok
Nov 29 07:17:52 compute-0 ceph-mon[75050]: pgmap v272: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 303 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Nov 29 07:17:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 B/s wr, 3 op/s; 68 B/s, 4 objects/s recovering
Nov 29 07:17:53 compute-0 ceph-mon[75050]: pgmap v273: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 793 B/s rd, 0 B/s wr, 3 op/s; 68 B/s, 4 objects/s recovering
Nov 29 07:17:53 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Nov 29 07:17:53 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Nov 29 07:17:54 compute-0 ceph-mon[75050]: 11.1c deep-scrub starts
Nov 29 07:17:54 compute-0 ceph-mon[75050]: 11.1c deep-scrub ok
Nov 29 07:17:55 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 29 07:17:55 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 29 07:17:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 7 op/s; 54 B/s, 3 objects/s recovering
Nov 29 07:17:56 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.6 deep-scrub starts
Nov 29 07:17:56 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.6 deep-scrub ok
Nov 29 07:17:56 compute-0 ceph-mon[75050]: 9.a scrub starts
Nov 29 07:17:56 compute-0 ceph-mon[75050]: 9.a scrub ok
Nov 29 07:17:56 compute-0 ceph-mon[75050]: pgmap v274: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 7 op/s; 54 B/s, 3 objects/s recovering
Nov 29 07:17:57 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 29 07:17:57 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 29 07:17:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:17:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 3 objects/s recovering
Nov 29 07:17:57 compute-0 ceph-mon[75050]: 8.6 deep-scrub starts
Nov 29 07:17:57 compute-0 ceph-mon[75050]: 8.6 deep-scrub ok
Nov 29 07:17:57 compute-0 ceph-mon[75050]: 9.10 scrub starts
Nov 29 07:17:57 compute-0 ceph-mon[75050]: 9.10 scrub ok
Nov 29 07:17:57 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 29 07:17:57 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 29 07:17:58 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Nov 29 07:17:58 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Nov 29 07:17:58 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 29 07:17:58 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 29 07:17:58 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 07:17:58 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 07:17:59 compute-0 ceph-mon[75050]: pgmap v275: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 3 objects/s recovering
Nov 29 07:17:59 compute-0 ceph-mon[75050]: 11.1e scrub starts
Nov 29 07:17:59 compute-0 ceph-mon[75050]: 11.1e scrub ok
Nov 29 07:17:59 compute-0 ceph-mon[75050]: 9.12 deep-scrub starts
Nov 29 07:17:59 compute-0 ceph-mon[75050]: 9.12 deep-scrub ok
Nov 29 07:17:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 3 objects/s recovering
Nov 29 07:17:59 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 29 07:17:59 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 29 07:17:59 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 29 07:17:59 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 29 07:18:01 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 29 07:18:01 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 29 07:18:01 compute-0 ceph-mon[75050]: 8.18 scrub starts
Nov 29 07:18:01 compute-0 ceph-mon[75050]: 8.18 scrub ok
Nov 29 07:18:01 compute-0 ceph-mon[75050]: 11.11 scrub starts
Nov 29 07:18:01 compute-0 ceph-mon[75050]: 11.11 scrub ok
Nov 29 07:18:01 compute-0 ceph-mon[75050]: pgmap v276: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 3 objects/s recovering
Nov 29 07:18:01 compute-0 ceph-mon[75050]: 8.f scrub starts
Nov 29 07:18:01 compute-0 ceph-mon[75050]: 8.f scrub ok
Nov 29 07:18:01 compute-0 anacron[30860]: Job `cron.daily' started
Nov 29 07:18:01 compute-0 anacron[30860]: Job `cron.daily' terminated
Nov 29 07:18:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 2 objects/s recovering
Nov 29 07:18:01 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 29 07:18:01 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 29 07:18:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:02 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 07:18:02 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 07:18:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 29 07:18:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 29 07:18:03 compute-0 ceph-mon[75050]: 11.12 scrub starts
Nov 29 07:18:03 compute-0 ceph-mon[75050]: 11.12 scrub ok
Nov 29 07:18:03 compute-0 ceph-mon[75050]: 9.14 scrub starts
Nov 29 07:18:03 compute-0 ceph-mon[75050]: 9.14 scrub ok
Nov 29 07:18:03 compute-0 ceph-mon[75050]: pgmap v277: 305 pgs: 305 active+clean; 457 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 2 objects/s recovering
Nov 29 07:18:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Nov 29 07:18:03 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.11 deep-scrub starts
Nov 29 07:18:04 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.11 deep-scrub ok
Nov 29 07:18:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 29 07:18:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 29 07:18:04 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 07:18:05 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 29 07:18:05 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:18:05
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', 'vms']
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
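
[Annotation] The five balancer lines above are one periodic optimizer pass: mode upmap, a misplaced-object budget of 5%, eleven pools considered, and "prepared 0/10 changes" meaning the current PG placement already satisfies the upmap optimizer, so no plan is applied. A hedged sketch for inspecting that state from the CLI; `ceph balancer status` is a standard mgr command, but the JSON field names used below are an assumption about its output shape:

    import json
    import subprocess

    # Hedged sketch: read the mgr balancer state. Field names such as 'mode',
    # 'active' and 'optimize_result' are assumptions about the JSON layout.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"), status.get("optimize_result"))
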
Nov 29 07:18:05 compute-0 ceph-mon[75050]: 8.12 scrub starts
Nov 29 07:18:05 compute-0 ceph-mon[75050]: 8.12 scrub ok
Nov 29 07:18:05 compute-0 ceph-mon[75050]: 8.1d scrub starts
Nov 29 07:18:05 compute-0 ceph-mon[75050]: 8.1d scrub ok
Nov 29 07:18:05 compute-0 ceph-mon[75050]: 9.1a scrub starts
Nov 29 07:18:05 compute-0 ceph-mon[75050]: 9.1a scrub ok
Nov 29 07:18:05 compute-0 ceph-mon[75050]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Nov 29 07:18:05 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:18:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.a deep-scrub starts
Nov 29 07:18:06 compute-0 sudo[108515]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.a deep-scrub ok
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:18:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:18:06 compute-0 sudo[109701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxzolraegdoqsveqrorggdkkxgubpkwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400686.4200397-137-67093163570819/AnsiballZ_command.py'
Nov 29 07:18:06 compute-0 sudo[109701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:06 compute-0 python3.9[109703]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:18:06 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 29 07:18:06 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 29 07:18:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:07 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 29 07:18:07 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 29 07:18:07 compute-0 sudo[109701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 8.11 deep-scrub starts
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 8.11 deep-scrub ok
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.5 scrub starts
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.5 scrub ok
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.b scrub starts
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.7 scrub starts
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.7 scrub ok
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.b scrub ok
Nov 29 07:18:08 compute-0 ceph-mon[75050]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.a deep-scrub starts
Nov 29 07:18:08 compute-0 ceph-mon[75050]: 11.a deep-scrub ok
Nov 29 07:18:08 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 07:18:08 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 07:18:08 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 29 07:18:08 compute-0 sudo[109988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjkcdszknaddzrjosvxssatkijejgub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400687.912194-145-235407972700681/AnsiballZ_selinux.py'
Nov 29 07:18:08 compute-0 sudo[109988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:08 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 29 07:18:09 compute-0 python3.9[109990]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 07:18:09 compute-0 sudo[109988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:09 compute-0 ceph-mon[75050]: 11.18 scrub starts
Nov 29 07:18:09 compute-0 ceph-mon[75050]: 11.18 scrub ok
Nov 29 07:18:09 compute-0 ceph-mon[75050]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:09 compute-0 ceph-mon[75050]: 11.19 scrub starts
Nov 29 07:18:09 compute-0 ceph-mon[75050]: 11.19 scrub ok
Nov 29 07:18:09 compute-0 ceph-mon[75050]: 11.c scrub starts
Nov 29 07:18:09 compute-0 ceph-mon[75050]: 11.c scrub ok
Nov 29 07:18:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:09 compute-0 sudo[110140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbeljqgrzsqvxnslodassbynklrafiqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400689.3507571-156-166132160149176/AnsiballZ_command.py'
Nov 29 07:18:09 compute-0 sudo[110140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:09 compute-0 python3.9[110142]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 07:18:09 compute-0 sudo[110140]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:10 compute-0 sudo[110292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akmafjasggkbqwtlmdbmilqmdrpkhcnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400690.031649-164-221327279736478/AnsiballZ_file.py'
Nov 29 07:18:10 compute-0 sudo[110292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:10 compute-0 python3.9[110294]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:10 compute-0 sudo[110292]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:10 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 29 07:18:10 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 29 07:18:11 compute-0 sudo[110444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqgduryyauaicrvxfxtyyexrugljqkpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400690.8192847-172-73319218026102/AnsiballZ_mount.py'
Nov 29 07:18:11 compute-0 sudo[110444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:11 compute-0 python3.9[110446]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 07:18:11 compute-0 sudo[110444]: pam_unix(sudo:session): session closed for user root
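
[Annotation] The three zuul tasks between 07:18:09 and 07:18:11 provision a swap file: `dd if=/dev/zero of=/swap count=1024 bs=1M` creates a 1 GiB file (skipped if /swap exists, per creates=/swap), the file module restricts it to root:root 0600, and ansible.posix.mount with state=present registers it in /etc/fstab without mounting or activating it (no mkswap/swapon appears in this log). Assuming the module's defaults, the fstab record it maintains is the usual six fields; a tiny sketch:

    # Hedged sketch: compose the /etc/fstab record implied by the mount-module
    # arguments logged above (src=/swap, name=none, fstype=swap, opts=sw,
    # dump=0, passno=0). The exact whitespace the module writes may differ.
    record = " ".join(["/swap", "none", "swap", "sw", "0", "0"])
    print(record)  # /swap none swap sw 0 0
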
Nov 29 07:18:11 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 29 07:18:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:12 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 29 07:18:12 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 29 07:18:12 compute-0 ceph-mon[75050]: 8.1f scrub starts
Nov 29 07:18:12 compute-0 ceph-mon[75050]: 8.1f scrub ok
Nov 29 07:18:12 compute-0 ceph-mon[75050]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:12 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 29 07:18:12 compute-0 sudo[110596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efqprqasswgvebdozqwkncftrvyhuqan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400692.6682432-200-13420618893277/AnsiballZ_file.py'
Nov 29 07:18:12 compute-0 sudo[110596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:13 compute-0 python3.9[110598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:18:13 compute-0 sudo[110596]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:13 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 29 07:18:13 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 29 07:18:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:13 compute-0 sudo[110748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbapdxtwbvwzzlimihsmlmqratjuwgky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400693.3441267-208-208834937559950/AnsiballZ_stat.py'
Nov 29 07:18:13 compute-0 sudo[110748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:13 compute-0 ceph-mon[75050]: 8.1b scrub starts
Nov 29 07:18:13 compute-0 ceph-mon[75050]: 8.1b scrub ok
Nov 29 07:18:13 compute-0 ceph-mon[75050]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:13 compute-0 ceph-mon[75050]: 11.1a scrub starts
Nov 29 07:18:13 compute-0 ceph-mon[75050]: 11.13 scrub starts
Nov 29 07:18:13 compute-0 ceph-mon[75050]: 11.13 scrub ok
Nov 29 07:18:13 compute-0 ceph-mon[75050]: 11.1a scrub ok
Nov 29 07:18:13 compute-0 python3.9[110750]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:13 compute-0 sudo[110748]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:14 compute-0 sudo[110826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fepyxnogshkkfwyhrxlvralwsifkanxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400693.3441267-208-208834937559950/AnsiballZ_file.py'
Nov 29 07:18:14 compute-0 sudo[110826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:14 compute-0 python3.9[110828]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:18:14 compute-0 sudo[110826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:14 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 29 07:18:14 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:18:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:18:14 compute-0 ceph-mon[75050]: 8.9 scrub starts
Nov 29 07:18:14 compute-0 ceph-mon[75050]: 8.9 scrub ok
Nov 29 07:18:14 compute-0 ceph-mon[75050]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:15 compute-0 sudo[110978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmiobvprekkailvlahjxujtrcodxnplk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400694.9325652-229-208364161398801/AnsiballZ_stat.py'
Nov 29 07:18:15 compute-0 sudo[110978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:15 compute-0 python3.9[110980]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:18:15 compute-0 sudo[110978]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:15 compute-0 ceph-mon[75050]: 8.1a scrub starts
Nov 29 07:18:15 compute-0 ceph-mon[75050]: 8.1a scrub ok
Nov 29 07:18:15 compute-0 ceph-mon[75050]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:16 compute-0 sudo[111132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tueczmfdppfwdisjysktynlvbnirhsmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400695.9823477-242-48982844489104/AnsiballZ_getent.py'
Nov 29 07:18:16 compute-0 sudo[111132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:16 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 29 07:18:16 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 29 07:18:16 compute-0 python3.9[111134]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 07:18:16 compute-0 sudo[111132]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:17 compute-0 sudo[111285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csubcsvblrilivuhsyjvzgqfmuukiysf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400696.8963733-252-27591267906649/AnsiballZ_getent.py'
Nov 29 07:18:17 compute-0 sudo[111285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:17 compute-0 python3.9[111287]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 07:18:17 compute-0 sudo[111285]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:17 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 29 07:18:17 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 29 07:18:17 compute-0 ceph-mon[75050]: 11.10 scrub starts
Nov 29 07:18:17 compute-0 ceph-mon[75050]: 11.10 scrub ok
Nov 29 07:18:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:17 compute-0 sudo[111438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzzurhytzygruulbrahawkacqmspsji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400697.5218356-260-200886818524393/AnsiballZ_group.py'
Nov 29 07:18:17 compute-0 sudo[111438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:18 compute-0 python3.9[111440]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:18:18 compute-0 sudo[111438]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:18 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 29 07:18:18 compute-0 ceph-mon[75050]: 10.1e scrub starts
Nov 29 07:18:18 compute-0 ceph-mon[75050]: 10.1e scrub ok
Nov 29 07:18:18 compute-0 ceph-mon[75050]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:18 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 29 07:18:18 compute-0 sudo[111590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwrnlvifugplcgohfhhkxzqwqzxryngv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400698.4038167-269-112997923406878/AnsiballZ_file.py'
Nov 29 07:18:18 compute-0 sudo[111590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:18 compute-0 python3.9[111592]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 07:18:18 compute-0 sudo[111590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:19 compute-0 sudo[111742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzakstnjaovmiabwxkjnkbqxomgxprij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400699.2609243-280-218566297770995/AnsiballZ_dnf.py'
Nov 29 07:18:19 compute-0 sudo[111742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:19 compute-0 python3.9[111744]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:18:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Nov 29 07:18:19 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Nov 29 07:18:19 compute-0 ceph-mon[75050]: 10.d scrub starts
Nov 29 07:18:19 compute-0 ceph-mon[75050]: 10.d scrub ok
Nov 29 07:18:20 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 29 07:18:20 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 29 07:18:20 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.1c deep-scrub starts
Nov 29 07:18:20 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 8.1c deep-scrub ok
Nov 29 07:18:20 compute-0 ceph-mon[75050]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:20 compute-0 ceph-mon[75050]: 11.1f deep-scrub starts
Nov 29 07:18:20 compute-0 ceph-mon[75050]: 11.1f deep-scrub ok
Nov 29 07:18:21 compute-0 sudo[111742]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:21 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 29 07:18:21 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 29 07:18:21 compute-0 sudo[111895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnamyoxyxssxzsewkibdvogpypybvnhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400701.470191-288-33801746769629/AnsiballZ_file.py'
Nov 29 07:18:21 compute-0 sudo[111895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:21 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 29 07:18:21 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 29 07:18:21 compute-0 python3.9[111897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:18:21 compute-0 sudo[111895]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:22 compute-0 ceph-mon[75050]: 10.7 scrub starts
Nov 29 07:18:22 compute-0 ceph-mon[75050]: 10.7 scrub ok
Nov 29 07:18:22 compute-0 ceph-mon[75050]: 8.1c deep-scrub starts
Nov 29 07:18:22 compute-0 ceph-mon[75050]: 8.1c deep-scrub ok
Nov 29 07:18:22 compute-0 ceph-mon[75050]: pgmap v287: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:22 compute-0 sudo[112047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlnarhombloduodrqeqpwdlkzriomgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400702.1640775-296-266809060656184/AnsiballZ_stat.py'
Nov 29 07:18:22 compute-0 sudo[112047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:22 compute-0 python3.9[112049]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:22 compute-0 sudo[112047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:22 compute-0 sudo[112125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unkdxzxkmhircqemnprqifosgimbopzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400702.1640775-296-266809060656184/AnsiballZ_file.py'
Nov 29 07:18:22 compute-0 sudo[112125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:23 compute-0 python3.9[112127]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:18:23 compute-0 sudo[112125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:23 compute-0 sudo[112277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyhtggiqyvhparahtewuqltanfwcjxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400703.3324823-309-46612210670093/AnsiballZ_stat.py'
Nov 29 07:18:23 compute-0 sudo[112277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:23 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 29 07:18:23 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 29 07:18:23 compute-0 ceph-mon[75050]: 10.9 scrub starts
Nov 29 07:18:23 compute-0 ceph-mon[75050]: 10.9 scrub ok
Nov 29 07:18:23 compute-0 ceph-mon[75050]: 9.6 scrub starts
Nov 29 07:18:23 compute-0 ceph-mon[75050]: 9.6 scrub ok
Nov 29 07:18:23 compute-0 python3.9[112279]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:18:23 compute-0 sudo[112277]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:24 compute-0 sudo[112355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trlqmoqwtynhcyvozprixwbccbakbfhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400703.3324823-309-46612210670093/AnsiballZ_file.py'
Nov 29 07:18:24 compute-0 sudo[112355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:24 compute-0 python3.9[112357]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:18:24 compute-0 sudo[112355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:25 compute-0 sudo[112507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feodjviygcbofxhupsuafpyroqwvmhfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400704.855525-324-62528224468021/AnsiballZ_dnf.py'
Nov 29 07:18:25 compute-0 sudo[112507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:25 compute-0 python3.9[112509]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:18:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:26 compute-0 ceph-mon[75050]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:26 compute-0 ceph-mon[75050]: 9.e scrub starts
Nov 29 07:18:26 compute-0 ceph-mon[75050]: 9.e scrub ok
Nov 29 07:18:27 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 29 07:18:27 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 29 07:18:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:27 compute-0 sudo[112507]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:27 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 29 07:18:27 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 29 07:18:27 compute-0 ceph-mon[75050]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:28 compute-0 python3.9[112660]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:18:28 compute-0 python3.9[112812]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 07:18:29 compute-0 ceph-mon[75050]: 11.16 scrub starts
Nov 29 07:18:29 compute-0 ceph-mon[75050]: 11.16 scrub ok
Nov 29 07:18:29 compute-0 ceph-mon[75050]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:29 compute-0 ceph-mon[75050]: 10.e scrub starts
Nov 29 07:18:29 compute-0 ceph-mon[75050]: 10.e scrub ok
Nov 29 07:18:29 compute-0 python3.9[112962]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:18:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:29 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.4 deep-scrub starts
Nov 29 07:18:29 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.4 deep-scrub ok
Nov 29 07:18:30 compute-0 ceph-mon[75050]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:30 compute-0 sudo[113112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyibblqyduqwflfalsmcsgkxvpbzvxar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400709.9486604-365-216766599720804/AnsiballZ_systemd.py'
Nov 29 07:18:30 compute-0 sudo[113112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:30 compute-0 python3.9[113114]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:18:30 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 07:18:31 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 07:18:31 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 07:18:31 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 07:18:31 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 07:18:31 compute-0 sudo[113112]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:31 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 29 07:18:31 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 29 07:18:32 compute-0 python3.9[113277]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 07:18:32 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 29 07:18:32 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 29 07:18:32 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 29 07:18:32 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 29 07:18:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:33 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Nov 29 07:18:34 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 29 07:18:35 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 29 07:18:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:18:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:18:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:18:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:18:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:18:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:18:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:36 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 07:18:36 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 29 07:18:36 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:18:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:37 compute-0 sudo[113427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khaigrjcefinohiyztsoaidahtbnkaph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400714.5125782-422-192179307198243/AnsiballZ_systemd.py'
Nov 29 07:18:37 compute-0 sudo[113427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:37 compute-0 ceph-mon[75050]: 10.4 deep-scrub starts
Nov 29 07:18:37 compute-0 ceph-mon[75050]: 10.4 deep-scrub ok
Nov 29 07:18:37 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 5.925307274s
Nov 29 07:18:37 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 5.925307751s
Nov 29 07:18:37 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.925551891s, txc = 0x560f411c4300
Nov 29 07:18:37 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 29 07:18:37 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Nov 29 07:18:37 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 29 07:18:37 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.912580490s, txc = 0x560f411c9800
Nov 29 07:18:37 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 29 07:18:37 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 07:18:37 compute-0 python3.9[113429]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:18:37 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.f deep-scrub starts
Nov 29 07:18:37 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.f deep-scrub ok
Nov 29 07:18:37 compute-0 sudo[113427]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:38 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 29 07:18:38 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 29 07:18:38 compute-0 sudo[113581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qykodrejoxsezxxwfjfqwidqcebiplbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400718.074638-422-44530828694675/AnsiballZ_systemd.py'
Nov 29 07:18:38 compute-0 sudo[113581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:38 compute-0 python3.9[113583]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:18:38 compute-0 sudo[113581]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:39 compute-0 ceph-mon[75050]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 9.17 scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 9.17 scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.8 deep-scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.8 deep-scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 11.1d scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 11.1d scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.16 deep-scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.1 scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.b scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.19 scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 9.7 scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.16 deep-scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 9.7 scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.b scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.1 scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.19 scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 9.f deep-scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 9.f deep-scrub ok
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.13 scrub starts
Nov 29 07:18:39 compute-0 ceph-mon[75050]: 10.13 scrub ok
Nov 29 07:18:39 compute-0 sshd-session[106603]: Connection closed by 192.168.122.30 port 46094
Nov 29 07:18:39 compute-0 sshd-session[106600]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:18:39 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 07:18:39 compute-0 systemd[1]: session-34.scope: Consumed 1min 5.817s CPU time.
Nov 29 07:18:39 compute-0 systemd-logind[807]: Session 34 logged out. Waiting for processes to exit.
Nov 29 07:18:39 compute-0 systemd-logind[807]: Removed session 34.
Nov 29 07:18:39 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 07:18:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 1 active+clean+scrubbing+deep, 3 active+clean+scrubbing, 301 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:39 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 07:18:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 29 07:18:39 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 29 07:18:40 compute-0 ceph-mon[75050]: 10.15 scrub starts
Nov 29 07:18:40 compute-0 ceph-mon[75050]: pgmap v296: 305 pgs: 1 active+clean+scrubbing+deep, 3 active+clean+scrubbing, 301 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:40 compute-0 ceph-mon[75050]: 10.15 scrub ok
Nov 29 07:18:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 1 active+clean+scrubbing+deep, 3 active+clean+scrubbing, 301 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:42 compute-0 ceph-mon[75050]: 9.8 scrub starts
Nov 29 07:18:42 compute-0 ceph-mon[75050]: 9.8 scrub ok
Nov 29 07:18:42 compute-0 ceph-mon[75050]: pgmap v297: 305 pgs: 1 active+clean+scrubbing+deep, 3 active+clean+scrubbing, 301 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:44 compute-0 sshd-session[113610]: Accepted publickey for zuul from 192.168.122.30 port 56550 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:18:44 compute-0 systemd-logind[807]: New session 35 of user zuul.
Nov 29 07:18:44 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 29 07:18:44 compute-0 sshd-session[113610]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:18:45 compute-0 ceph-mon[75050]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:46 compute-0 python3.9[113763]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:18:46 compute-0 ceph-mon[75050]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:46 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 29 07:18:46 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 29 07:18:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:47 compute-0 sudo[113917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfhexlqtndbyzzptmvuhcmiluomabfki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400726.6593738-36-206259490113364/AnsiballZ_getent.py'
Nov 29 07:18:47 compute-0 sudo[113917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:47 compute-0 python3.9[113919]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 07:18:47 compute-0 sudo[113917]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:48 compute-0 sudo[114070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgnqmxksrojiddewfqjeprcfguasjaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400727.7651703-48-262080530933474/AnsiballZ_setup.py'
Nov 29 07:18:48 compute-0 sudo[114070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:48 compute-0 ceph-mon[75050]: 10.17 scrub starts
Nov 29 07:18:48 compute-0 ceph-mon[75050]: 10.17 scrub ok
Nov 29 07:18:48 compute-0 python3.9[114072]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:18:48 compute-0 sudo[114070]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:49 compute-0 sudo[114154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytrpamshwhajawnbwkuhnapiinbopgno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400727.7651703-48-262080530933474/AnsiballZ_dnf.py'
Nov 29 07:18:49 compute-0 sudo[114154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:49 compute-0 python3.9[114156]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:18:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:49 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 29 07:18:49 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 29 07:18:50 compute-0 sudo[114158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:50 compute-0 sudo[114158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:50 compute-0 sudo[114158]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:50 compute-0 sudo[114183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:18:50 compute-0 sudo[114183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:50 compute-0 sudo[114183]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:50 compute-0 sudo[114208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:50 compute-0 sudo[114208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:50 compute-0 sudo[114208]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-0 sudo[114233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:18:51 compute-0 sudo[114233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 29 07:18:51 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 29 07:18:51 compute-0 sudo[114233]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:18:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:18:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:18:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:18:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:18:51 compute-0 ceph-mon[75050]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:51 compute-0 ceph-mon[75050]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:51 compute-0 ceph-mon[75050]: 9.18 scrub starts
Nov 29 07:18:51 compute-0 ceph-mon[75050]: 9.18 scrub ok
Nov 29 07:18:51 compute-0 ceph-mon[75050]: 10.1a scrub starts
Nov 29 07:18:51 compute-0 ceph-mon[75050]: 10.1a scrub ok
Nov 29 07:18:51 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:18:51 compute-0 sudo[114154]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:18:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ed5100d2-0051-4305-8b38-b32482070194 does not exist
Nov 29 07:18:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ff18eade-8798-42e2-83af-eda1679548d5 does not exist
Nov 29 07:18:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5c8a34cd-49bf-40a2-b046-16c852121e7a does not exist
Nov 29 07:18:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:18:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:18:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:18:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:18:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:18:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:18:51 compute-0 sudo[114290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:51 compute-0 sudo[114290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:51 compute-0 sudo[114290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-0 sudo[114338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:18:51 compute-0 sudo[114338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:51 compute-0 sudo[114338]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-0 sudo[114363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:51 compute-0 sudo[114363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:51 compute-0 sudo[114363]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:51 compute-0 sudo[114411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:18:51 compute-0 sudo[114411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:52 compute-0 sudo[114561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcdkehihyxxxwpkcjphggrthcwlwwnxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400731.928873-62-143950971543763/AnsiballZ_dnf.py'
Nov 29 07:18:52 compute-0 sudo[114561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:52 compute-0 podman[114577]: 2025-11-29 07:18:52.351314558 +0000 UTC m=+0.027481686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:18:52 compute-0 python3.9[114565]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:18:52 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 29 07:18:52 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 29 07:18:52 compute-0 podman[114577]: 2025-11-29 07:18:52.698674368 +0000 UTC m=+0.374841416 container create 34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:18:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:18:52 compute-0 ceph-mon[75050]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:18:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:18:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:18:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:18:52 compute-0 systemd[1]: Started libpod-conmon-34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69.scope.
Nov 29 07:18:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:18:52 compute-0 podman[114577]: 2025-11-29 07:18:52.888656064 +0000 UTC m=+0.564823112 container init 34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_aryabhata, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:18:52 compute-0 podman[114577]: 2025-11-29 07:18:52.901380542 +0000 UTC m=+0.577547570 container start 34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:18:52 compute-0 podman[114577]: 2025-11-29 07:18:52.908850668 +0000 UTC m=+0.585017696 container attach 34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:18:52 compute-0 tender_aryabhata[114595]: 167 167
Nov 29 07:18:52 compute-0 systemd[1]: libpod-34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69.scope: Deactivated successfully.
Nov 29 07:18:52 compute-0 conmon[114595]: conmon 34f9beec1eed7bd857dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69.scope/container/memory.events
Nov 29 07:18:52 compute-0 podman[114577]: 2025-11-29 07:18:52.912945526 +0000 UTC m=+0.589112554 container died 34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 07:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-19c9824f9026a77ede9bc3cd7db7058dc7b8da4ed2060951cd26467ab585468a-merged.mount: Deactivated successfully.
Nov 29 07:18:52 compute-0 podman[114577]: 2025-11-29 07:18:52.959252666 +0000 UTC m=+0.635419694 container remove 34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:18:52 compute-0 systemd[1]: libpod-conmon-34f9beec1eed7bd857dd57d993fe36d5d5e5d0336b978058d58806537addeb69.scope: Deactivated successfully.
Nov 29 07:18:53 compute-0 podman[114619]: 2025-11-29 07:18:53.111383258 +0000 UTC m=+0.030994258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:18:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:53 compute-0 podman[114619]: 2025-11-29 07:18:53.865799435 +0000 UTC m=+0.785410415 container create 742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:18:53 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 07:18:53 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 07:18:54 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 07:18:54 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 07:18:54 compute-0 ceph-mon[75050]: 9.11 scrub starts
Nov 29 07:18:54 compute-0 ceph-mon[75050]: 9.11 scrub ok
Nov 29 07:18:54 compute-0 ceph-mon[75050]: pgmap v303: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:55 compute-0 systemd[1]: Started libpod-conmon-742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa.scope.
Nov 29 07:18:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82d79fee87b8ec8508200b185a42847a47b413c7a92cdd9c1e558aa4daad735/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82d79fee87b8ec8508200b185a42847a47b413c7a92cdd9c1e558aa4daad735/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82d79fee87b8ec8508200b185a42847a47b413c7a92cdd9c1e558aa4daad735/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82d79fee87b8ec8508200b185a42847a47b413c7a92cdd9c1e558aa4daad735/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82d79fee87b8ec8508200b185a42847a47b413c7a92cdd9c1e558aa4daad735/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:55 compute-0 sudo[114561]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:55 compute-0 podman[114619]: 2025-11-29 07:18:55.687619662 +0000 UTC m=+2.607230672 container init 742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:18:55 compute-0 podman[114619]: 2025-11-29 07:18:55.701220355 +0000 UTC m=+2.620831365 container start 742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:18:55 compute-0 podman[114619]: 2025-11-29 07:18:55.736789354 +0000 UTC m=+2.656400514 container attach 742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:18:55 compute-0 ceph-mon[75050]: 9.c scrub starts
Nov 29 07:18:55 compute-0 ceph-mon[75050]: 9.c scrub ok
Nov 29 07:18:55 compute-0 ceph-mon[75050]: 10.10 scrub starts
Nov 29 07:18:55 compute-0 ceph-mon[75050]: 10.10 scrub ok
Nov 29 07:18:55 compute-0 ceph-mon[75050]: pgmap v304: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:55 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 29 07:18:55 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 29 07:18:56 compute-0 sudo[114790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtptpfardynejfipsaulqetodbtzbsop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400735.5591443-70-229175134790267/AnsiballZ_systemd.py'
Nov 29 07:18:56 compute-0 sudo[114790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:56 compute-0 python3.9[114792]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:18:56 compute-0 sudo[114790]: pam_unix(sudo:session): session closed for user root
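
The task above (ansible-ansible.builtin.systemd with enabled=True and state=started) ensures openvswitch.service is both enabled at boot and currently running. As a rough sketch of the same operation done by hand, assuming systemd's standard CLI rather than the ansible module this deployment actually uses:

    import subprocess

    # Approximate equivalent of enabled=True, state=started for a unit.
    for verb in ("enable", "start"):
        subprocess.run(["systemctl", verb, "openvswitch.service"], check=True)
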
Nov 29 07:18:56 compute-0 lucid_jones[114636]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:18:56 compute-0 lucid_jones[114636]: --> relative data size: 1.0
Nov 29 07:18:56 compute-0 lucid_jones[114636]: --> All data devices are unavailable
Nov 29 07:18:56 compute-0 systemd[1]: libpod-742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa.scope: Deactivated successfully.
Nov 29 07:18:56 compute-0 systemd[1]: libpod-742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa.scope: Consumed 1.086s CPU time.
Nov 29 07:18:56 compute-0 podman[114619]: 2025-11-29 07:18:56.844838181 +0000 UTC m=+3.764449152 container died 742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:18:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:18:57 compute-0 ceph-mon[75050]: 9.13 scrub starts
Nov 29 07:18:57 compute-0 ceph-mon[75050]: 9.13 scrub ok
Nov 29 07:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f82d79fee87b8ec8508200b185a42847a47b413c7a92cdd9c1e558aa4daad735-merged.mount: Deactivated successfully.
Nov 29 07:18:57 compute-0 podman[114619]: 2025-11-29 07:18:57.38192586 +0000 UTC m=+4.301536830 container remove 742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:18:57 compute-0 sudo[114411]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:57 compute-0 systemd[1]: libpod-conmon-742205aa42fb6d17feca97f2ede4033065e868690d678151afcf3a2759e85afa.scope: Deactivated successfully.
Nov 29 07:18:57 compute-0 python3.9[114980]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:18:57 compute-0 sudo[114982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:57 compute-0 sudo[114982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:57 compute-0 sudo[114982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:57 compute-0 sudo[115012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:18:57 compute-0 sudo[115012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:57 compute-0 sudo[115012]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:57 compute-0 sudo[115056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:18:57 compute-0 sudo[115056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:57 compute-0 sudo[115056]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:57 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 29 07:18:57 compute-0 sudo[115081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:18:57 compute-0 sudo[115081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:18:57 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 29 07:18:58 compute-0 podman[115200]: 2025-11-29 07:18:58.018128696 +0000 UTC m=+0.043423437 container create bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_vaughan, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:18:58 compute-0 systemd[1]: Started libpod-conmon-bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd.scope.
Nov 29 07:18:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:18:58 compute-0 podman[115200]: 2025-11-29 07:18:58.001199197 +0000 UTC m=+0.026493958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:18:58 compute-0 podman[115200]: 2025-11-29 07:18:58.098948175 +0000 UTC m=+0.124242936 container init bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:18:58 compute-0 podman[115200]: 2025-11-29 07:18:58.108929033 +0000 UTC m=+0.134223774 container start bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:18:58 compute-0 podman[115200]: 2025-11-29 07:18:58.114436822 +0000 UTC m=+0.139731583 container attach bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:18:58 compute-0 stoic_vaughan[115242]: 167 167
Nov 29 07:18:58 compute-0 systemd[1]: libpod-bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd.scope: Deactivated successfully.
Nov 29 07:18:58 compute-0 podman[115200]: 2025-11-29 07:18:58.116925995 +0000 UTC m=+0.142220756 container died bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8443556f781c3e736feeefb792497c4025a2f4929a86ce37d12975f04078f056-merged.mount: Deactivated successfully.
Nov 29 07:18:58 compute-0 podman[115200]: 2025-11-29 07:18:58.166765077 +0000 UTC m=+0.192059818 container remove bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_vaughan, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:18:58 compute-0 systemd[1]: libpod-conmon-bf07a2d69be332114712e14892d1e75e2cb261c5818fa1eada44bbd13506f8fd.scope: Deactivated successfully.
Nov 29 07:18:58 compute-0 sudo[115306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmlwkwyedgbgjajirwclfhzncelqhkpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400737.679912-88-96864421414285/AnsiballZ_sefcontext.py'
Nov 29 07:18:58 compute-0 sudo[115306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:18:58 compute-0 podman[115316]: 2025-11-29 07:18:58.327490936 +0000 UTC m=+0.044347844 container create bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:18:58 compute-0 ceph-mon[75050]: pgmap v305: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:58 compute-0 systemd[1]: Started libpod-conmon-bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286.scope.
Nov 29 07:18:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197dc47cf27cb6076cf05db109dd6add96e2b987b634bb5ad7f74ff6c5aad520/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:58 compute-0 podman[115316]: 2025-11-29 07:18:58.307149698 +0000 UTC m=+0.024006626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197dc47cf27cb6076cf05db109dd6add96e2b987b634bb5ad7f74ff6c5aad520/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197dc47cf27cb6076cf05db109dd6add96e2b987b634bb5ad7f74ff6c5aad520/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/197dc47cf27cb6076cf05db109dd6add96e2b987b634bb5ad7f74ff6c5aad520/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:18:58 compute-0 podman[115316]: 2025-11-29 07:18:58.417390087 +0000 UTC m=+0.134247025 container init bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:18:58 compute-0 podman[115316]: 2025-11-29 07:18:58.427226292 +0000 UTC m=+0.144083240 container start bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:18:58 compute-0 podman[115316]: 2025-11-29 07:18:58.43301702 +0000 UTC m=+0.149873958 container attach bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:18:58 compute-0 python3.9[115310]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 07:18:58 compute-0 sudo[115306]: pam_unix(sudo:session): session closed for user root
Nov 29 07:18:59 compute-0 frosty_solomon[115334]: {
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:     "0": [
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:         {
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "devices": [
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "/dev/loop3"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             ],
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_name": "ceph_lv0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_size": "21470642176",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "name": "ceph_lv0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "tags": {
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cluster_name": "ceph",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.crush_device_class": "",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.encrypted": "0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osd_id": "0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.type": "block",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.vdo": "0"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             },
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "type": "block",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "vg_name": "ceph_vg0"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:         }
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:     ],
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:     "1": [
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:         {
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "devices": [
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "/dev/loop4"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             ],
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_name": "ceph_lv1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_size": "21470642176",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "name": "ceph_lv1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "tags": {
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cluster_name": "ceph",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.crush_device_class": "",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.encrypted": "0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osd_id": "1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.type": "block",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.vdo": "0"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             },
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "type": "block",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "vg_name": "ceph_vg1"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:         }
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:     ],
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:     "2": [
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:         {
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "devices": [
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "/dev/loop5"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             ],
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_name": "ceph_lv2",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_size": "21470642176",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "name": "ceph_lv2",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "tags": {
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.cluster_name": "ceph",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.crush_device_class": "",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.encrypted": "0",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osd_id": "2",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.type": "block",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:                 "ceph.vdo": "0"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             },
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "type": "block",
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:             "vg_name": "ceph_vg2"
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:         }
Nov 29 07:18:59 compute-0 frosty_solomon[115334]:     ]
Nov 29 07:18:59 compute-0 frosty_solomon[115334]: }
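
The JSON printed by frosty_solomon above is the output of the cephadm-driven `ceph-volume ... lvm list --format json` call issued via sudo a few lines earlier: a map from OSD id to the logical volume(s) backing it, with cluster and OSD identity carried in LV tags. A minimal sketch of summarizing it, assuming the output was captured to a file (lvm_list.json is a hypothetical name):

    import json

    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        lvm = json.load(f)

    for osd_id in sorted(lvm, key=int):
        for lv in lvm[osd_id]:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 osd_fsid=8cd0a453-... encrypted=0
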
Nov 29 07:18:59 compute-0 systemd[1]: libpod-bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286.scope: Deactivated successfully.
Nov 29 07:18:59 compute-0 podman[115316]: 2025-11-29 07:18:59.299263351 +0000 UTC m=+1.016120259 container died bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-197dc47cf27cb6076cf05db109dd6add96e2b987b634bb5ad7f74ff6c5aad520-merged.mount: Deactivated successfully.
Nov 29 07:18:59 compute-0 python3.9[115492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:18:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:18:59 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Nov 29 07:18:59 compute-0 ceph-mon[75050]: 9.9 scrub starts
Nov 29 07:18:59 compute-0 ceph-mon[75050]: 9.9 scrub ok
Nov 29 07:18:59 compute-0 ceph-osd[91083]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Nov 29 07:19:00 compute-0 sudo[115660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdfhosynonpwtyynrxdvnbvkqjtiygam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400740.0561185-106-18816259307703/AnsiballZ_dnf.py'
Nov 29 07:19:00 compute-0 sudo[115660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:00 compute-0 python3.9[115662]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:19:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:01 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 07:19:01 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 07:19:02 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 07:19:02 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 07:19:02 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 29 07:19:03 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 07:19:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 07:19:04 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:19:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:04 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 29 07:19:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 07:19:04 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 07:19:04 compute-0 podman[115316]: 2025-11-29 07:19:04.922916982 +0000 UTC m=+6.639773900 container remove bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_solomon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:19:04 compute-0 ceph-mon[75050]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 9.19 deep-scrub starts
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 9.19 deep-scrub ok
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 9.1 scrub starts
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 9.1 scrub ok
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 10.11 scrub starts
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 10.11 scrub ok
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 9.b scrub starts
Nov 29 07:19:04 compute-0 ceph-mon[75050]: 10.2 scrub starts
Nov 29 07:19:04 compute-0 sudo[115081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:04 compute-0 systemd[1]: libpod-conmon-bc433b0cb690a77d5da42f535cd5b343f0ad4240c2b19c4eff9f454b5d4cf286.scope: Deactivated successfully.
Nov 29 07:19:05 compute-0 sudo[115660]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:05 compute-0 sudo[115664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:05 compute-0 sudo[115664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:05 compute-0 sudo[115664]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:05 compute-0 sudo[115695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:19:05 compute-0 sudo[115695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:05 compute-0 sudo[115695]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:05 compute-0 sudo[115738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:05 compute-0 sudo[115738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:05 compute-0 sudo[115738]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:05 compute-0 sudo[115793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:19:05 compute-0 sudo[115793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:19:05
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:19:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:05 compute-0 sudo[115969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asvbqbyfpsrdlhuajedavklcjdwbwlgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400745.202195-114-97743632438416/AnsiballZ_command.py'
Nov 29 07:19:05 compute-0 sudo[115969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:05 compute-0 podman[115929]: 2025-11-29 07:19:05.673497947 +0000 UTC m=+0.031983137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:19:05 compute-0 python3.9[115971]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
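
The command task above re-verifies, via rpm -V, the same package set the preceding dnf task installed. rpm -V is silent when every file matches the package database and prints one line per file that fails verification, so empty output is the success case. A minimal sketch of running the same check from Python over a subset of that list:

    import subprocess

    pkgs = ["driverctl", "lvm2", "crudini", "jq", "nftables"]  # subset of the list above
    result = subprocess.run(["rpm", "-V", *pkgs], capture_output=True, text=True)
    # rpm -V emits nothing on success; any line names a file that differs.
    for line in result.stdout.splitlines():
        print("verify failed:", line)
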
Nov 29 07:19:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Nov 29 07:19:06 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Nov 29 07:19:06 compute-0 podman[115929]: 2025-11-29 07:19:06.197905609 +0000 UTC m=+0.556390799 container create 44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:19:06 compute-0 ceph-mon[75050]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:06 compute-0 ceph-mon[75050]: pgmap v308: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:06 compute-0 ceph-mon[75050]: 10.12 scrub starts
Nov 29 07:19:06 compute-0 ceph-mon[75050]: 9.b scrub ok
Nov 29 07:19:06 compute-0 ceph-mon[75050]: 10.12 scrub ok
Nov 29 07:19:06 compute-0 ceph-mon[75050]: 10.2 scrub ok
Nov 29 07:19:06 compute-0 ceph-mon[75050]: pgmap v309: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:06 compute-0 systemd[1]: Started libpod-conmon-44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a.scope.
Nov 29 07:19:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:19:06 compute-0 podman[115929]: 2025-11-29 07:19:06.301848186 +0000 UTC m=+0.660333356 container init 44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:19:06 compute-0 podman[115929]: 2025-11-29 07:19:06.314527542 +0000 UTC m=+0.673012742 container start 44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:19:06 compute-0 optimistic_leakey[115980]: 167 167
Nov 29 07:19:06 compute-0 systemd[1]: libpod-44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a.scope: Deactivated successfully.
Nov 29 07:19:06 compute-0 podman[115929]: 2025-11-29 07:19:06.392294922 +0000 UTC m=+0.750780092 container attach 44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:19:06 compute-0 podman[115929]: 2025-11-29 07:19:06.392747395 +0000 UTC m=+0.751232575 container died 44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-91de12018c2cbf0420b2594a338f2f491ffa0f49e63d8d0492f85dab9bbdf327-merged.mount: Deactivated successfully.
Nov 29 07:19:06 compute-0 podman[115929]: 2025-11-29 07:19:06.442229017 +0000 UTC m=+0.800714167 container remove 44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_leakey, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:19:06 compute-0 systemd[1]: libpod-conmon-44e2cc7babd1fa500c92b409032004ca2175fce1d8726555f998b2ee867af03a.scope: Deactivated successfully.
Nov 29 07:19:06 compute-0 podman[116133]: 2025-11-29 07:19:06.644087697 +0000 UTC m=+0.070070398 container create de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:19:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:19:06 compute-0 systemd[1]: Started libpod-conmon-de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956.scope.
Nov 29 07:19:06 compute-0 podman[116133]: 2025-11-29 07:19:06.612063141 +0000 UTC m=+0.038045882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:19:06 compute-0 sudo[115969]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b41364bef918529882bfa27d6a3cc1c3b8a3505062735b455a3e32e1a3554b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b41364bef918529882bfa27d6a3cc1c3b8a3505062735b455a3e32e1a3554b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b41364bef918529882bfa27d6a3cc1c3b8a3505062735b455a3e32e1a3554b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b41364bef918529882bfa27d6a3cc1c3b8a3505062735b455a3e32e1a3554b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:19:06 compute-0 podman[116133]: 2025-11-29 07:19:06.771491994 +0000 UTC m=+0.197474675 container init de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:19:06 compute-0 podman[116133]: 2025-11-29 07:19:06.789414992 +0000 UTC m=+0.215397653 container start de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:19:06 compute-0 podman[116133]: 2025-11-29 07:19:06.793112788 +0000 UTC m=+0.219095479 container attach de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:19:07 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 29 07:19:07 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 29 07:19:07 compute-0 ceph-mon[75050]: 10.6 deep-scrub starts
Nov 29 07:19:07 compute-0 ceph-mon[75050]: 10.6 deep-scrub ok
Nov 29 07:19:07 compute-0 sudo[116304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihgurzflgtidffvliathqwclrklsnfxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400746.913135-122-211840004531224/AnsiballZ_file.py'
Nov 29 07:19:07 compute-0 sudo[116304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:07 compute-0 python3.9[116306]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:19:07 compute-0 sudo[116304]: pam_unix(sudo:session): session closed for user root
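
Taken together, the sefcontext task seen a few seconds earlier and the file task above first persist an SELinux file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t, then create the directory with mode 0750 and that context. A rough hand-rolled sketch of the same sequence, assuming the standard policycoreutils tooling (the ansible modules remain the authoritative path here):

    import subprocess

    # Persist the context rule, create the directory, then apply the rule.
    subprocess.run(["semanage", "fcontext", "-a", "-t", "container_file_t",
                    r"/var/lib/edpm-config(/.*)?"], check=True)
    subprocess.run(["install", "-d", "-m", "0750", "/var/lib/edpm-config"], check=True)
    subprocess.run(["restorecon", "-R", "/var/lib/edpm-config"], check=True)
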
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]: {
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "osd_id": 2,
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "type": "bluestore"
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:     },
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "osd_id": 1,
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "type": "bluestore"
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:     },
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "osd_id": 0,
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:         "type": "bluestore"
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]:     }
Nov 29 07:19:07 compute-0 flamboyant_cohen[116150]: }
Nov 29 07:19:07 compute-0 systemd[1]: libpod-de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956.scope: Deactivated successfully.
Nov 29 07:19:07 compute-0 systemd[1]: libpod-de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956.scope: Consumed 1.092s CPU time.
Nov 29 07:19:07 compute-0 podman[116133]: 2025-11-29 07:19:07.889056335 +0000 UTC m=+1.315039056 container died de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:19:08 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 29 07:19:08 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 29 07:19:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:10 compute-0 python3.9[116496]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:19:11 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 29 07:19:11 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 29 07:19:11 compute-0 sudo[116648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eissnlsunbownkphdhhbnnkbckuxhfok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400750.9815273-138-187615060733307/AnsiballZ_dnf.py'
Nov 29 07:19:11 compute-0 sudo[116648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:11 compute-0 python3.9[116650]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:19:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:12 compute-0 ceph-mon[75050]: 10.14 scrub starts
Nov 29 07:19:12 compute-0 ceph-mon[75050]: 10.14 scrub ok
Nov 29 07:19:12 compute-0 ceph-mon[75050]: pgmap v310: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b41364bef918529882bfa27d6a3cc1c3b8a3505062735b455a3e32e1a3554b3-merged.mount: Deactivated successfully.
Nov 29 07:19:12 compute-0 podman[116133]: 2025-11-29 07:19:12.508836782 +0000 UTC m=+5.934819473 container remove de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cohen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:19:12 compute-0 sudo[115793]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:19:12 compute-0 systemd[1]: libpod-conmon-de78ae27f6b39a4426d75b38cd3d11774cd9c48c9fc3ebc77de373d6ad18e956.scope: Deactivated successfully.
Nov 29 07:19:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:19:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:19:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:19:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 061209d4-fcb7-4d80-bf56-989a71218a65 does not exist
Nov 29 07:19:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4c11200a-125f-4715-b282-93fb7d46ab10 does not exist
Nov 29 07:19:13 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 29 07:19:13 compute-0 sudo[116653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:19:13 compute-0 sudo[116653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:13 compute-0 sudo[116653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:13 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 29 07:19:13 compute-0 sudo[116678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:19:13 compute-0 sudo[116678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:19:13 compute-0 sudo[116678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:13 compute-0 sudo[116648]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:13 compute-0 ceph-mon[75050]: 9.d scrub starts
Nov 29 07:19:13 compute-0 ceph-mon[75050]: 9.d scrub ok
Nov 29 07:19:13 compute-0 ceph-mon[75050]: pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:13 compute-0 ceph-mon[75050]: 10.f scrub starts
Nov 29 07:19:13 compute-0 ceph-mon[75050]: 10.f scrub ok
Nov 29 07:19:13 compute-0 ceph-mon[75050]: pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:19:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:19:13 compute-0 sudo[116852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcgbvnwodokrostxbupzdjuufyjttclv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400753.370697-147-52293600224129/AnsiballZ_dnf.py'
Nov 29 07:19:13 compute-0 sudo[116852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:13 compute-0 python3.9[116854]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:19:14 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 29 07:19:14 compute-0 ceph-osd[89840]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:19:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:19:14 compute-0 ceph-mon[75050]: 9.15 scrub starts
Nov 29 07:19:14 compute-0 ceph-mon[75050]: 9.15 scrub ok
Nov 29 07:19:14 compute-0 ceph-mon[75050]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:14 compute-0 ceph-mon[75050]: 9.1f scrub starts
Nov 29 07:19:14 compute-0 ceph-mon[75050]: 9.1f scrub ok
Nov 29 07:19:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:15 compute-0 sudo[116852]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 29 07:19:15 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 29 07:19:16 compute-0 sudo[117005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxezkpxjwwbbdpeptlvjubealplfjrci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400755.998869-159-123821439462991/AnsiballZ_stat.py'
Nov 29 07:19:16 compute-0 sudo[117005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:16 compute-0 python3.9[117007]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:19:16 compute-0 sudo[117005]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:16 compute-0 ceph-mon[75050]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:16 compute-0 ceph-mon[75050]: 9.1b scrub starts
Nov 29 07:19:16 compute-0 ceph-mon[75050]: 9.1b scrub ok
Nov 29 07:19:17 compute-0 sudo[117159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkwpwthoomigkvyxdbtxajlkiitdgkyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400756.7264228-167-210178336163655/AnsiballZ_slurp.py'
Nov 29 07:19:17 compute-0 sudo[117159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:17 compute-0 python3.9[117161]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 07:19:17 compute-0 sudo[117159]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:18 compute-0 ceph-mon[75050]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:18 compute-0 sshd-session[113613]: Connection closed by 192.168.122.30 port 56550
Nov 29 07:19:18 compute-0 sshd-session[113610]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:19:18 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 07:19:18 compute-0 systemd[1]: session-35.scope: Consumed 19.958s CPU time.
Nov 29 07:19:18 compute-0 systemd-logind[807]: Session 35 logged out. Waiting for processes to exit.
Nov 29 07:19:18 compute-0 systemd-logind[807]: Removed session 35.
Nov 29 07:19:18 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.3 deep-scrub starts
Nov 29 07:19:18 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.3 deep-scrub ok
Nov 29 07:19:19 compute-0 ceph-mon[75050]: 9.3 deep-scrub starts
Nov 29 07:19:19 compute-0 ceph-mon[75050]: 9.3 deep-scrub ok
Nov 29 07:19:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:20 compute-0 ceph-mon[75050]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:21 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 29 07:19:21 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 29 07:19:22 compute-0 ceph-mon[75050]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:22 compute-0 ceph-mon[75050]: 9.1d scrub starts
Nov 29 07:19:22 compute-0 ceph-mon[75050]: 9.1d scrub ok
Nov 29 07:19:22 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 29 07:19:22 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 29 07:19:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:23 compute-0 ceph-mon[75050]: 9.5 scrub starts
Nov 29 07:19:23 compute-0 ceph-mon[75050]: 9.5 scrub ok
Nov 29 07:19:24 compute-0 sshd-session[117187]: Accepted publickey for zuul from 192.168.122.30 port 55092 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:19:24 compute-0 systemd-logind[807]: New session 36 of user zuul.
Nov 29 07:19:24 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 29 07:19:24 compute-0 sshd-session[117187]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:19:24 compute-0 ceph-mon[75050]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:25 compute-0 python3.9[117340]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:19:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:26 compute-0 ceph-mon[75050]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:26 compute-0 python3.9[117494]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:19:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:26 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 29 07:19:26 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 29 07:19:27 compute-0 python3.9[117687]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:19:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:27 compute-0 sshd-session[117190]: Connection closed by 192.168.122.30 port 55092
Nov 29 07:19:27 compute-0 sshd-session[117187]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:19:27 compute-0 systemd-logind[807]: Session 36 logged out. Waiting for processes to exit.
Nov 29 07:19:27 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 07:19:27 compute-0 systemd[1]: session-36.scope: Consumed 2.479s CPU time.
Nov 29 07:19:27 compute-0 systemd-logind[807]: Removed session 36.
Nov 29 07:19:28 compute-0 ceph-mon[75050]: 9.16 scrub starts
Nov 29 07:19:28 compute-0 ceph-mon[75050]: 9.16 scrub ok
Nov 29 07:19:29 compute-0 ceph-mon[75050]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:29 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 29 07:19:29 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 29 07:19:30 compute-0 ceph-mon[75050]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:30 compute-0 ceph-mon[75050]: 9.1c scrub starts
Nov 29 07:19:30 compute-0 ceph-mon[75050]: 9.1c scrub ok
Nov 29 07:19:30 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Nov 29 07:19:30 compute-0 ceph-osd[88831]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Nov 29 07:19:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:33 compute-0 ceph-mon[75050]: 9.1e deep-scrub starts
Nov 29 07:19:33 compute-0 ceph-mon[75050]: 9.1e deep-scrub ok
Nov 29 07:19:34 compute-0 sshd-session[117714]: Accepted publickey for zuul from 192.168.122.30 port 45136 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:19:34 compute-0 systemd-logind[807]: New session 37 of user zuul.
Nov 29 07:19:34 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 29 07:19:34 compute-0 sshd-session[117714]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:19:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:19:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:19:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:19:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:19:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:19:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:19:35 compute-0 python3.9[117867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:19:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:36 compute-0 ceph-mon[75050]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:36 compute-0 ceph-mon[75050]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:40 compute-0 ceph-mon[75050]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:40 compute-0 ceph-mon[75050]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:40 compute-0 ceph-mon[75050]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:40 compute-0 python3.9[118021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:19:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:41 compute-0 sudo[118175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuqgnprzstnmjemjfstifdeyyfhchbsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400781.383809-40-138435634933617/AnsiballZ_setup.py'
Nov 29 07:19:41 compute-0 sudo[118175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:41 compute-0 ceph-mon[75050]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:42 compute-0 python3.9[118177]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:19:42 compute-0 sudo[118175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:42 compute-0 sudo[118259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieocwgyxuvcxjymexjpzxvokfrpsfdfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400781.383809-40-138435634933617/AnsiballZ_dnf.py'
Nov 29 07:19:42 compute-0 sudo[118259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:42 compute-0 python3.9[118261]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:19:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:44 compute-0 ceph-mon[75050]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:44 compute-0 sudo[118259]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:44 compute-0 sudo[118412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psywrtkcevbkjoduojshrovwttufxxms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400784.5610046-52-23767462256190/AnsiballZ_setup.py'
Nov 29 07:19:44 compute-0 sudo[118412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:45 compute-0 python3.9[118414]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:19:45 compute-0 sudo[118412]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:46 compute-0 ceph-mon[75050]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:46 compute-0 sudo[118607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvpntmcarrecezkqcpdnvcluzgscobmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400785.873848-63-193202506731998/AnsiballZ_file.py'
Nov 29 07:19:46 compute-0 sudo[118607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:46 compute-0 python3.9[118609]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:46 compute-0 sudo[118607]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:47 compute-0 sudo[118759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zawhwgrtjzhfiolupmsqugavkktinuky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400786.8841424-71-70274510889276/AnsiballZ_command.py'
Nov 29 07:19:47 compute-0 sudo[118759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:47 compute-0 ceph-mon[75050]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:47 compute-0 python3.9[118761]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:19:48 compute-0 sudo[118759]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:48 compute-0 sudo[118924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikymxaygbtsifrzowzgegwadluhsnrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400788.278167-79-276326321755968/AnsiballZ_stat.py'
Nov 29 07:19:48 compute-0 sudo[118924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:48 compute-0 python3.9[118926]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:49 compute-0 sudo[118924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:49 compute-0 sudo[119002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trozvybeorkkcqwgkpczfwcfyqhbbxoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400788.278167-79-276326321755968/AnsiballZ_file.py'
Nov 29 07:19:49 compute-0 sudo[119002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:49 compute-0 python3.9[119004]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:19:49 compute-0 sudo[119002]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:49 compute-0 ceph-mon[75050]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:50 compute-0 sudo[119154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dimrgvobwnnuvatyqfwmojihxonwjomo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400789.7017057-91-118255916147181/AnsiballZ_stat.py'
Nov 29 07:19:50 compute-0 sudo[119154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:50 compute-0 python3.9[119156]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:19:50 compute-0 sudo[119154]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:50 compute-0 sudo[119232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzpvmnfohiprtgzqthcbgjzogmlhadkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400789.7017057-91-118255916147181/AnsiballZ_file.py'
Nov 29 07:19:50 compute-0 sudo[119232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:50 compute-0 python3.9[119234]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:19:50 compute-0 sudo[119232]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:51 compute-0 sudo[119384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iolzclnhxkphmhtxvbsnbeopooisjxhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400791.0234878-104-124081690381414/AnsiballZ_ini_file.py'
Nov 29 07:19:51 compute-0 sudo[119384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:51 compute-0 python3.9[119386]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:19:51 compute-0 sudo[119384]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:52 compute-0 ceph-mon[75050]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:52 compute-0 sudo[119536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-habifjhadxqncxidvmqcuxpezhsbjqog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400791.896843-104-84677358773203/AnsiballZ_ini_file.py'
Nov 29 07:19:52 compute-0 sudo[119536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:52 compute-0 python3.9[119538]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:19:52 compute-0 sudo[119536]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:53 compute-0 sudo[119688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzyybbqnjeznmrcunbkjbinesixwgsze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400792.8359528-104-30399351225425/AnsiballZ_ini_file.py'
Nov 29 07:19:53 compute-0 sudo[119688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:53 compute-0 python3.9[119690]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:19:53 compute-0 sudo[119688]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:53 compute-0 sudo[119840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jswjudekzkowwqmnrtfqkirwcjzwvzpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400793.5329108-104-8902486063963/AnsiballZ_ini_file.py'
Nov 29 07:19:53 compute-0 sudo[119840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:53 compute-0 ceph-mon[75050]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:54 compute-0 python3.9[119842]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:19:54 compute-0 sudo[119840]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:54 compute-0 sudo[119992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulvwtedizkmwmglgbyunfnpbovryptgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400794.5353692-135-200337022703358/AnsiballZ_dnf.py'
Nov 29 07:19:54 compute-0 sudo[119992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:55 compute-0 python3.9[119994]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:19:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:19:57 compute-0 sudo[119992]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:57 compute-0 ceph-mon[75050]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:58 compute-0 sudo[120145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tapktkhrixvilpjimhjxteyckktexshh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400797.7006133-146-2140284374276/AnsiballZ_setup.py'
Nov 29 07:19:58 compute-0 sudo[120145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:58 compute-0 python3.9[120147]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:19:58 compute-0 sudo[120145]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:58 compute-0 ceph-mon[75050]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:59 compute-0 sudo[120299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbmhxoguwrvogidqvkibhcobkfcumvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400798.6571221-154-223373450828031/AnsiballZ_stat.py'
Nov 29 07:19:59 compute-0 sudo[120299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:19:59 compute-0 python3.9[120301]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:19:59 compute-0 sudo[120299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:19:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:19:59 compute-0 sudo[120451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovnscxxhadhsipqmlutbifwrqksbkdaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400799.4955888-163-18518102389004/AnsiballZ_stat.py'
Nov 29 07:19:59 compute-0 sudo[120451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:00 compute-0 python3.9[120453]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:20:00 compute-0 sudo[120451]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:00 compute-0 ceph-mon[75050]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:00 compute-0 sudo[120603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyomuykelpqfsxmjpmizmthnxforrnwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400800.3846571-173-157466040767756/AnsiballZ_command.py'
Nov 29 07:20:00 compute-0 sudo[120603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:00 compute-0 python3.9[120605]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:20:00 compute-0 sudo[120603]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:01 compute-0 sudo[120756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykptdbgqbozqphonwndwegulzfttczgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400801.3805304-183-149265545745294/AnsiballZ_service_facts.py'
Nov 29 07:20:01 compute-0 sudo[120756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:02 compute-0 python3.9[120758]: ansible-service_facts Invoked
Nov 29 07:20:02 compute-0 network[120775]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:20:02 compute-0 network[120776]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:20:02 compute-0 network[120777]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:20:02 compute-0 ceph-mon[75050]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:04 compute-0 ceph-mon[75050]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:20:05
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:20:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:05 compute-0 ceph-mon[75050]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:05 compute-0 sudo[120756]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:20:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:20:06 compute-0 sudo[121060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yexqqdvizryhlqxzubfmacjfhrmokbzf ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764400806.460085-198-277515989728899/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764400806.460085-198-277515989728899/args'
Nov 29 07:20:06 compute-0 sudo[121060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:06 compute-0 sudo[121060]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:07 compute-0 sudo[121227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oidtnpilkivubcmvreteijfknjkznkpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400807.1477394-209-85630646760522/AnsiballZ_dnf.py'
Nov 29 07:20:07 compute-0 sudo[121227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:07 compute-0 python3.9[121229]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:20:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:11 compute-0 sudo[121227]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:12 compute-0 ceph-mon[75050]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:12 compute-0 ceph-mon[75050]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:13 compute-0 sudo[121380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhruwgauyvakvmyvheobxwrzalafrgjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400812.312062-222-275426156164632/AnsiballZ_package_facts.py'
Nov 29 07:20:13 compute-0 sudo[121380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:13 compute-0 sudo[121383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:13 compute-0 sudo[121383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:13 compute-0 sudo[121383]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:13 compute-0 sudo[121408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:20:13 compute-0 sudo[121408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:13 compute-0 sudo[121408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:13 compute-0 python3.9[121382]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 07:20:13 compute-0 sudo[121433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:13 compute-0 sudo[121433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:13 compute-0 sudo[121433]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:13 compute-0 sudo[121458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:20:13 compute-0 sudo[121458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:13 compute-0 sudo[121380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:13 compute-0 sudo[121458]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:20:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:20:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:20:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:20:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:20:14 compute-0 ceph-mon[75050]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:14 compute-0 ceph-mon[75050]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:14 compute-0 sudo[121662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bknriiogetqpflahflprcrkcrcdskqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400814.0605345-232-178603194973327/AnsiballZ_stat.py'
Nov 29 07:20:14 compute-0 sudo[121662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:20:14 compute-0 python3.9[121664]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:14 compute-0 sudo[121662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f4f3be9a-631f-401a-9aaf-d581e7ca0287 does not exist
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 614e525f-03a7-4907-baef-3a2c4051d433 does not exist
Nov 29 07:20:14 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 69933063-72f1-4e02-8333-f065a3188a8d does not exist
Nov 29 07:20:15 compute-0 sudo[121740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnpafrnmgaxjxiqfscwbuygivtmaivsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400814.0605345-232-178603194973327/AnsiballZ_file.py'
Nov 29 07:20:15 compute-0 sudo[121740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:20:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:20:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:20:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:20:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:20:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:20:15 compute-0 python3.9[121742]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:15 compute-0 sudo[121740]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:15 compute-0 sudo[121743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:15 compute-0 sudo[121743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:15 compute-0 sudo[121743]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:20:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:20:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:20:15 compute-0 ceph-mon[75050]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:20:16 compute-0 sudo[121768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:20:16 compute-0 sudo[121768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:16 compute-0 sudo[121768]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:16 compute-0 sudo[121817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:16 compute-0 sudo[121817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:16 compute-0 sudo[121817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:16 compute-0 sudo[121846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:20:16 compute-0 sudo[121846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:16 compute-0 sudo[122028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thminsrzsrsgkitbzuiughhbzrhphyjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400816.1875393-244-214205350298820/AnsiballZ_stat.py'
Nov 29 07:20:16 compute-0 sudo[122028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:16 compute-0 podman[122034]: 2025-11-29 07:20:16.641328683 +0000 UTC m=+0.029435646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:20:16 compute-0 python3.9[122033]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:16 compute-0 podman[122034]: 2025-11-29 07:20:16.828445878 +0000 UTC m=+0.216552811 container create bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 29 07:20:16 compute-0 sudo[122028]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:16 compute-0 systemd[1]: Started libpod-conmon-bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0.scope.
Nov 29 07:20:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:20:16 compute-0 podman[122034]: 2025-11-29 07:20:16.946502507 +0000 UTC m=+0.334609460 container init bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:20:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:20:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:20:16 compute-0 podman[122034]: 2025-11-29 07:20:16.955857251 +0000 UTC m=+0.343964184 container start bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:20:16 compute-0 podman[122034]: 2025-11-29 07:20:16.958762166 +0000 UTC m=+0.346869119 container attach bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:20:16 compute-0 laughing_germain[122071]: 167 167
Nov 29 07:20:16 compute-0 systemd[1]: libpod-bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0.scope: Deactivated successfully.
Nov 29 07:20:16 compute-0 podman[122034]: 2025-11-29 07:20:16.963714072 +0000 UTC m=+0.351821015 container died bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-04e3f27185180238b269297b03c344799a3defdc8baba10b2a6b1b937391d64f-merged.mount: Deactivated successfully.
Nov 29 07:20:17 compute-0 sudo[122145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sniovttifuviwspoemanuhxghujuhajz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400816.1875393-244-214205350298820/AnsiballZ_file.py'
Nov 29 07:20:17 compute-0 sudo[122145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:17 compute-0 podman[122034]: 2025-11-29 07:20:17.087616281 +0000 UTC m=+0.475723224 container remove bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:20:17 compute-0 systemd[1]: libpod-conmon-bb85911ed3bc4c9b8aee326d9d4c206814c5b345769b4896ba5c34b5b72f55e0.scope: Deactivated successfully.
Nov 29 07:20:17 compute-0 python3.9[122147]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:17 compute-0 podman[122155]: 2025-11-29 07:20:17.260935712 +0000 UTC m=+0.056406018 container create 4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nobel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:20:17 compute-0 sudo[122145]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:17 compute-0 systemd[1]: Started libpod-conmon-4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201.scope.
Nov 29 07:20:17 compute-0 podman[122155]: 2025-11-29 07:20:17.230619171 +0000 UTC m=+0.026089507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:20:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd4935164c0f5e5e47dd5820ab590845cbafb3748841bbe8b701137aab05f62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd4935164c0f5e5e47dd5820ab590845cbafb3748841bbe8b701137aab05f62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd4935164c0f5e5e47dd5820ab590845cbafb3748841bbe8b701137aab05f62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd4935164c0f5e5e47dd5820ab590845cbafb3748841bbe8b701137aab05f62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd4935164c0f5e5e47dd5820ab590845cbafb3748841bbe8b701137aab05f62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:17 compute-0 podman[122155]: 2025-11-29 07:20:17.698884425 +0000 UTC m=+0.494354791 container init 4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:20:17 compute-0 podman[122155]: 2025-11-29 07:20:17.708078785 +0000 UTC m=+0.503549091 container start 4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nobel, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:20:17 compute-0 podman[122155]: 2025-11-29 07:20:17.839062402 +0000 UTC m=+0.634532738 container attach 4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:20:17 compute-0 ceph-mon[75050]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:18 compute-0 sudo[122326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viskdetwuyjvbcigxqpijtmdzjcupdiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400817.7599337-262-39375524459304/AnsiballZ_lineinfile.py'
Nov 29 07:20:18 compute-0 sudo[122326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:18 compute-0 python3.9[122328]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:18 compute-0 sudo[122326]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:18 compute-0 hopeful_nobel[122172]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:20:18 compute-0 hopeful_nobel[122172]: --> relative data size: 1.0
Nov 29 07:20:18 compute-0 hopeful_nobel[122172]: --> All data devices are unavailable
Nov 29 07:20:18 compute-0 podman[122155]: 2025-11-29 07:20:18.846258086 +0000 UTC m=+1.641728382 container died 4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:20:18 compute-0 systemd[1]: libpod-4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201.scope: Deactivated successfully.
Nov 29 07:20:18 compute-0 systemd[1]: libpod-4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201.scope: Consumed 1.057s CPU time.
Nov 29 07:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fd4935164c0f5e5e47dd5820ab590845cbafb3748841bbe8b701137aab05f62-merged.mount: Deactivated successfully.
Nov 29 07:20:19 compute-0 podman[122155]: 2025-11-29 07:20:19.077012844 +0000 UTC m=+1.872483160 container remove 4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:20:19 compute-0 systemd[1]: libpod-conmon-4f3c0a70fccbef42e660e16ad5d471fcbfc0a94e7766d08f88edfd0180a63201.scope: Deactivated successfully.
Nov 29 07:20:19 compute-0 sudo[121846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:19 compute-0 sudo[122489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:19 compute-0 sudo[122489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:19 compute-0 sudo[122489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:19 compute-0 sudo[122544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxotmpomjyfrnozymigbfscomvjyapyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400818.928391-277-103309868359917/AnsiballZ_setup.py'
Nov 29 07:20:19 compute-0 sudo[122544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:19 compute-0 sudo[122540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:20:19 compute-0 sudo[122540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:19 compute-0 sudo[122540]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:19 compute-0 sudo[122569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:19 compute-0 sudo[122569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:19 compute-0 sudo[122569]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:19 compute-0 sudo[122594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:20:19 compute-0 sudo[122594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:19 compute-0 python3.9[122561]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:20:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:19 compute-0 sudo[122544]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:19 compute-0 podman[122665]: 2025-11-29 07:20:19.660358978 +0000 UTC m=+0.021964137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:20:19 compute-0 podman[122665]: 2025-11-29 07:20:19.770946646 +0000 UTC m=+0.132551735 container create 0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:20:19 compute-0 ceph-mon[75050]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:19 compute-0 systemd[1]: Started libpod-conmon-0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c.scope.
Nov 29 07:20:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:20:20 compute-0 podman[122665]: 2025-11-29 07:20:20.11736621 +0000 UTC m=+0.478971329 container init 0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:20:20 compute-0 podman[122665]: 2025-11-29 07:20:20.129199539 +0000 UTC m=+0.490804658 container start 0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:20:20 compute-0 suspicious_carver[122684]: 167 167
Nov 29 07:20:20 compute-0 systemd[1]: libpod-0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c.scope: Deactivated successfully.
Nov 29 07:20:20 compute-0 conmon[122684]: conmon 0b5f0d1eb0261acdfaaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c.scope/container/memory.events
Nov 29 07:20:20 compute-0 podman[122665]: 2025-11-29 07:20:20.206030065 +0000 UTC m=+0.567635164 container attach 0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 07:20:20 compute-0 podman[122665]: 2025-11-29 07:20:20.207570341 +0000 UTC m=+0.569175420 container died 0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:20:20 compute-0 sudo[122776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzllvskijxwozejduqofxuzpbdedhnka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400818.928391-277-103309868359917/AnsiballZ_systemd.py'
Nov 29 07:20:20 compute-0 sudo[122776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-72787d3f02b323501a3d8c201df1d5c8e69cce0b6c94c07c291d5950e9429c86-merged.mount: Deactivated successfully.
Nov 29 07:20:20 compute-0 podman[122665]: 2025-11-29 07:20:20.544346462 +0000 UTC m=+0.905951571 container remove 0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:20:20 compute-0 systemd[1]: libpod-conmon-0b5f0d1eb0261acdfaaf2fc11f34a91b2af5299b758598e7834c375d82cf824c.scope: Deactivated successfully.
Nov 29 07:20:20 compute-0 podman[122786]: 2025-11-29 07:20:20.712439799 +0000 UTC m=+0.056220142 container create b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:20:20 compute-0 python3.9[122778]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:20:20 compute-0 podman[122786]: 2025-11-29 07:20:20.688352282 +0000 UTC m=+0.032132625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:20:20 compute-0 sudo[122776]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:20 compute-0 systemd[1]: Started libpod-conmon-b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df.scope.
Nov 29 07:20:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40176ebf2b451272ca20c4c322fa10250df714165a37ea0e0143e79005ff4d64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40176ebf2b451272ca20c4c322fa10250df714165a37ea0e0143e79005ff4d64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40176ebf2b451272ca20c4c322fa10250df714165a37ea0e0143e79005ff4d64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40176ebf2b451272ca20c4c322fa10250df714165a37ea0e0143e79005ff4d64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:21 compute-0 podman[122786]: 2025-11-29 07:20:21.154466273 +0000 UTC m=+0.498246656 container init b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:20:21 compute-0 podman[122786]: 2025-11-29 07:20:21.164380944 +0000 UTC m=+0.508161287 container start b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:20:21 compute-0 podman[122786]: 2025-11-29 07:20:21.428526352 +0000 UTC m=+0.772306705 container attach b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:20:21 compute-0 sshd-session[117717]: Connection closed by 192.168.122.30 port 45136
Nov 29 07:20:21 compute-0 sshd-session[117714]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:20:21 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 07:20:21 compute-0 systemd[1]: session-37.scope: Consumed 27.687s CPU time.
Nov 29 07:20:21 compute-0 systemd-logind[807]: Session 37 logged out. Waiting for processes to exit.
Nov 29 07:20:21 compute-0 systemd-logind[807]: Removed session 37.
Nov 29 07:20:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:21 compute-0 sharp_thompson[122827]: {
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:     "0": [
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:         {
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "devices": [
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "/dev/loop3"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             ],
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_name": "ceph_lv0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_size": "21470642176",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "name": "ceph_lv0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "tags": {
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cluster_name": "ceph",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.crush_device_class": "",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.encrypted": "0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osd_id": "0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.type": "block",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.vdo": "0"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             },
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "type": "block",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "vg_name": "ceph_vg0"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:         }
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:     ],
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:     "1": [
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:         {
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "devices": [
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "/dev/loop4"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             ],
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_name": "ceph_lv1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_size": "21470642176",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "name": "ceph_lv1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "tags": {
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cluster_name": "ceph",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.crush_device_class": "",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.encrypted": "0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osd_id": "1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.type": "block",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.vdo": "0"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             },
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "type": "block",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "vg_name": "ceph_vg1"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:         }
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:     ],
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:     "2": [
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:         {
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "devices": [
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "/dev/loop5"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             ],
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_name": "ceph_lv2",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_size": "21470642176",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "name": "ceph_lv2",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "tags": {
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.cluster_name": "ceph",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.crush_device_class": "",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.encrypted": "0",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osd_id": "2",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.type": "block",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:                 "ceph.vdo": "0"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             },
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "type": "block",
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:             "vg_name": "ceph_vg2"
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:         }
Nov 29 07:20:21 compute-0 sharp_thompson[122827]:     ]
Nov 29 07:20:21 compute-0 sharp_thompson[122827]: }
Nov 29 07:20:22 compute-0 systemd[1]: libpod-b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df.scope: Deactivated successfully.
Nov 29 07:20:22 compute-0 conmon[122827]: conmon b7c64a92f787f0799881 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df.scope/container/memory.events
Nov 29 07:20:22 compute-0 podman[122786]: 2025-11-29 07:20:22.011034871 +0000 UTC m=+1.354815284 container died b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:20:22 compute-0 ceph-mon[75050]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-40176ebf2b451272ca20c4c322fa10250df714165a37ea0e0143e79005ff4d64-merged.mount: Deactivated successfully.
Nov 29 07:20:22 compute-0 podman[122786]: 2025-11-29 07:20:22.797995746 +0000 UTC m=+2.141776109 container remove b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:20:22 compute-0 systemd[1]: libpod-conmon-b7c64a92f787f07998817ec1361ca2f7aaed8299d32aa552f412c45f5ba199df.scope: Deactivated successfully.
Nov 29 07:20:22 compute-0 sudo[122594]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:22 compute-0 sudo[122850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:22 compute-0 sudo[122850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:22 compute-0 sudo[122850]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:22 compute-0 sudo[122875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:20:23 compute-0 sudo[122875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:23 compute-0 sudo[122875]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:23 compute-0 sudo[122900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:23 compute-0 sudo[122900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:23 compute-0 sudo[122900]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:23 compute-0 sudo[122925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:20:23 compute-0 sudo[122925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:23 compute-0 podman[122985]: 2025-11-29 07:20:23.473172937 +0000 UTC m=+0.052781490 container create 2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:20:23 compute-0 systemd[1]: Started libpod-conmon-2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e.scope.
Nov 29 07:20:23 compute-0 podman[122985]: 2025-11-29 07:20:23.443207438 +0000 UTC m=+0.022815981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:20:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:20:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:23 compute-0 podman[122985]: 2025-11-29 07:20:23.7963547 +0000 UTC m=+0.375963323 container init 2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:20:23 compute-0 podman[122985]: 2025-11-29 07:20:23.811146084 +0000 UTC m=+0.390754657 container start 2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:20:23 compute-0 hopeful_diffie[123001]: 167 167
Nov 29 07:20:23 compute-0 systemd[1]: libpod-2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e.scope: Deactivated successfully.
Nov 29 07:20:23 compute-0 ceph-mon[75050]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:23 compute-0 podman[122985]: 2025-11-29 07:20:23.915828169 +0000 UTC m=+0.495436782 container attach 2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:20:23 compute-0 podman[122985]: 2025-11-29 07:20:23.916462688 +0000 UTC m=+0.496071261 container died 2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-17831e928fa4b94745ffcb0cc9cd3f5fdf889be1ebf0481bea79c7600816387f-merged.mount: Deactivated successfully.
Nov 29 07:20:23 compute-0 podman[122985]: 2025-11-29 07:20:23.972985698 +0000 UTC m=+0.552594241 container remove 2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_diffie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:20:23 compute-0 systemd[1]: libpod-conmon-2d7e8b6748d9ee1417855babf910c410ae3bef7ac918f44155eb56d63574483e.scope: Deactivated successfully.
Nov 29 07:20:24 compute-0 podman[123026]: 2025-11-29 07:20:24.1545131 +0000 UTC m=+0.054005398 container create cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:20:24 compute-0 systemd[1]: Started libpod-conmon-cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547.scope.
Nov 29 07:20:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c8cdc3475c457d21d705175790c1f14b62633ecff73b86ce4f5c9274f79a2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c8cdc3475c457d21d705175790c1f14b62633ecff73b86ce4f5c9274f79a2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c8cdc3475c457d21d705175790c1f14b62633ecff73b86ce4f5c9274f79a2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c8cdc3475c457d21d705175790c1f14b62633ecff73b86ce4f5c9274f79a2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:20:24 compute-0 podman[123026]: 2025-11-29 07:20:24.132577656 +0000 UTC m=+0.032069984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:20:24 compute-0 podman[123026]: 2025-11-29 07:20:24.245245325 +0000 UTC m=+0.144737633 container init cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:20:24 compute-0 podman[123026]: 2025-11-29 07:20:24.254887448 +0000 UTC m=+0.154379746 container start cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:20:24 compute-0 podman[123026]: 2025-11-29 07:20:24.259831703 +0000 UTC m=+0.159324021 container attach cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:20:25 compute-0 gifted_curran[123042]: {
Nov 29 07:20:25 compute-0 gifted_curran[123042]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "osd_id": 2,
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "type": "bluestore"
Nov 29 07:20:25 compute-0 gifted_curran[123042]:     },
Nov 29 07:20:25 compute-0 gifted_curran[123042]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "osd_id": 1,
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "type": "bluestore"
Nov 29 07:20:25 compute-0 gifted_curran[123042]:     },
Nov 29 07:20:25 compute-0 gifted_curran[123042]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "osd_id": 0,
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:20:25 compute-0 gifted_curran[123042]:         "type": "bluestore"
Nov 29 07:20:25 compute-0 gifted_curran[123042]:     }
Nov 29 07:20:25 compute-0 gifted_curran[123042]: }
Nov 29 07:20:25 compute-0 systemd[1]: libpod-cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547.scope: Deactivated successfully.
Nov 29 07:20:25 compute-0 systemd[1]: libpod-cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547.scope: Consumed 1.061s CPU time.
Nov 29 07:20:25 compute-0 podman[123026]: 2025-11-29 07:20:25.313987955 +0000 UTC m=+1.213480253 container died cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2c8cdc3475c457d21d705175790c1f14b62633ecff73b86ce4f5c9274f79a2c-merged.mount: Deactivated successfully.
Nov 29 07:20:25 compute-0 podman[123026]: 2025-11-29 07:20:25.37438656 +0000 UTC m=+1.273878858 container remove cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:20:25 compute-0 systemd[1]: libpod-conmon-cb9afc8befab26b367c42c1962b8b25844190fd72adc3e3b162e64312aec4547.scope: Deactivated successfully.
Nov 29 07:20:25 compute-0 sudo[122925]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:20:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:20:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:20:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:20:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev eae8e747-e47c-4ccb-93bf-f7bf57051554 does not exist
Nov 29 07:20:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 62ec8544-33cd-489a-b91f-f835d97fce09 does not exist
Nov 29 07:20:25 compute-0 sudo[123086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:20:25 compute-0 sudo[123086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:25 compute-0 sudo[123086]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:25 compute-0 sudo[123111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:20:25 compute-0 sudo[123111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:20:25 compute-0 sudo[123111]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:20:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:20:26 compute-0 ceph-mon[75050]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:27 compute-0 sshd-session[123136]: Accepted publickey for zuul from 192.168.122.30 port 35298 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:20:27 compute-0 systemd-logind[807]: New session 38 of user zuul.
Nov 29 07:20:27 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 29 07:20:27 compute-0 sshd-session[123136]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:20:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:27 compute-0 ceph-mon[75050]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:27 compute-0 sudo[123289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqdbsoknjhmalcqiyexdmpyfbbiueco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400827.2681363-22-177552742010410/AnsiballZ_file.py'
Nov 29 07:20:27 compute-0 sudo[123289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:27 compute-0 python3.9[123291]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:28 compute-0 sudo[123289]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:28 compute-0 sudo[123441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phhpeljfobwwzlghsiqucdasmyesyytp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400828.204901-34-87650261636981/AnsiballZ_stat.py'
Nov 29 07:20:28 compute-0 sudo[123441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:28 compute-0 python3.9[123443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:28 compute-0 sudo[123441]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:29 compute-0 sudo[123519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iammqylonikmhmsmdaeaegcvrfuddahb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400828.204901-34-87650261636981/AnsiballZ_file.py'
Nov 29 07:20:29 compute-0 sudo[123519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:29 compute-0 python3.9[123521]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:29 compute-0 sudo[123519]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:29 compute-0 ceph-mon[75050]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:29 compute-0 sshd-session[123139]: Connection closed by 192.168.122.30 port 35298
Nov 29 07:20:29 compute-0 sshd-session[123136]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:20:29 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 07:20:29 compute-0 systemd[1]: session-38.scope: Consumed 1.677s CPU time.
Nov 29 07:20:29 compute-0 systemd-logind[807]: Session 38 logged out. Waiting for processes to exit.
Nov 29 07:20:29 compute-0 systemd-logind[807]: Removed session 38.
Nov 29 07:20:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:31 compute-0 ceph-mon[75050]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:33 compute-0 ceph-mon[75050]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.069622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400834069756, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7380, "num_deletes": 251, "total_data_size": 10051570, "memory_usage": 10225944, "flush_reason": "Manual Compaction"}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400834158419, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8229707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7518, "table_properties": {"data_size": 8201697, "index_size": 18359, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 80299, "raw_average_key_size": 23, "raw_value_size": 8135750, "raw_average_value_size": 2382, "num_data_blocks": 805, "num_entries": 3415, "num_filter_entries": 3415, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400277, "oldest_key_time": 1764400277, "file_creation_time": 1764400834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 88895 microseconds, and 31167 cpu microseconds.
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.158523) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8229707 bytes OK
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.158551) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.160215) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.160241) EVENT_LOG_v1 {"time_micros": 1764400834160233, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.160271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10018847, prev total WAL file size 10018847, number of live WAL files 2.
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.164070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(8036KB) 13(53KB) 8(1944B)]
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400834164510, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8286908, "oldest_snapshot_seqno": -1}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3231 keys, 8242308 bytes, temperature: kUnknown
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400834617505, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8242308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8214676, "index_size": 18434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 78371, "raw_average_key_size": 24, "raw_value_size": 8150298, "raw_average_value_size": 2522, "num_data_blocks": 810, "num_entries": 3231, "num_filter_entries": 3231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764400834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.617872) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8242308 bytes
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.619725) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.3 rd, 18.2 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.9, 0.0 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3521, records dropped: 290 output_compression: NoCompression
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.619766) EVENT_LOG_v1 {"time_micros": 1764400834619746, "job": 4, "event": "compaction_finished", "compaction_time_micros": 452891, "compaction_time_cpu_micros": 26375, "output_level": 6, "num_output_files": 1, "total_output_size": 8242308, "num_input_records": 3521, "num_output_records": 3231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400834623330, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400834623450, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400834623510, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 07:20:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:20:34.163765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:20:34 compute-0 sshd-session[71319]: Received disconnect from 38.102.83.150 port 36968:11: disconnected by user
Nov 29 07:20:34 compute-0 sshd-session[71319]: Disconnected from user zuul 38.102.83.150 port 36968
Nov 29 07:20:34 compute-0 sshd-session[71316]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:20:34 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 07:20:34 compute-0 systemd[1]: session-17.scope: Consumed 1min 30.026s CPU time.
Nov 29 07:20:34 compute-0 systemd-logind[807]: Session 17 logged out. Waiting for processes to exit.
Nov 29 07:20:34 compute-0 systemd-logind[807]: Removed session 17.
Nov 29 07:20:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:20:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:20:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:20:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:20:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:20:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:20:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:35 compute-0 ceph-mon[75050]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:35 compute-0 sshd-session[123547]: Accepted publickey for zuul from 192.168.122.30 port 32928 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:20:35 compute-0 systemd-logind[807]: New session 39 of user zuul.
Nov 29 07:20:35 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 29 07:20:35 compute-0 sshd-session[123547]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:20:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:37 compute-0 python3.9[123700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:20:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:37 compute-0 ceph-mon[75050]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:37 compute-0 sudo[123854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeltsoflzjyygzvosjgsytddizcprggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400837.469453-33-119386277604899/AnsiballZ_file.py'
Nov 29 07:20:37 compute-0 sudo[123854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:38 compute-0 python3.9[123856]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:38 compute-0 sudo[123854]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:38 compute-0 sudo[124029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kilknqvtrpoqcdjsitdjahuxkityuaul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400838.376973-41-277078449191831/AnsiballZ_stat.py'
Nov 29 07:20:38 compute-0 sudo[124029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:39 compute-0 python3.9[124031]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:39 compute-0 sudo[124029]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:39 compute-0 sudo[124107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxkqwnqdewbnyudjsipzlmforvejelo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400838.376973-41-277078449191831/AnsiballZ_file.py'
Nov 29 07:20:39 compute-0 sudo[124107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:39 compute-0 python3.9[124109]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.z6ovbi81 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:39 compute-0 sudo[124107]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:40 compute-0 ceph-mon[75050]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:40 compute-0 sudo[124259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhijzyaiwqxxgmwlbklfgkggrgdktkcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400840.510562-61-215233130222977/AnsiballZ_stat.py'
Nov 29 07:20:40 compute-0 sudo[124259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:41 compute-0 python3.9[124261]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:41 compute-0 sudo[124259]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:41 compute-0 sudo[124337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfolxlqdacwuqhsaegvnyqeetmcactey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400840.510562-61-215233130222977/AnsiballZ_file.py'
Nov 29 07:20:41 compute-0 sudo[124337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:41 compute-0 python3.9[124339]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.03of31xx recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:41 compute-0 sudo[124337]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:42 compute-0 sudo[124489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtegxzexpeuldvuiehiaphpvrnjolcgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400841.8041615-74-82316024532794/AnsiballZ_file.py'
Nov 29 07:20:42 compute-0 sudo[124489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:42 compute-0 python3.9[124491]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:42 compute-0 sudo[124489]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:42 compute-0 ceph-mon[75050]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:42 compute-0 sudo[124641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wirlxklclatqjliifuxahnxtkvdctlvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400842.5903533-82-13329930179428/AnsiballZ_stat.py'
Nov 29 07:20:42 compute-0 sudo[124641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:43 compute-0 python3.9[124643]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:43 compute-0 sudo[124641]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:43 compute-0 sudo[124719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieduswznqmiqqnnhsnlzrrwontqoqbpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400842.5903533-82-13329930179428/AnsiballZ_file.py'
Nov 29 07:20:43 compute-0 sudo[124719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:43 compute-0 python3.9[124721]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:43 compute-0 sudo[124719]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:44 compute-0 sudo[124871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuuermkvalmbvwkumcvwgxschxxnfpvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400843.8253467-82-196825445682362/AnsiballZ_stat.py'
Nov 29 07:20:44 compute-0 sudo[124871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:44 compute-0 ceph-mon[75050]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:44 compute-0 python3.9[124873]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:44 compute-0 sudo[124871]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:44 compute-0 sudo[124949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyqcstntawshgjhevlqezbadoekqbeln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400843.8253467-82-196825445682362/AnsiballZ_file.py'
Nov 29 07:20:44 compute-0 sudo[124949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:44 compute-0 python3.9[124951]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:20:44 compute-0 sudo[124949]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:45 compute-0 sudo[125101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amejlreuwsswvykblianwuwpesobryvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400845.1353934-105-214156946380671/AnsiballZ_file.py'
Nov 29 07:20:45 compute-0 sudo[125101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:45 compute-0 python3.9[125103]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:45 compute-0 sudo[125101]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:46 compute-0 sudo[125253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aidzcximintdmvawncbexdwswbmppqxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400845.9921758-113-70283212749705/AnsiballZ_stat.py'
Nov 29 07:20:46 compute-0 sudo[125253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:46 compute-0 python3.9[125255]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:46 compute-0 sudo[125253]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:46 compute-0 ceph-mon[75050]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:46 compute-0 sudo[125331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xurwvctmswrnwbbogzfhkrgyrwoevrnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400845.9921758-113-70283212749705/AnsiballZ_file.py'
Nov 29 07:20:46 compute-0 sudo[125331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:46 compute-0 python3.9[125333]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:47 compute-0 sudo[125331]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:47 compute-0 sudo[125483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehgltsrnqdyeepeaixhoatufenelfez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400847.20028-125-18177336096383/AnsiballZ_stat.py'
Nov 29 07:20:47 compute-0 sudo[125483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:47 compute-0 python3.9[125485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:47 compute-0 sudo[125483]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:47 compute-0 sudo[125561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eowdfgyhsedtztbrixlonrlydhpimqde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400847.20028-125-18177336096383/AnsiballZ_file.py'
Nov 29 07:20:47 compute-0 sudo[125561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:48 compute-0 python3.9[125563]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:48 compute-0 sudo[125561]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:48 compute-0 ceph-mon[75050]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:49 compute-0 sudo[125713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvjrxqzaeyivmrsmdbghrmazpctfzhoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400848.3881798-137-39695232265136/AnsiballZ_systemd.py'
Nov 29 07:20:49 compute-0 sudo[125713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:49 compute-0 python3.9[125715]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:20:49 compute-0 systemd[1]: Reloading.
Nov 29 07:20:49 compute-0 systemd-sysv-generator[125744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:20:49 compute-0 systemd-rc-local-generator[125738]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:20:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:49 compute-0 sudo[125713]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:49 compute-0 ceph-mon[75050]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:50 compute-0 sudo[125902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hevuyplfoxabxzuycudwpnwsyhysfihs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400850.1307166-145-60315406387374/AnsiballZ_stat.py'
Nov 29 07:20:50 compute-0 sudo[125902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:50 compute-0 python3.9[125904]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:50 compute-0 sudo[125902]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:51 compute-0 sudo[125980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhpysxmktamxbnmnntdhggiguaukuslr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400850.1307166-145-60315406387374/AnsiballZ_file.py'
Nov 29 07:20:51 compute-0 sudo[125980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:51 compute-0 python3.9[125982]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:51 compute-0 sudo[125980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:51 compute-0 sudo[126132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkidxyhjfnoqsgdxtubirmnetjjxjyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400851.3907678-157-165689910583572/AnsiballZ_stat.py'
Nov 29 07:20:51 compute-0 sudo[126132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:51 compute-0 ceph-mon[75050]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:51 compute-0 python3.9[126134]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:51 compute-0 sudo[126132]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:52 compute-0 sudo[126210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbfwdzrpczmtmxdxziapcsaavuzmquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400851.3907678-157-165689910583572/AnsiballZ_file.py'
Nov 29 07:20:52 compute-0 sudo[126210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:52 compute-0 python3.9[126212]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:52 compute-0 sudo[126210]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:52 compute-0 sudo[126362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teympqzifsckvavbofebscqrtjhgsvov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400852.572249-169-41429859552334/AnsiballZ_systemd.py'
Nov 29 07:20:52 compute-0 sudo[126362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:53 compute-0 python3.9[126364]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:20:53 compute-0 systemd[1]: Reloading.
Nov 29 07:20:53 compute-0 systemd-sysv-generator[126395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 29 07:20:53 compute-0 systemd-rc-local-generator[126389]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:20:53 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:20:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:20:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:20:53 compute-0 systemd[1]: Finished Create netns directory.
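The sequence above installs a unit and a matching preset, reloads systemd (daemon_reload=True), then enables and starts netns-placeholder. Preset files decide whether `systemctl preset` enables a unit by default, so given the path logged at 07:20:51, 91-netns-placeholder.preset plausibly holds a single directive (an assumption; the file's content is not shown in the log):

    # /etc/systemd/system-preset/91-netns-placeholder.preset (assumed content)
    enable netns-placeholder.service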
Nov 29 07:20:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:53 compute-0 sudo[126362]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:53 compute-0 ceph-mon[75050]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:54 compute-0 python3.9[126555]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:20:54 compute-0 network[126572]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Nov 29 07:20:54 compute-0 network[126573]: 'network-scripts' will be removed from the distribution in the near future.
Nov 29 07:20:54 compute-0 network[126574]: It is advised to switch to 'NetworkManager' for network management instead.
Nov 29 07:20:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:56 compute-0 ceph-mon[75050]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:20:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:58 compute-0 sudo[126834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onxhxnofxnrcmhzyrnqzyxvwfokomqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400858.2600317-195-274889471097755/AnsiballZ_stat.py'
Nov 29 07:20:58 compute-0 sudo[126834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:58 compute-0 python3.9[126836]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:20:58 compute-0 sudo[126834]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:59 compute-0 ceph-mon[75050]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:59 compute-0 sudo[126912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqshaguajhblkrvbokogbdiskcemnic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400858.2600317-195-274889471097755/AnsiballZ_file.py'
Nov 29 07:20:59 compute-0 sudo[126912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:20:59 compute-0 python3.9[126914]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:20:59 compute-0 sudo[126912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:20:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:20:59 compute-0 sudo[127064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tifufenevbgapklynbafzhrjpgswsgvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400859.617069-208-244989854476042/AnsiballZ_file.py'
Nov 29 07:20:59 compute-0 sudo[127064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:00 compute-0 python3.9[127066]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:00 compute-0 sudo[127064]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:00 compute-0 ceph-mon[75050]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:00 compute-0 sudo[127216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztpvuuqcwyhxkeypuznmtefuqbfltdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400860.4553826-216-75417763115617/AnsiballZ_stat.py'
Nov 29 07:21:00 compute-0 sudo[127216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:00 compute-0 python3.9[127218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:00 compute-0 sudo[127216]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:01 compute-0 sudo[127294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htryymemqwswzkymchxrpvxoylbrchtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400860.4553826-216-75417763115617/AnsiballZ_file.py'
Nov 29 07:21:01 compute-0 sudo[127294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:01 compute-0 python3.9[127296]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:01 compute-0 sudo[127294]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:02 compute-0 sudo[127446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcpyfseczhyiavdeopczxlnbeydlijlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400861.7082512-231-54734054317798/AnsiballZ_timezone.py'
Nov 29 07:21:02 compute-0 sudo[127446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:02 compute-0 python3.9[127448]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 07:21:02 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 07:21:02 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 07:21:02 compute-0 sudo[127446]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:02 compute-0 ceph-mon[75050]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:03 compute-0 sudo[127602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpdcvqjpxovbajfjpttnfqefzwwmxryg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400862.856125-240-215899033887730/AnsiballZ_file.py'
Nov 29 07:21:03 compute-0 sudo[127602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:03 compute-0 python3.9[127604]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:03 compute-0 sudo[127602]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:03 compute-0 sudo[127754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmnllilcdqneyxjhqzecrevabjfbcmhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400863.6081045-248-41991485546633/AnsiballZ_stat.py'
Nov 29 07:21:03 compute-0 sudo[127754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:03 compute-0 ceph-mon[75050]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:04 compute-0 python3.9[127756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:04 compute-0 sudo[127754]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:04 compute-0 sudo[127832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suiqdvjjvdfjzvvepakbkblbrrfayjmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400863.6081045-248-41991485546633/AnsiballZ_file.py'
Nov 29 07:21:04 compute-0 sudo[127832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:04 compute-0 python3.9[127834]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:04 compute-0 sudo[127832]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:05 compute-0 sudo[127984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whwdfihazgsqxovxievfqmxcykgzihnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400864.8695762-260-67711466664851/AnsiballZ_stat.py'
Nov 29 07:21:05 compute-0 sudo[127984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:05 compute-0 python3.9[127986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:21:05
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta']
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:21:05 compute-0 sudo[127984]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:21:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:05 compute-0 sudo[128062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqtyvzfstrzjnsxdenhpqnmaueyefhmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400864.8695762-260-67711466664851/AnsiballZ_file.py'
Nov 29 07:21:05 compute-0 sudo[128062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:05 compute-0 python3.9[128064]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l81420eu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:05 compute-0 sudo[128062]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:06 compute-0 sudo[128214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opjsgytpmwnwemtulvowfxutqjjolluo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400866.1591697-272-47904022999374/AnsiballZ_stat.py'
Nov 29 07:21:06 compute-0 sudo[128214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:21:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:21:06 compute-0 python3.9[128216]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:06 compute-0 sudo[128214]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:06 compute-0 ceph-mon[75050]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:07 compute-0 sudo[128292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlsjkmiqugqxgmabdlbmcsfsgqxkmzoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400866.1591697-272-47904022999374/AnsiballZ_file.py'
Nov 29 07:21:07 compute-0 sudo[128292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:07 compute-0 python3.9[128294]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:07 compute-0 sudo[128292]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:08 compute-0 ceph-mon[75050]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:08 compute-0 sudo[128444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wexnciuimarbdvndexyailbirpecpbsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400867.5936885-285-141043922871337/AnsiballZ_command.py'
Nov 29 07:21:08 compute-0 sudo[128444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:08 compute-0 python3.9[128446]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:08 compute-0 sudo[128444]: pam_unix(sudo:session): session closed for user root
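The `nft -j list ruleset` invocation above uses nft's JSON output mode, which is easier for a playbook to diff or inspect than the nft text syntax. A minimal sketch of consuming it, assuming nft is installed and the caller has the same root privileges the task acquires via sudo:

    import json
    import subprocess

    # Same command the Ansible task runs; -j makes nft emit JSON.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    ruleset = json.loads(out)
    # Top level is {"nftables": [...]}, one object per table/chain/rule.
    tables = [e["table"]["name"] for e in ruleset["nftables"] if "table" in e]
    print(len(tables), "tables in the live ruleset")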
Nov 29 07:21:09 compute-0 sudo[128597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecblwbdxvggufcnufuynhuibetfwkoip ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400868.5758772-293-118110554649086/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:21:09 compute-0 sudo[128597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:09 compute-0 python3[128599]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:21:09 compute-0 sudo[128597]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:09 compute-0 sudo[128749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irbmxnyambhaxnskknabiixiybqdbtlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400869.5014539-301-27896853800535/AnsiballZ_stat.py'
Nov 29 07:21:09 compute-0 sudo[128749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:10 compute-0 python3.9[128751]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:10 compute-0 sudo[128749]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:10 compute-0 sudo[128827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooknbkgfxcefcprhdluadtxrtzlyxpnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400869.5014539-301-27896853800535/AnsiballZ_file.py'
Nov 29 07:21:10 compute-0 sudo[128827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:10 compute-0 python3.9[128829]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:10 compute-0 sudo[128827]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:11 compute-0 sudo[128979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvofflmhgeuyswljcymuulkgmckjxcym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400870.854085-313-261935543709813/AnsiballZ_stat.py'
Nov 29 07:21:11 compute-0 sudo[128979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:11 compute-0 python3.9[128981]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:11 compute-0 sudo[128979]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:11 compute-0 ceph-mon[75050]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:11 compute-0 sudo[129057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwrslqhyumlwmtmvcmbehqjukjebmhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400870.854085-313-261935543709813/AnsiballZ_file.py'
Nov 29 07:21:11 compute-0 sudo[129057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:12 compute-0 python3.9[129059]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:12 compute-0 sudo[129057]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:12 compute-0 sudo[129209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvlvtukjyuztkcrkpmqdjgtfehrhwua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400872.2528863-325-227980055025206/AnsiballZ_stat.py'
Nov 29 07:21:12 compute-0 sudo[129209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:12 compute-0 python3.9[129211]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:12 compute-0 sudo[129209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:13 compute-0 sudo[129287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffvnvhqfhwvizpojlxoneutamovimeoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400872.2528863-325-227980055025206/AnsiballZ_file.py'
Nov 29 07:21:13 compute-0 sudo[129287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:13 compute-0 python3.9[129289]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:13 compute-0 sudo[129287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:13 compute-0 sudo[129439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfaotrkceitgqxrhbrawmsbtgvuoljlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400873.449948-337-116360036414624/AnsiballZ_stat.py'
Nov 29 07:21:13 compute-0 sudo[129439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:13 compute-0 python3.9[129441]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:14 compute-0 sudo[129439]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:14 compute-0 ceph-mon[75050]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:14 compute-0 sudo[129517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbstjyasomcoplyjkmzoaglxhlhtmrul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400873.449948-337-116360036414624/AnsiballZ_file.py'
Nov 29 07:21:14 compute-0 sudo[129517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:14 compute-0 python3.9[129519]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:21:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:21:14 compute-0 sudo[129517]: pam_unix(sudo:session): session closed for user root
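The pg_autoscaler figures above are internally consistent: each 'pg target' equals the pool's share of raw space times its bias times a cluster-wide PG budget, which here works out to 300. That would match the default mon_target_pg_per_osd of 100 with the 3 OSDs this host deploys below; both values are inferences, not shown directly in the log:

    budget = 100 * 3  # assumed: default mon_target_pg_per_osd (100) x 3 OSDs
    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0),
    ]:
        print(pool, ratio * bias * budget)  # reproduces the logged 'pg target'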
Nov 29 07:21:15 compute-0 sudo[129669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzdesyaomkvkdvhbwgzzzcqvmnspezap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400874.8508682-349-164172014297126/AnsiballZ_stat.py'
Nov 29 07:21:15 compute-0 sudo[129669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:15 compute-0 python3.9[129671]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:15 compute-0 sudo[129669]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:15 compute-0 sudo[129747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spraqpfziwswcrzucvgaugateaeimezp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400874.8508682-349-164172014297126/AnsiballZ_file.py'
Nov 29 07:21:15 compute-0 sudo[129747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:16 compute-0 ceph-mon[75050]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:16 compute-0 python3.9[129749]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:16 compute-0 sudo[129747]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:16 compute-0 sudo[129899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijokbscqaishtknvklecutdigyoqzmky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400876.3808324-362-89008469563833/AnsiballZ_command.py'
Nov 29 07:21:16 compute-0 sudo[129899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:16 compute-0 python3.9[129901]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:21:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1680 writes, 7663 keys, 1680 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 1679 writes, 1679 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1680 writes, 7663 keys, 1680 commit groups, 1.0 writes per commit group, ingest: 10.09 MB, 0.02 MB/s
                                           Interval WAL: 1679 writes, 1679 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     86.6      0.09              0.03         2    0.046       0      0       0.0       0.0
                                             L6      1/0    7.86 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0     17.5     17.4      0.45              0.03         1    0.453    3521    290       0.0       0.0
                                            Sum      1/0    7.86 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     14.5     29.0      0.54              0.06         3    0.181    3521    290       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     14.6     29.0      0.54              0.06         2    0.271    3521    290       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     17.5     17.4      0.45              0.03         1    0.453    3521    290       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     88.3      0.09              0.03         1    0.089       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.5 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdb5ecb1f0#2 capacity: 308.00 MB usage: 261.80 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000112 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(21,203.80 KB,0.064617%) FilterBlock(4,17.48 KB,0.0055437%) IndexBlock(4,40.52 KB,0.0128461%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
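The dump's throughput figures cross-check: 0.01 GB ingested over the 600-second interval is about 0.017 MB/s, which RocksDB rounds to the 0.02 MB/s it prints, and 1679 WAL syncs for 1679 WAL writes gives exactly the reported 1.00 writes per sync:

    uptime_s = 600.0
    print(0.01 * 1024 / uptime_s)  # ~0.017 MB/s, printed as 0.02
    print(1679 / 1679)             # 1.00 WAL writes per sync, as reported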
Nov 29 07:21:17 compute-0 sudo[129899]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:17 compute-0 ceph-mon[75050]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:17 compute-0 sudo[130054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvkgtnxtvbylcimsdtulemnpzuearkdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400877.2679737-370-252384691005322/AnsiballZ_blockinfile.py'
Nov 29 07:21:17 compute-0 sudo[130054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:18 compute-0 python3.9[130056]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:18 compute-0 sudo[130054]: pam_unix(sudo:session): session closed for user root
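From the block, path, and marker arguments logged above, the managed section blockinfile maintains in /etc/sysconfig/nftables.conf should look like the snippet below (reconstructed from the module arguments; the rest of the file is not shown in the log). The validate=nft -c -f %s argument makes Ansible syntax-check the candidate file and only move it into place if the check passes:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK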
Nov 29 07:21:18 compute-0 sudo[130206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tveviyaithxotrqsminelsjjcegmurqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400878.2666786-379-261683821431912/AnsiballZ_file.py'
Nov 29 07:21:18 compute-0 sudo[130206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:18 compute-0 python3.9[130208]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:18 compute-0 sudo[130206]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:18 compute-0 ceph-mon[75050]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:19 compute-0 sudo[130358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxsvckgkgklrfqinuuzaevhsxwqmlgqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400878.9329243-379-179706181324711/AnsiballZ_file.py'
Nov 29 07:21:19 compute-0 sudo[130358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:19 compute-0 python3.9[130360]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:19 compute-0 sudo[130358]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:20 compute-0 sudo[130510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuevqrqnqoqrqbyswpbkduitlwgjfrzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400879.6686745-394-231555180304339/AnsiballZ_mount.py'
Nov 29 07:21:20 compute-0 sudo[130510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:20 compute-0 python3.9[130512]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:21:20 compute-0 sudo[130510]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:20 compute-0 ceph-mon[75050]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:21 compute-0 sudo[130662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyfnvhmgpubclcffxdejpuqvuddpnxha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400880.7655215-394-55066639607228/AnsiballZ_mount.py'
Nov 29 07:21:21 compute-0 sudo[130662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:21 compute-0 python3.9[130664]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 07:21:21 compute-0 sudo[130662]: pam_unix(sudo:session): session closed for user root
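ansible.posix.mount with state=mounted and boot=True both mounts the filesystem immediately and persists it across reboots, so with the logged arguments (src=none, fstype=hugetlbfs, dump=0, passno=0) the resulting /etc/fstab entries would plausibly read:

    none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    none /dev/hugepages2M hugetlbfs pagesize=2M 0 0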
Nov 29 07:21:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:21 compute-0 sshd-session[123550]: Connection closed by 192.168.122.30 port 32928
Nov 29 07:21:21 compute-0 sshd-session[123547]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:21:21 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 07:21:21 compute-0 systemd[1]: session-39.scope: Consumed 34.010s CPU time.
Nov 29 07:21:21 compute-0 systemd-logind[807]: Session 39 logged out. Waiting for processes to exit.
Nov 29 07:21:21 compute-0 systemd-logind[807]: Removed session 39.
Nov 29 07:21:22 compute-0 ceph-mon[75050]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:25 compute-0 ceph-mon[75050]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:25 compute-0 sudo[130689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:25 compute-0 sudo[130689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:25 compute-0 sudo[130689]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:25 compute-0 sudo[130714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:21:25 compute-0 sudo[130714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:25 compute-0 sudo[130714]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:25 compute-0 sudo[130739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:25 compute-0 sudo[130739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:25 compute-0 sudo[130739]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:25 compute-0 sudo[130764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:21:25 compute-0 sudo[130764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:26 compute-0 sudo[130764]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:21:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:21:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:21:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:21:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:21:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:27 compute-0 sshd-session[130820]: Accepted publickey for zuul from 192.168.122.30 port 36296 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:21:27 compute-0 systemd-logind[807]: New session 40 of user zuul.
Nov 29 07:21:27 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 29 07:21:27 compute-0 sshd-session[130820]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:21:27 compute-0 ceph-mon[75050]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:21:27 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1114dc52-b1d2-4f25-998a-37977c997504 does not exist
Nov 29 07:21:27 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 501fe53c-1dd7-44d1-b7ce-36a0aba02127 does not exist
Nov 29 07:21:27 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 93dc69fe-d199-4fe1-9e4f-fa39a938163a does not exist
Nov 29 07:21:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:21:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:21:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:21:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:21:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:21:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:21:27 compute-0 sudo[130876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:27 compute-0 sudo[130876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:27 compute-0 sudo[130876]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:27 compute-0 sudo[130901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:21:27 compute-0 sudo[130901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:27 compute-0 sudo[130901]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:28 compute-0 sudo[130926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:21:28 compute-0 sudo[130926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:21:28 compute-0 sudo[130926]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:28 compute-0 sudo[130978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:21:28 compute-0 sudo[130978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
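The CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group environment variable ties the three logical volumes passed to `ceph-volume lvm batch` back to a cephadm OSD service spec of that name. A spec of roughly this shape (a sketch; the actual spec is not in the log) would produce exactly this invocation:

    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0
          - /dev/ceph_vg1/ceph_lv1
          - /dev/ceph_vg2/ceph_lv2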
Nov 29 07:21:28 compute-0 sudo[131073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-desvqiiwyfggeynvzjizfjmzbgguaxxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400887.6747603-16-52305077391590/AnsiballZ_tempfile.py'
Nov 29 07:21:28 compute-0 sudo[131073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:28 compute-0 python3.9[131077]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 07:21:28 compute-0 sudo[131073]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:28 compute-0 podman[131116]: 2025-11-29 07:21:28.540891482 +0000 UTC m=+0.037643602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:21:29 compute-0 sudo[131279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvldpbijfrnunkrqupmfqlmanrnhzaye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400888.7233167-28-249816824896670/AnsiballZ_stat.py'
Nov 29 07:21:29 compute-0 sudo[131279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:29 compute-0 python3.9[131281]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:21:29 compute-0 sudo[131279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:30 compute-0 sudo[131433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnbrpmktzgwkpfeiuqqzhtknwbkmzibo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400889.7405877-36-139750016772551/AnsiballZ_slurp.py'
Nov 29 07:21:30 compute-0 sudo[131433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:30 compute-0 python3.9[131435]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 07:21:30 compute-0 sudo[131433]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:31 compute-0 sudo[131585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzcfknjvvhmnlgzuxjwulfkhwqepuqge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400890.728122-44-267934093218857/AnsiballZ_stat.py'
Nov 29 07:21:31 compute-0 sudo[131585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:31 compute-0 python3.9[131587]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.vpnwub5q follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:21:31 compute-0 sudo[131585]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:31 compute-0 sudo[131710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibtnqeburlfkqrthkwxwvrzmkaovqcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400890.728122-44-267934093218857/AnsiballZ_copy.py'
Nov 29 07:21:31 compute-0 sudo[131710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:32 compute-0 python3.9[131712]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.vpnwub5q mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400890.728122-44-267934093218857/.source.vpnwub5q _original_basename=.5z3njta7 follow=False checksum=2a7c0def085d52ba2a6413c08617b5814727efe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:32 compute-0 sudo[131710]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:32 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 07:21:32 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:21:33 compute-0 sudo[131864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncfjrbrixjjzaqdrhicuzekxumxuudic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400892.3905966-59-120116855548863/AnsiballZ_setup.py'
Nov 29 07:21:33 compute-0 sudo[131864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:33 compute-0 python3.9[131866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:33 compute-0 sudo[131864]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:34 compute-0 sudo[132016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkjflgjjfuneqjxgdactumnqqjssnzns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400893.671666-68-156780625863508/AnsiballZ_blockinfile.py'
Nov 29 07:21:34 compute-0 sudo[132016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:34 compute-0 python3.9[132018]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCza5ScvnoM/dqQpaH+pvxwwKnNah93wNZa7JkYjYwcf0yzvTcgB7IrdaPpAf5eKVndtUyXmuruiZQSyBMhatW+OmlsmvNubCZeHO9GtMqkyN6eHYYmdkMmu+vGtio3ULiYYvbsjqJATEfYAvDYeme2YoH1RXQ1e1EY+kTGZoeI6Y9V85ZNO2094ciXmznQ14DqBuxwwYqByOmXgdicstYSeSDC8EXEB68Ext+sts+Gw0ac6A/wBdccTwvepraCPwR5AfJgg4oep7I5WiAld6KhDkFGkd4vknjxrvfMFbBvNRE90+ta7JcTzkloX8FHnQxlePa9UiN6/wH7Lmk7E3EzrvWkkQmx3t4kwZ5w5cxBXMKrRjQ3QnrM7G4Z5IC5ZzFGbr1tDqPmw3UoE2+0P97Ak9c02uhCosskOFkSnL7WBSvxMqjT0bJsL8YX6DMJEpty+w7cMhBlxWiJt4xdb+fIOOfor9NfiqCqgzcq6VDHZ3fFxG+qSWqTrLqBgtmZ528=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBUl1dE8VCsfZqAat9Qop5dua4RQ4wkN+XwdjeNkxaB
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAqjt56KYMdHywRK1fsT+jfYkSKLb885ExLCF7SqvFQibCZB692C/0zfgTGmaA0M2XuwDg5/jNkNgmlrs4vcqr4=
                                              create=True mode=0644 path=/tmp/ansible.vpnwub5q state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:34 compute-0 sudo[132016]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:35 compute-0 sudo[132168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csfbxspbwanlvulacnundxerrtegwzop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400894.6670694-76-94187847023984/AnsiballZ_command.py'
Nov 29 07:21:35 compute-0 sudo[132168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:35 compute-0 python3.9[132170]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vpnwub5q' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:35 compute-0 sudo[132168]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:21:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:21:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:21:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:21:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:21:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:21:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:36 compute-0 sudo[132322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztifwsxluiqwruhvqqqoczkdftaoyydp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400895.6642013-84-234286859584063/AnsiballZ_file.py'
Nov 29 07:21:36 compute-0 sudo[132322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:36 compute-0 python3.9[132324]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vpnwub5q state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:36 compute-0 sudo[132322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:36 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:21:36 compute-0 sshd-session[130823]: Connection closed by 192.168.122.30 port 36296
Nov 29 07:21:36 compute-0 sshd-session[130820]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:21:36 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 07:21:36 compute-0 systemd[1]: session-40.scope: Consumed 6.579s CPU time.
Nov 29 07:21:36 compute-0 systemd-logind[807]: Session 40 logged out. Waiting for processes to exit.
Nov 29 07:21:36 compute-0 systemd-logind[807]: Removed session 40.
Nov 29 07:21:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:40 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:21:41 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf MDS connection to Monitors appears to be laggy; 16.2795s since last acked beacon
Nov 29 07:21:41 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:21:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:43 compute-0 sshd-session[132349]: Accepted publickey for zuul from 192.168.122.30 port 39900 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:21:43 compute-0 systemd-logind[807]: New session 41 of user zuul.
Nov 29 07:21:43 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 29 07:21:43 compute-0 sshd-session[132349]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:21:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:44 compute-0 python3.9[132502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:44 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:21:45 compute-0 sudo[132656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cifwfediztanihjerpnjrguxypthegdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400904.8071554-32-113770801364228/AnsiballZ_systemd.py'
Nov 29 07:21:45 compute-0 sudo[132656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:45 compute-0 python3.9[132658]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:21:45 compute-0 sudo[132656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:46 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:21:46 compute-0 sudo[132810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwnqylhsepsztkklvefsqyzusefoknud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400906.0732715-40-155304162408612/AnsiballZ_systemd.py'
Nov 29 07:21:46 compute-0 sudo[132810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 19.2323 seconds
Nov 29 07:21:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:46 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf  MDS is no longer laggy
Nov 29 07:21:46 compute-0 python3.9[132812]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:21:46 compute-0 sudo[132810]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:47 compute-0 podman[131116]: 2025-11-29 07:21:47.577133992 +0000 UTC m=+19.073886092 container create 1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:21:47 compute-0 sudo[132963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syfdumuiipuxkimnayzzepqzvgmtbwbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400907.068441-49-122820181206231/AnsiballZ_command.py'
Nov 29 07:21:47 compute-0 sudo[132963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:47 compute-0 python3.9[132965]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:21:47 compute-0 sudo[132963]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:47 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:21:47 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:21:47 compute-0 ceph-mon[75050]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:47 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:21:47 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:21:47 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:21:47 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:21:47 compute-0 systemd[1]: Started libpod-conmon-1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3.scope.
Nov 29 07:21:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:21:48 compute-0 podman[131116]: 2025-11-29 07:21:48.228215736 +0000 UTC m=+19.724967876 container init 1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chatterjee, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:21:48 compute-0 podman[131116]: 2025-11-29 07:21:48.242236522 +0000 UTC m=+19.738988672 container start 1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chatterjee, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:21:48 compute-0 cranky_chatterjee[132993]: 167 167
Nov 29 07:21:48 compute-0 systemd[1]: libpod-1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3.scope: Deactivated successfully.
Nov 29 07:21:48 compute-0 sudo[133135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxmntyfhjydwfydehruzrbkghhlxigzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400908.049095-57-176240277479984/AnsiballZ_stat.py'
Nov 29 07:21:48 compute-0 sudo[133135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:48 compute-0 podman[131116]: 2025-11-29 07:21:48.63608067 +0000 UTC m=+20.132832850 container attach 1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chatterjee, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:21:48 compute-0 podman[131116]: 2025-11-29 07:21:48.639286242 +0000 UTC m=+20.136038402 container died 1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:21:48 compute-0 python3.9[133137]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:21:48 compute-0 sudo[133135]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:49 compute-0 sudo[133287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owjaerxlufxswpscgnlbeegckmlkooqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400908.9625313-66-204287276328638/AnsiballZ_file.py'
Nov 29 07:21:49 compute-0 sudo[133287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:49 compute-0 python3.9[133289]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:21:49 compute-0 sudo[133287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:50 compute-0 sshd-session[132352]: Connection closed by 192.168.122.30 port 39900
Nov 29 07:21:50 compute-0 sshd-session[132349]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:21:50 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 07:21:50 compute-0 systemd[1]: session-41.scope: Consumed 4.754s CPU time.
Nov 29 07:21:50 compute-0 systemd-logind[807]: Session 41 logged out. Waiting for processes to exit.
Nov 29 07:21:50 compute-0 systemd-logind[807]: Removed session 41.
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 ceph-mon[75050]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a46ad778cd7a3abcc4015b061017b65ca06c32600f05472d05472a8e29fee8c-merged.mount: Deactivated successfully.
Nov 29 07:21:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:52 compute-0 podman[131116]: 2025-11-29 07:21:52.494254234 +0000 UTC m=+23.991006364 container remove 1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:21:52 compute-0 systemd[1]: libpod-conmon-1b2a48851572f80709805da8e2ab7fb2f10063b0ed27502af37c7c842b84bac3.scope: Deactivated successfully.
Nov 29 07:21:52 compute-0 ceph-mon[75050]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:52 compute-0 ceph-mon[75050]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:52 compute-0 podman[133323]: 2025-11-29 07:21:52.640716219 +0000 UTC m=+0.022956367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:21:53 compute-0 podman[133323]: 2025-11-29 07:21:53.038105429 +0000 UTC m=+0.420345587 container create 05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:21:53 compute-0 systemd[1]: Started libpod-conmon-05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a.scope.
Nov 29 07:21:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f2c5ad5027ecbaeb54ccb47d5785de39f9deb9465f924f5a86c7791c1b1214/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f2c5ad5027ecbaeb54ccb47d5785de39f9deb9465f924f5a86c7791c1b1214/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f2c5ad5027ecbaeb54ccb47d5785de39f9deb9465f924f5a86c7791c1b1214/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f2c5ad5027ecbaeb54ccb47d5785de39f9deb9465f924f5a86c7791c1b1214/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f2c5ad5027ecbaeb54ccb47d5785de39f9deb9465f924f5a86c7791c1b1214/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:21:54 compute-0 podman[133323]: 2025-11-29 07:21:54.294297384 +0000 UTC m=+1.676537602 container init 05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldberg, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:21:54 compute-0 podman[133323]: 2025-11-29 07:21:54.306465358 +0000 UTC m=+1.688705516 container start 05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:21:55 compute-0 podman[133323]: 2025-11-29 07:21:55.244043846 +0000 UTC m=+2.626284074 container attach 05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:21:55 compute-0 pedantic_goldberg[133340]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:21:55 compute-0 pedantic_goldberg[133340]: --> relative data size: 1.0
Nov 29 07:21:55 compute-0 pedantic_goldberg[133340]: --> All data devices are unavailable
Nov 29 07:21:55 compute-0 systemd[1]: libpod-05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a.scope: Deactivated successfully.
Nov 29 07:21:55 compute-0 systemd[1]: libpod-05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a.scope: Consumed 1.099s CPU time.
Nov 29 07:21:55 compute-0 podman[133323]: 2025-11-29 07:21:55.44939586 +0000 UTC m=+2.831636028 container died 05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:21:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:56 compute-0 ceph-mon[75050]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:56 compute-0 sshd-session[133381]: Accepted publickey for zuul from 192.168.122.30 port 51606 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:21:56 compute-0 systemd-logind[807]: New session 42 of user zuul.
Nov 29 07:21:56 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 29 07:21:56 compute-0 sshd-session[133381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:21:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:21:58 compute-0 python3.9[133534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:21:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:21:58 compute-0 sudo[133689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgoglgkjoptnhygqozssbhgsvbavxbqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400918.539732-34-21644536992982/AnsiballZ_setup.py'
Nov 29 07:21:58 compute-0 sudo[133689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:21:59 compute-0 python3.9[133691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-55f2c5ad5027ecbaeb54ccb47d5785de39f9deb9465f924f5a86c7791c1b1214-merged.mount: Deactivated successfully.
Nov 29 07:21:59 compute-0 sudo[133689]: pam_unix(sudo:session): session closed for user root
Nov 29 07:21:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:00 compute-0 sudo[133773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkpyrdypejtdiheqevtivgxmulgjfnkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400918.539732-34-21644536992982/AnsiballZ_dnf.py'
Nov 29 07:22:00 compute-0 sudo[133773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:00 compute-0 python3.9[133775]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 07:22:01 compute-0 ceph-mon[75050]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:01 compute-0 ceph-mon[75050]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:02 compute-0 sudo[133773]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:02 compute-0 ceph-mon[75050]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:02 compute-0 ceph-mon[75050]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:03 compute-0 podman[133323]: 2025-11-29 07:22:03.417827835 +0000 UTC m=+10.800067973 container remove 05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_goldberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:22:03 compute-0 sudo[130978]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:03 compute-0 systemd[1]: libpod-conmon-05e5aeec8dfddea675d76805c5ec85302d6cb3bdd452c1404ee8424b214ecd5a.scope: Deactivated successfully.
Nov 29 07:22:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:03 compute-0 sudo[133927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:03 compute-0 sudo[133927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:03 compute-0 sudo[133927]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:03 compute-0 sudo[133952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:22:03 compute-0 sudo[133952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:03 compute-0 python3.9[133926]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:22:03 compute-0 sudo[133952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:03 compute-0 sudo[133978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:03 compute-0 sudo[133978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:03 compute-0 sudo[133978]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:03 compute-0 sudo[134003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:22:03 compute-0 sudo[134003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:04 compute-0 podman[134092]: 2025-11-29 07:22:04.101606947 +0000 UTC m=+0.026202940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:22:04 compute-0 podman[134092]: 2025-11-29 07:22:04.308555737 +0000 UTC m=+0.233151700 container create 0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:22:04 compute-0 ceph-mon[75050]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:04 compute-0 systemd[1]: Started libpod-conmon-0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60.scope.
Nov 29 07:22:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:22:04 compute-0 podman[134092]: 2025-11-29 07:22:04.984326719 +0000 UTC m=+0.908922702 container init 0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:22:04 compute-0 podman[134092]: 2025-11-29 07:22:04.998252251 +0000 UTC m=+0.922848204 container start 0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 07:22:05 compute-0 charming_jones[134161]: 167 167
Nov 29 07:22:05 compute-0 systemd[1]: libpod-0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60.scope: Deactivated successfully.
Nov 29 07:22:05 compute-0 podman[134092]: 2025-11-29 07:22:05.024533309 +0000 UTC m=+0.949129282 container attach 0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:22:05 compute-0 podman[134092]: 2025-11-29 07:22:05.026530655 +0000 UTC m=+0.951126678 container died 0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:22:05 compute-0 python3.9[134237]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:22:05
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.mgr']
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:22:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:06 compute-0 python3.9[134401]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8500d723840bc77b25ee4b20d86aabdffdf32a86d01ce81455a43b6174260892-merged.mount: Deactivated successfully.
Nov 29 07:22:06 compute-0 ceph-mon[75050]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:06 compute-0 podman[134092]: 2025-11-29 07:22:06.51802814 +0000 UTC m=+2.442624113 container remove 0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:22:06 compute-0 systemd[1]: libpod-conmon-0f1dc5d595e5aec2333af692b2bc0b00a88ae600f5ec16002035580759e8ed60.scope: Deactivated successfully.
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:22:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:22:06 compute-0 podman[134559]: 2025-11-29 07:22:06.695780668 +0000 UTC m=+0.026956474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:22:06 compute-0 podman[134559]: 2025-11-29 07:22:06.761026904 +0000 UTC m=+0.092202690 container create f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:22:06 compute-0 python3.9[134553]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:22:07 compute-0 systemd[1]: Started libpod-conmon-f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570.scope.
Nov 29 07:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da24e8ba2dd923725b25aa5dda9aa2a189f9ccddf91e1e1dde72e6e758226571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da24e8ba2dd923725b25aa5dda9aa2a189f9ccddf91e1e1dde72e6e758226571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da24e8ba2dd923725b25aa5dda9aa2a189f9ccddf91e1e1dde72e6e758226571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da24e8ba2dd923725b25aa5dda9aa2a189f9ccddf91e1e1dde72e6e758226571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:07 compute-0 podman[134559]: 2025-11-29 07:22:07.210807968 +0000 UTC m=+0.541983754 container init f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:22:07 compute-0 podman[134559]: 2025-11-29 07:22:07.223540002 +0000 UTC m=+0.554715798 container start f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:22:07 compute-0 podman[134559]: 2025-11-29 07:22:07.269326513 +0000 UTC m=+0.600502339 container attach f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:22:07 compute-0 sshd-session[133384]: Connection closed by 192.168.122.30 port 51606
Nov 29 07:22:07 compute-0 sshd-session[133381]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:22:07 compute-0 systemd-logind[807]: Session 42 logged out. Waiting for processes to exit.
Nov 29 07:22:07 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 07:22:07 compute-0 systemd[1]: session-42.scope: Consumed 6.710s CPU time.
Nov 29 07:22:07 compute-0 systemd-logind[807]: Removed session 42.
Nov 29 07:22:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:07 compute-0 ceph-mon[75050]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:08 compute-0 charming_archimedes[134599]: {
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:     "0": [
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:         {
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "devices": [
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "/dev/loop3"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             ],
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_name": "ceph_lv0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_size": "21470642176",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "name": "ceph_lv0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "tags": {
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cluster_name": "ceph",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.crush_device_class": "",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.encrypted": "0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osd_id": "0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.type": "block",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.vdo": "0"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             },
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "type": "block",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "vg_name": "ceph_vg0"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:         }
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:     ],
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:     "1": [
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:         {
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "devices": [
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "/dev/loop4"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             ],
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_name": "ceph_lv1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_size": "21470642176",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "name": "ceph_lv1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "tags": {
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cluster_name": "ceph",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.crush_device_class": "",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.encrypted": "0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osd_id": "1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.type": "block",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.vdo": "0"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             },
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "type": "block",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "vg_name": "ceph_vg1"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:         }
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:     ],
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:     "2": [
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:         {
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "devices": [
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "/dev/loop5"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             ],
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_name": "ceph_lv2",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_size": "21470642176",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "name": "ceph_lv2",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "tags": {
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.cluster_name": "ceph",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.crush_device_class": "",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.encrypted": "0",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osd_id": "2",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.type": "block",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:                 "ceph.vdo": "0"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             },
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "type": "block",
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:             "vg_name": "ceph_vg2"
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:         }
Nov 29 07:22:08 compute-0 charming_archimedes[134599]:     ]
Nov 29 07:22:08 compute-0 charming_archimedes[134599]: }
Nov 29 07:22:08 compute-0 systemd[1]: libpod-f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570.scope: Deactivated successfully.
Nov 29 07:22:08 compute-0 podman[134559]: 2025-11-29 07:22:08.085433972 +0000 UTC m=+1.416609788 container died f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-da24e8ba2dd923725b25aa5dda9aa2a189f9ccddf91e1e1dde72e6e758226571-merged.mount: Deactivated successfully.
Nov 29 07:22:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:08 compute-0 podman[134559]: 2025-11-29 07:22:08.93122981 +0000 UTC m=+2.262405596 container remove f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:22:08 compute-0 systemd[1]: libpod-conmon-f26bf43bd9e81c1ee777f2d1b51ac554797a106c39736137d29921a3643fb570.scope: Deactivated successfully.
Nov 29 07:22:08 compute-0 sudo[134003]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:09 compute-0 sudo[134620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:09 compute-0 sudo[134620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:09 compute-0 sudo[134620]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:09 compute-0 sudo[134645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:22:09 compute-0 sudo[134645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:09 compute-0 sudo[134645]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:09 compute-0 sudo[134670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:09 compute-0 sudo[134670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:09 compute-0 sudo[134670]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:09 compute-0 sudo[134695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:22:09 compute-0 sudo[134695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:09 compute-0 podman[134760]: 2025-11-29 07:22:09.648702897 +0000 UTC m=+0.028639913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:22:09 compute-0 podman[134760]: 2025-11-29 07:22:09.871886779 +0000 UTC m=+0.251823765 container create a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:22:09 compute-0 systemd[1]: Started libpod-conmon-a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe.scope.
Nov 29 07:22:09 compute-0 ceph-mon[75050]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:22:10 compute-0 podman[134760]: 2025-11-29 07:22:10.109592407 +0000 UTC m=+0.489529443 container init a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:22:10 compute-0 podman[134760]: 2025-11-29 07:22:10.120831025 +0000 UTC m=+0.500768011 container start a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:22:10 compute-0 quirky_jennings[134776]: 167 167
Nov 29 07:22:10 compute-0 systemd[1]: libpod-a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe.scope: Deactivated successfully.
Nov 29 07:22:10 compute-0 podman[134760]: 2025-11-29 07:22:10.149897948 +0000 UTC m=+0.529834914 container attach a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:22:10 compute-0 podman[134760]: 2025-11-29 07:22:10.150363959 +0000 UTC m=+0.530300935 container died a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c72a4aa2b8d01a1a20d88af163bbba336612e1c18663d98886a50eae6947bc69-merged.mount: Deactivated successfully.
Nov 29 07:22:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:11 compute-0 podman[134760]: 2025-11-29 07:22:11.90933484 +0000 UTC m=+2.289271826 container remove a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:22:11 compute-0 ceph-mon[75050]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:12 compute-0 systemd[1]: libpod-conmon-a17ff0501c17ebae9a2e329864df5fc90d6cf3f6ca770057336fb8e08ff45afe.scope: Deactivated successfully.
Nov 29 07:22:12 compute-0 podman[134801]: 2025-11-29 07:22:12.110677971 +0000 UTC m=+0.029224917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:22:12 compute-0 podman[134801]: 2025-11-29 07:22:12.160540581 +0000 UTC m=+0.079087537 container create 104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:22:12 compute-0 systemd[1]: Started libpod-conmon-104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b.scope.
Nov 29 07:22:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab072a72de527ac3bcfc4228bef8b3e7596dc7ff45b78dc00df248d20a90650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab072a72de527ac3bcfc4228bef8b3e7596dc7ff45b78dc00df248d20a90650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab072a72de527ac3bcfc4228bef8b3e7596dc7ff45b78dc00df248d20a90650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab072a72de527ac3bcfc4228bef8b3e7596dc7ff45b78dc00df248d20a90650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:22:12 compute-0 podman[134801]: 2025-11-29 07:22:12.388349672 +0000 UTC m=+0.306896648 container init 104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:22:12 compute-0 podman[134801]: 2025-11-29 07:22:12.397619544 +0000 UTC m=+0.316166490 container start 104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:22:12 compute-0 podman[134801]: 2025-11-29 07:22:12.427841063 +0000 UTC m=+0.346388009 container attach 104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]: {
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "osd_id": 2,
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "type": "bluestore"
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:     },
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "osd_id": 1,
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "type": "bluestore"
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:     },
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "osd_id": 0,
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:         "type": "bluestore"
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]:     }
Nov 29 07:22:13 compute-0 hardcore_murdock[134817]: }
Nov 29 07:22:13 compute-0 systemd[1]: libpod-104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b.scope: Deactivated successfully.
Nov 29 07:22:13 compute-0 podman[134801]: 2025-11-29 07:22:13.412678406 +0000 UTC m=+1.331225322 container died 104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:22:13 compute-0 systemd[1]: libpod-104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b.scope: Consumed 1.021s CPU time.
Nov 29 07:22:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-dab072a72de527ac3bcfc4228bef8b3e7596dc7ff45b78dc00df248d20a90650-merged.mount: Deactivated successfully.
Nov 29 07:22:13 compute-0 podman[134801]: 2025-11-29 07:22:13.62753276 +0000 UTC m=+1.546079706 container remove 104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:22:13 compute-0 systemd[1]: libpod-conmon-104198c48eb2d6643fa5ba4c4ffac74ea7cfd0a7221ba546c64d339c48c9200b.scope: Deactivated successfully.
Nov 29 07:22:13 compute-0 sudo[134695]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:22:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:22:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:22:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:22:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3a93f485-a49a-49ba-8170-2fb5cca7ae49 does not exist
Nov 29 07:22:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev d62013da-e8c8-4e0a-bd20-b7480d1a53cc does not exist
Nov 29 07:22:13 compute-0 sshd-session[134862]: Accepted publickey for zuul from 192.168.122.30 port 56422 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:22:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:13 compute-0 systemd-logind[807]: New session 43 of user zuul.
Nov 29 07:22:13 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 29 07:22:13 compute-0 sshd-session[134862]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:22:13 compute-0 sudo[134864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:22:13 compute-0 sudo[134864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:13 compute-0 sudo[134864]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:13 compute-0 sudo[134891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:22:13 compute-0 sudo[134891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:22:13 compute-0 sudo[134891]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:22:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:22:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:22:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:22:14 compute-0 ceph-mon[75050]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:14 compute-0 python3.9[135065]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:22:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:15 compute-0 ceph-mon[75050]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:16 compute-0 sudo[135219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysylusivyntzbflyyoynrszndkzjieqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400935.9837918-50-205980216805358/AnsiballZ_file.py'
Nov 29 07:22:16 compute-0 sudo[135219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:16 compute-0 python3.9[135221]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:16 compute-0 sudo[135219]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:17 compute-0 sudo[135371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhstcogbnrxetequycqqadlirenqzsea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400936.7805307-50-19910174958230/AnsiballZ_file.py'
Nov 29 07:22:17 compute-0 sudo[135371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:17 compute-0 python3.9[135373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:17 compute-0 sudo[135371]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:17 compute-0 ceph-mon[75050]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:18 compute-0 sudo[135523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgehlbgzrdywoufegdblxhghxaxjdtta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400937.6377828-65-144854122856303/AnsiballZ_stat.py'
Nov 29 07:22:18 compute-0 sudo[135523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:18 compute-0 python3.9[135525]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:18 compute-0 sudo[135523]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:18 compute-0 sudo[135646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubahnmjsgwpenreiuohbqsaabjnqtux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400937.6377828-65-144854122856303/AnsiballZ_copy.py'
Nov 29 07:22:18 compute-0 sudo[135646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:18 compute-0 python3.9[135648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400937.6377828-65-144854122856303/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3190ef3b97b47ae193345b06f6df566dc4155a2e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:19 compute-0 sudo[135646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:19 compute-0 sudo[135798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nepvqrlulccyjsfmccigacesjwxegedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400939.1801715-65-132286219834903/AnsiballZ_stat.py'
Nov 29 07:22:19 compute-0 sudo[135798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:19 compute-0 python3.9[135800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:19 compute-0 sudo[135798]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:19 compute-0 ceph-mon[75050]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:20 compute-0 sudo[135921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovuifgxwhevnkequlzyksxwerdseykxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400939.1801715-65-132286219834903/AnsiballZ_copy.py'
Nov 29 07:22:20 compute-0 sudo[135921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:20 compute-0 python3.9[135923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400939.1801715-65-132286219834903/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cfa7f2efc7847309cd193fa6c9193a2c763d889f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:20 compute-0 sudo[135921]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:20 compute-0 sudo[136073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zonvvlyjtrftmuyngcnhehcougwxerzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400940.5243597-65-47329047266376/AnsiballZ_stat.py'
Nov 29 07:22:20 compute-0 sudo[136073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:21 compute-0 python3.9[136075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:21 compute-0 sudo[136073]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:21 compute-0 sudo[136196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqzrkadowlpfckmsmzdfdgeyszhnmaee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400940.5243597-65-47329047266376/AnsiballZ_copy.py'
Nov 29 07:22:21 compute-0 sudo[136196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:21 compute-0 python3.9[136198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400940.5243597-65-47329047266376/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4660e64d703615a84931fc06884ba32c9ddec0a0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:21 compute-0 sudo[136196]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:21 compute-0 ceph-mon[75050]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:22 compute-0 sudo[136348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlupbxilxssvgooeewmtggohdjytkrwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400942.233921-109-155383710777717/AnsiballZ_file.py'
Nov 29 07:22:22 compute-0 sudo[136348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:22 compute-0 python3.9[136350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:22 compute-0 sudo[136348]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:23 compute-0 sudo[136500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtsjwtrnktjxcmywmfbmejkynbigklrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400942.8830771-109-119613505677141/AnsiballZ_file.py'
Nov 29 07:22:23 compute-0 sudo[136500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:23 compute-0 python3.9[136502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:23 compute-0 sudo[136500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:23 compute-0 ceph-mon[75050]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:23 compute-0 sudo[136652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjbdwixrzpwhoquqefzegvwqsugqonm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400943.5426614-124-120432698074899/AnsiballZ_stat.py'
Nov 29 07:22:23 compute-0 sudo[136652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:24 compute-0 python3.9[136654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:24 compute-0 sudo[136652]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:24 compute-0 sudo[136775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzddcuhdfjeuihiojrtlivwpzhijirsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400943.5426614-124-120432698074899/AnsiballZ_copy.py'
Nov 29 07:22:24 compute-0 sudo[136775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:24 compute-0 python3.9[136777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400943.5426614-124-120432698074899/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f2f25b84fd8f2efb6fa30530a7d68777976cd082 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:24 compute-0 sudo[136775]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:25 compute-0 sudo[136927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npaihzajdswqsvbqxuapbxbgkxsbdsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400944.845841-124-41725572148915/AnsiballZ_stat.py'
Nov 29 07:22:25 compute-0 sudo[136927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:25 compute-0 python3.9[136929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:25 compute-0 sudo[136927]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:25 compute-0 sudo[137050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyvvhsxpcioeuikjtjmpvwjbgniipffn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400944.845841-124-41725572148915/AnsiballZ_copy.py'
Nov 29 07:22:25 compute-0 sudo[137050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:25 compute-0 ceph-mon[75050]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:25 compute-0 python3.9[137052]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400944.845841-124-41725572148915/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d4b5680ff5d1e62d9135001f3a9c24a13a9d9021 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:25 compute-0 sudo[137050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:26 compute-0 sudo[137202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beamdecuehuidoarruibpvtlxztknddu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400946.1109798-124-174502222643497/AnsiballZ_stat.py'
Nov 29 07:22:26 compute-0 sudo[137202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:26 compute-0 python3.9[137204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:26 compute-0 sudo[137202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:27 compute-0 sudo[137325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yberurrqjasemgthufepisxbtrejqxqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400946.1109798-124-174502222643497/AnsiballZ_copy.py'
Nov 29 07:22:27 compute-0 sudo[137325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:27 compute-0 python3.9[137327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400946.1109798-124-174502222643497/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=12a5e64fa63023cb3d1b105d9d1dbb546ea6552c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:27 compute-0 sudo[137325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:27 compute-0 ceph-mon[75050]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:27 compute-0 sudo[137477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygneibrsfeatzioqmvtfzbkkqkqcnuju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400947.5817122-168-228266881173217/AnsiballZ_file.py'
Nov 29 07:22:27 compute-0 sudo[137477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:28 compute-0 python3.9[137479]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:28 compute-0 sudo[137477]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:28 compute-0 sudo[137629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qszoyhetpmrlwiclbspxrjkdlloujvza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400948.2285113-168-245634200904926/AnsiballZ_file.py'
Nov 29 07:22:28 compute-0 sudo[137629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:28 compute-0 python3.9[137631]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:28 compute-0 sudo[137629]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:29 compute-0 sudo[137781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wchrcrtfseysrnvbsossqfpzuzwnritg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400948.9391448-183-260543297249626/AnsiballZ_stat.py'
Nov 29 07:22:29 compute-0 sudo[137781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:29 compute-0 python3.9[137783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:29 compute-0 sudo[137781]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:29 compute-0 ceph-mon[75050]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:29 compute-0 sudo[137904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhvrltcnqifccnqwohcfqdwkzwgflnqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400948.9391448-183-260543297249626/AnsiballZ_copy.py'
Nov 29 07:22:29 compute-0 sudo[137904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:30 compute-0 python3.9[137906]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400948.9391448-183-260543297249626/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=cac1577ee535f9b2132cf8fed3e2fa1125729d39 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:30 compute-0 sudo[137904]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:30 compute-0 sudo[138056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brjpqpmqndpjkjewzdebmwzhvawcjfug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400950.2544682-183-53718065714058/AnsiballZ_stat.py'
Nov 29 07:22:30 compute-0 sudo[138056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:30 compute-0 python3.9[138058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:30 compute-0 sudo[138056]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:31 compute-0 sudo[138179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abidxgspaonseeyremflavyzujjlhutd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400950.2544682-183-53718065714058/AnsiballZ_copy.py'
Nov 29 07:22:31 compute-0 sudo[138179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:31 compute-0 python3.9[138181]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400950.2544682-183-53718065714058/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d4b5680ff5d1e62d9135001f3a9c24a13a9d9021 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:31 compute-0 sudo[138179]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:31 compute-0 ceph-mon[75050]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:31 compute-0 sudo[138331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnspojthqvkxrhjivtfgpwkjuqnvneus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400951.5061214-183-107823809549156/AnsiballZ_stat.py'
Nov 29 07:22:31 compute-0 sudo[138331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:32 compute-0 python3.9[138333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:32 compute-0 sudo[138331]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:32 compute-0 sudo[138454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmgjtljgsthxvjgaowsiqikyljkuwtql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400951.5061214-183-107823809549156/AnsiballZ_copy.py'
Nov 29 07:22:32 compute-0 sudo[138454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:32 compute-0 python3.9[138456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400951.5061214-183-107823809549156/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b1e1007b910a165158a9003ab99571a1b85d1d52 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:32 compute-0 sudo[138454]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:33.685700) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400953685834, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 960, "num_deletes": 251, "total_data_size": 1407220, "memory_usage": 1435400, "flush_reason": "Manual Compaction"}
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 07:22:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:33 compute-0 sudo[138606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgupfdnnooazbpndkuftsysfqmamblvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400953.5207224-243-191141422020966/AnsiballZ_file.py'
Nov 29 07:22:33 compute-0 sudo[138606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400953963322, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 843165, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7519, "largest_seqno": 8478, "table_properties": {"data_size": 839411, "index_size": 1470, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9576, "raw_average_key_size": 19, "raw_value_size": 831319, "raw_average_value_size": 1714, "num_data_blocks": 69, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400835, "oldest_key_time": 1764400835, "file_creation_time": 1764400953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 277666 microseconds, and 3743 cpu microseconds.
Nov 29 07:22:33 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:33.963394) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 843165 bytes OK
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:33.963417) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.019739) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.019800) EVENT_LOG_v1 {"time_micros": 1764400954019787, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.019840) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1402601, prev total WAL file size 1402601, number of live WAL files 2.
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.020897) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(823KB)], [20(8049KB)]
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400954020983, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9085473, "oldest_snapshot_seqno": -1}
Nov 29 07:22:34 compute-0 python3.9[138608]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:34 compute-0 sudo[138606]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3244 keys, 6788122 bytes, temperature: kUnknown
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400954119064, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6788122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6763304, "index_size": 15621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 79030, "raw_average_key_size": 24, "raw_value_size": 6701401, "raw_average_value_size": 2065, "num_data_blocks": 691, "num_entries": 3244, "num_filter_entries": 3244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764400954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.119410) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6788122 bytes
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.124312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.5 rd, 69.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.9 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(18.8) write-amplify(8.1) OK, records in: 3716, records dropped: 472 output_compression: NoCompression
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.124345) EVENT_LOG_v1 {"time_micros": 1764400954124329, "job": 6, "event": "compaction_finished", "compaction_time_micros": 98194, "compaction_time_cpu_micros": 21974, "output_level": 6, "num_output_files": 1, "total_output_size": 6788122, "num_input_records": 3716, "num_output_records": 3244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400954124617, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400954126357, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.020819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.126390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.126394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.126396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.126397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:22:34 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:22:34.126399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:22:34 compute-0 sudo[138758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rofwoetsmaryzdacmebysaciarenmlbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400954.2741666-251-145390226231018/AnsiballZ_stat.py'
Nov 29 07:22:34 compute-0 sudo[138758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:34 compute-0 python3.9[138760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:34 compute-0 sudo[138758]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:34 compute-0 ceph-mon[75050]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:35 compute-0 sudo[138881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobagjbpzlruqbgzudlgqawpmwiiismm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400954.2741666-251-145390226231018/AnsiballZ_copy.py'
Nov 29 07:22:35 compute-0 sudo[138881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:35 compute-0 python3.9[138883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400954.2741666-251-145390226231018/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bda240d6fa5d122e3a0e28b9ac9ad93e386be357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:35 compute-0 sudo[138881]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:22:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:22:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:22:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:22:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:22:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:22:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:35 compute-0 sudo[139033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfzbvrbmxxznglsxkheowilfgqxxxrzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400955.5255966-267-25472282426045/AnsiballZ_file.py'
Nov 29 07:22:35 compute-0 sudo[139033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:36 compute-0 python3.9[139035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:36 compute-0 sudo[139033]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:36 compute-0 ceph-mon[75050]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:36 compute-0 sudo[139185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzzuqgimavefpgkfjyeyoqmusdqfylaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400956.220774-275-267482403733198/AnsiballZ_stat.py'
Nov 29 07:22:36 compute-0 sudo[139185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:36 compute-0 python3.9[139187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:36 compute-0 sudo[139185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:37 compute-0 sudo[139308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxffotekylmgtgqpsygrshqeggybavxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400956.220774-275-267482403733198/AnsiballZ_copy.py'
Nov 29 07:22:37 compute-0 sudo[139308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:37 compute-0 python3.9[139310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400956.220774-275-267482403733198/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bda240d6fa5d122e3a0e28b9ac9ad93e386be357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:37 compute-0 sudo[139308]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:37 compute-0 sudo[139460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dofsqpwkowjmgwtcbeeulkjwcoqjtlsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400957.4522672-291-89881021661791/AnsiballZ_file.py'
Nov 29 07:22:37 compute-0 sudo[139460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:37 compute-0 ceph-mon[75050]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:37 compute-0 python3.9[139462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:37 compute-0 sudo[139460]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:38 compute-0 sudo[139612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjsdmxwluumpjaqpmdesldjgtpadusmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400958.1528387-299-56521914912724/AnsiballZ_stat.py'
Nov 29 07:22:38 compute-0 sudo[139612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:38 compute-0 python3.9[139614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:38 compute-0 sudo[139612]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:39 compute-0 sudo[139735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqyiwasinanmurkezehunisymtfjhkot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400958.1528387-299-56521914912724/AnsiballZ_copy.py'
Nov 29 07:22:39 compute-0 sudo[139735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:39 compute-0 python3.9[139737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400958.1528387-299-56521914912724/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bda240d6fa5d122e3a0e28b9ac9ad93e386be357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:39 compute-0 sudo[139735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:39 compute-0 ceph-mon[75050]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:39 compute-0 sudo[139887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sadasyiyfzhrgzdbmtrqrqsoyhslzilf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400959.5494294-315-40312870041294/AnsiballZ_file.py'
Nov 29 07:22:39 compute-0 sudo[139887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:40 compute-0 python3.9[139889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:40 compute-0 sudo[139887]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:40 compute-0 sudo[140039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfkcfhkiabtjsohsjxhvjxrekxzcqbqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400960.2709792-323-225691341780276/AnsiballZ_stat.py'
Nov 29 07:22:40 compute-0 sudo[140039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:40 compute-0 python3.9[140041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:40 compute-0 sudo[140039]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:41 compute-0 sudo[140162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwrgylbfkdipqkbypaugiblauvemktmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400960.2709792-323-225691341780276/AnsiballZ_copy.py'
Nov 29 07:22:41 compute-0 sudo[140162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:41 compute-0 python3.9[140164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400960.2709792-323-225691341780276/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bda240d6fa5d122e3a0e28b9ac9ad93e386be357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:41 compute-0 sudo[140162]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:41 compute-0 ceph-mon[75050]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:41 compute-0 sudo[140314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjaembpmoypejodhigclrzticpbllztm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400961.6353962-339-103732373676966/AnsiballZ_file.py'
Nov 29 07:22:41 compute-0 sudo[140314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:42 compute-0 python3.9[140316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:42 compute-0 sudo[140314]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:42 compute-0 sudo[140466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkrhsvcwpwheplhbojibvurhsqphujkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400962.3651044-347-129262271548392/AnsiballZ_stat.py'
Nov 29 07:22:42 compute-0 sudo[140466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:42 compute-0 python3.9[140468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:42 compute-0 sudo[140466]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:43 compute-0 sudo[140589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjnmrmtmmmlirjtdgwzdwklshtxcjsgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400962.3651044-347-129262271548392/AnsiballZ_copy.py'
Nov 29 07:22:43 compute-0 sudo[140589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:43 compute-0 python3.9[140591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400962.3651044-347-129262271548392/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bda240d6fa5d122e3a0e28b9ac9ad93e386be357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:43 compute-0 sudo[140589]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:43 compute-0 ceph-mon[75050]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:44 compute-0 sudo[140741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sabgxlmixvmvccmkznkgzhczxicnozga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400963.7382073-363-151768366021623/AnsiballZ_file.py'
Nov 29 07:22:44 compute-0 sudo[140741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:44 compute-0 python3.9[140743]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:22:44 compute-0 sudo[140741]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:44 compute-0 sudo[140893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eosoezbxxgkiqreltoqohtkvabxwcwvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400964.4360158-371-271179922454580/AnsiballZ_stat.py'
Nov 29 07:22:44 compute-0 sudo[140893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:44 compute-0 python3.9[140895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:44 compute-0 sudo[140893]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:45 compute-0 sudo[141016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckewraarzbfqensktnxafpvfqjajqgmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400964.4360158-371-271179922454580/AnsiballZ_copy.py'
Nov 29 07:22:45 compute-0 sudo[141016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:45 compute-0 python3.9[141018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400964.4360158-371-271179922454580/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bda240d6fa5d122e3a0e28b9ac9ad93e386be357 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:45 compute-0 sudo[141016]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:45 compute-0 ceph-mon[75050]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:46 compute-0 sshd-session[134888]: Connection closed by 192.168.122.30 port 56422
Nov 29 07:22:46 compute-0 sshd-session[134862]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:22:46 compute-0 systemd-logind[807]: Session 43 logged out. Waiting for processes to exit.
Nov 29 07:22:46 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 07:22:46 compute-0 systemd[1]: session-43.scope: Consumed 24.551s CPU time.
Nov 29 07:22:46 compute-0 systemd-logind[807]: Removed session 43.
Nov 29 07:22:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:47 compute-0 ceph-mon[75050]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:49 compute-0 ceph-mon[75050]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:51 compute-0 ceph-mon[75050]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:53 compute-0 sshd-session[141043]: Accepted publickey for zuul from 192.168.122.30 port 45076 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:22:53 compute-0 systemd-logind[807]: New session 44 of user zuul.
Nov 29 07:22:53 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 29 07:22:53 compute-0 sshd-session[141043]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:22:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:53 compute-0 ceph-mon[75050]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:53 compute-0 sudo[141196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edpvmfxjwnaozjotfgtpldbpjkyzthay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400973.2438385-22-157536956886115/AnsiballZ_file.py'
Nov 29 07:22:53 compute-0 sudo[141196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:54 compute-0 python3.9[141198]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:54 compute-0 sudo[141196]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:54 compute-0 sudo[141348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meazqaxrlxydrvoqsghftdajcnnyfkqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400974.2305377-34-219258870759951/AnsiballZ_stat.py'
Nov 29 07:22:54 compute-0 sudo[141348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:54 compute-0 python3.9[141350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:54 compute-0 sudo[141348]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:55 compute-0 sudo[141471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdqdyipshjoiuwtjgclqqyfkpoedkrxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400974.2305377-34-219258870759951/AnsiballZ_copy.py'
Nov 29 07:22:55 compute-0 sudo[141471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:55 compute-0 python3.9[141473]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400974.2305377-34-219258870759951/.source.conf _original_basename=ceph.conf follow=False checksum=c6b66ee05921a8321c036a1fb2c5a5a675af9445 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:55 compute-0 sudo[141471]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:55 compute-0 ceph-mon[75050]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:56 compute-0 sudo[141623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csizjnjdesnkypegrrenoztbsrvxfmsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400975.9186704-34-96101941227291/AnsiballZ_stat.py'
Nov 29 07:22:56 compute-0 sudo[141623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:56 compute-0 python3.9[141625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:22:56 compute-0 sudo[141623]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:56 compute-0 sudo[141746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbfoafldipgubbfbdlzkrexcrntmrpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400975.9186704-34-96101941227291/AnsiballZ_copy.py'
Nov 29 07:22:56 compute-0 sudo[141746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:22:57 compute-0 python3.9[141748]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400975.9186704-34-96101941227291/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=a78ba328f966a508b10a905b8c648b006cefb08a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:22:57 compute-0 sudo[141746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:22:57 compute-0 sshd-session[141046]: Connection closed by 192.168.122.30 port 45076
Nov 29 07:22:57 compute-0 sshd-session[141043]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:22:57 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 07:22:57 compute-0 systemd[1]: session-44.scope: Consumed 2.985s CPU time.
Nov 29 07:22:57 compute-0 systemd-logind[807]: Session 44 logged out. Waiting for processes to exit.
Nov 29 07:22:57 compute-0 systemd-logind[807]: Removed session 44.
Nov 29 07:22:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:57 compute-0 ceph-mon[75050]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:22:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:22:59 compute-0 ceph-mon[75050]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:01 compute-0 ceph-mon[75050]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:03 compute-0 ceph-mon[75050]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:03 compute-0 sshd-session[141773]: Accepted publickey for zuul from 192.168.122.30 port 54614 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:23:04 compute-0 systemd-logind[807]: New session 45 of user zuul.
Nov 29 07:23:04 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 29 07:23:04 compute-0 sshd-session[141773]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:23:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:23:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5256 writes, 22K keys, 5256 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5256 writes, 712 syncs, 7.38 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5256 writes, 22K keys, 5256 commit groups, 1.0 writes per commit group, ingest: 18.10 MB, 0.03 MB/s
                                           Interval WAL: 5256 writes, 712 syncs, 7.38 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
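
The block ending here is the periodic statistics report that ceph-osd's embedded RocksDB writes to the journal as one multi-line message; the "Uptime(secs): 600.1 total, 600.0 interval" lines match RocksDB's default stats_dump_period_sec of 600 seconds. Each "** Compaction Stats [cf] **" pair covers one column family, and on this nearly empty OSD almost every counter is zero. As a hedged illustration (summarize, CF_RE and CACHE_RE are hypothetical names, not part of Ceph or RocksDB), a few lines of Python can pull the per-column-family block-cache line out of such a dump:

    import re

    # Hypothetical helper: map each column family in a RocksDB stats dump
    # to the "Block cache ..." line that follows it. Relies only on the
    # line shapes visible in the journal excerpt above.
    CF_RE = re.compile(r"\*\* Compaction Stats \[(?P<cf>[^\]]+)\] \*\*")
    CACHE_RE = re.compile(r"Block cache \S+ capacity: (?P<cap>\S+ \S+) "
                          r"usage: (?P<use>\S+ \S+)")

    def summarize(dump_text: str) -> dict:
        """Return {column_family: {"capacity": ..., "usage": ...}}."""
        summary, cf = {}, None
        for line in dump_text.splitlines():
            if (m := CF_RE.search(line)):
                cf = m.group("cf")
            elif (m := CACHE_RE.search(line)) and cf is not None:
                summary[cf] = {"capacity": m.group("cap"),
                               "usage": m.group("use")}
        return summary

Run over the dump above, this would show most column families sharing one 1.12 GB BinnedLRUCache (the identical @0x558198b271f0 address, 2.09 KB used), while the O-* families point at a separate 224.00 MB cache. The occupancy value 18446744073709551615 is 2^64 - 1, which likely just means this cache implementation does not track occupancy.
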
Nov 29 07:23:05 compute-0 python3.9[141926]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:23:05
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images']
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
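
The five balancer lines above are one optimization pass of the ceph-mgr balancer module: in upmap mode it searches for pg-upmap-items that would even out PG placement across the listed pools, subject to the 0.05 max-misplaced ratio, and "prepared 0/10 changes" means no moves were needed. A hedged sketch of reading the same state back (balancer_status is a hypothetical wrapper; assumes the `ceph` CLI and an admin keyring are available on the host):

    import json
    import subprocess

    # Illustrative wrapper around `ceph balancer status --format json`,
    # which reports the mode and activity seen in the log lines above.
    def balancer_status() -> dict:
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    status = balancer_status()
    print(status.get("mode"))    # "upmap" per the log above
    print(status.get("active"))  # True when automatic optimization is on
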
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:23:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:05 compute-0 ceph-mon[75050]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:06 compute-0 sudo[142080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daiabvpcjaitmtgbudpdnynwcqawncnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400985.6699672-34-214635467337670/AnsiballZ_file.py'
Nov 29 07:23:06 compute-0 sudo[142080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:06 compute-0 python3.9[142082]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:06 compute-0 sudo[142080]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:23:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
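
The rbd_support mgr module runs two schedulers, MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler, and each reloads its schedule list per RBD pool, which is why vms, volumes, backups and images each appear twice above. The same schedules can be listed from the `rbd` CLI; a hedged spot check (assumes `rbd` is on PATH and that these subcommands, present in recent Ceph releases, are available):

    import subprocess

    # Illustrative check of the two schedulers seen in the log. Pool
    # names come from the log lines above; empty output means no
    # schedule is configured for that pool.
    POOLS = ("vms", "volumes", "backups", "images")
    for pool in POOLS:
        for sub in (["mirror", "snapshot", "schedule", "ls"],
                    ["trash", "purge", "schedule", "ls"]):
            out = subprocess.run(["rbd", *sub, "--pool", pool],
                                 capture_output=True, text=True).stdout.strip()
            print(f"{pool:8s} {' '.join(sub):28s} {out or '(none)'}")
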
Nov 29 07:23:06 compute-0 sudo[142232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uafmwijawhadmcryjxdvqfgdxkdbbfxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400986.53916-34-239216562405568/AnsiballZ_file.py'
Nov 29 07:23:06 compute-0 sudo[142232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:07 compute-0 python3.9[142234]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:07 compute-0 sudo[142232]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:07 compute-0 python3.9[142384]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:23:08 compute-0 ceph-mon[75050]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:08 compute-0 sudo[142534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdpbzcszolndxgxfjmuetqderpfgadtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400988.211553-57-7358700206231/AnsiballZ_seboolean.py'
Nov 29 07:23:08 compute-0 sudo[142534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:08 compute-0 python3.9[142536]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
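
This task sets the virt_sandbox_use_netlink SELinux boolean persistently via ansible.posix.seboolean. Outside Ansible the equivalent is `setsebool -P`; as a sketch in the same vein as the other examples (the subprocess wrapper is illustrative, not part of the deployment):

    import subprocess

    # Illustrative non-Ansible equivalent of the seboolean task above;
    # -P persists the boolean across reboots, matching persistent=True
    # in the module invocation.
    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"],
                   check=True)
    print(subprocess.run(["getsebool", "virt_sandbox_use_netlink"],
                         capture_output=True, text=True).stdout.strip())
    # expected: "virt_sandbox_use_netlink --> on"
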
Nov 29 07:23:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:23:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6356 writes, 27K keys, 6356 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6356 writes, 1027 syncs, 6.19 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6356 writes, 27K keys, 6356 commit groups, 1.0 writes per commit group, ingest: 19.07 MB, 0.03 MB/s
                                           Interval WAL: 6356 writes, 1027 syncs, 6.19 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:23:10 compute-0 ceph-mon[75050]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:11 compute-0 sudo[142534]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:11 compute-0 sudo[142693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuhjngavdnohkhfingxmkcgtggmbdyyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400991.3670838-67-25628156890178/AnsiballZ_setup.py'
Nov 29 07:23:11 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 07:23:11 compute-0 sudo[142693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:11 compute-0 ceph-mon[75050]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:11 compute-0 python3.9[142695]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:23:12 compute-0 sudo[142693]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:12 compute-0 sudo[142777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwkxduoxcwrjbfybjgrcvzhcesqjsmgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400991.3670838-67-25628156890178/AnsiballZ_dnf.py'
Nov 29 07:23:12 compute-0 sudo[142777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:12 compute-0 python3.9[142779]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:23:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:13 compute-0 ceph-mon[75050]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:13 compute-0 sudo[142781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:13 compute-0 sudo[142781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:13 compute-0 sudo[142781]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:13 compute-0 sudo[142806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:23:13 compute-0 sudo[142806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:13 compute-0 sudo[142806]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 sudo[142831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:14 compute-0 sudo[142831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:14 compute-0 sudo[142831]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 sudo[142856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:23:14 compute-0 sudo[142856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:14 compute-0 sudo[142777]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 sudo[142856]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:23:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:23:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:14 compute-0 sudo[142926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:14 compute-0 sudo[142926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:14 compute-0 sudo[142926]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 sudo[142964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:23:14 compute-0 sudo[142964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:14 compute-0 sudo[142964]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 sudo[143020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:14 compute-0 sudo[143020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:14 compute-0 sudo[143020]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:14 compute-0 sudo[143053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:23:14 compute-0 sudo[143053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:23:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:23:15 compute-0 sudo[143053]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:23:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:23:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:23:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:15 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ce190235-a0eb-4f39-ba55-151e35159b26 does not exist
Nov 29 07:23:15 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev d96b2216-6fd3-438d-a411-7b7130ba49b0 does not exist
Nov 29 07:23:15 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 52da60c6-947d-480a-af41-5aed56d0722d does not exist
Nov 29 07:23:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:23:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:23:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:23:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:23:15 compute-0 sudo[143202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoigrkppvmuzoaurqwezqpyyqcsuuprm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400994.483538-79-239612894517800/AnsiballZ_systemd.py'
Nov 29 07:23:15 compute-0 sudo[143202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:15 compute-0 sudo[143165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:15 compute-0 sudo[143165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:15 compute-0 sudo[143165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:15 compute-0 sudo[143210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:23:15 compute-0 sudo[143210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:15 compute-0 sudo[143210]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:23:15 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:23:15 compute-0 sudo[143235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:15 compute-0 sudo[143235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:15 compute-0 sudo[143235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:15 compute-0 sudo[143260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:23:15 compute-0 sudo[143260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:15 compute-0 python3.9[143207]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:23:15 compute-0 sudo[143202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:15 compute-0 podman[143351]: 2025-11-29 07:23:15.741884187 +0000 UTC m=+0.051263947 container create da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:23:15 compute-0 systemd[1]: Started libpod-conmon-da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2.scope.
Nov 29 07:23:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:23:15 compute-0 podman[143351]: 2025-11-29 07:23:15.720666815 +0000 UTC m=+0.030046595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:23:15 compute-0 podman[143351]: 2025-11-29 07:23:15.829811629 +0000 UTC m=+0.139191479 container init da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_darwin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:23:15 compute-0 podman[143351]: 2025-11-29 07:23:15.837420458 +0000 UTC m=+0.146800218 container start da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_darwin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:23:15 compute-0 podman[143351]: 2025-11-29 07:23:15.84112031 +0000 UTC m=+0.150500120 container attach da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:23:15 compute-0 sharp_darwin[143390]: 167 167
Nov 29 07:23:15 compute-0 systemd[1]: libpod-da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2.scope: Deactivated successfully.
Nov 29 07:23:15 compute-0 podman[143351]: 2025-11-29 07:23:15.844833381 +0000 UTC m=+0.154213151 container died da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-65874c627f8dd9c722d8de726f5cd03eaf4621b6b8851312e635d05df14c21f1-merged.mount: Deactivated successfully.
Nov 29 07:23:15 compute-0 podman[143351]: 2025-11-29 07:23:15.889443535 +0000 UTC m=+0.198823315 container remove da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:23:15 compute-0 systemd[1]: libpod-conmon-da2adc3a9784e475f0ddf4792c78ffb28e27b6c46c0c127d4f2922bffab5cdf2.scope: Deactivated successfully.
Nov 29 07:23:16 compute-0 podman[143442]: 2025-11-29 07:23:16.067539251 +0000 UTC m=+0.048850182 container create 631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:23:16 compute-0 systemd[1]: Started libpod-conmon-631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2.scope.
Nov 29 07:23:16 compute-0 podman[143442]: 2025-11-29 07:23:16.048510898 +0000 UTC m=+0.029821859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:23:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0ae4fc69338ddf8e6b5e5c3d41e59e2e002cd60b174afcfdde037ddef879ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0ae4fc69338ddf8e6b5e5c3d41e59e2e002cd60b174afcfdde037ddef879ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0ae4fc69338ddf8e6b5e5c3d41e59e2e002cd60b174afcfdde037ddef879ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0ae4fc69338ddf8e6b5e5c3d41e59e2e002cd60b174afcfdde037ddef879ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0ae4fc69338ddf8e6b5e5c3d41e59e2e002cd60b174afcfdde037ddef879ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:16 compute-0 podman[143442]: 2025-11-29 07:23:16.178513344 +0000 UTC m=+0.159824315 container init 631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:23:16 compute-0 podman[143442]: 2025-11-29 07:23:16.187668176 +0000 UTC m=+0.168979137 container start 631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermat, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:23:16 compute-0 podman[143442]: 2025-11-29 07:23:16.192077206 +0000 UTC m=+0.173388167 container attach 631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermat, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:23:16 compute-0 sudo[143537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeuhhwwklesskrouyhyxukszdkgsgcit ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764400995.7897408-87-104922124402222/AnsiballZ_edpm_nftables_snippet.py'
Nov 29 07:23:16 compute-0 sudo[143537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:16 compute-0 ceph-mon[75050]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:23:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Cumulative writes: 5296 writes, 23K keys, 5296 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5296 writes, 707 syncs, 7.49 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5296 writes, 23K keys, 5296 commit groups, 1.0 writes per commit group, ingest: 18.06 MB, 0.03 MB/s
                                           Interval WAL: 5296 writes, 707 syncs, 7.49 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:23:16 compute-0 python3[143539]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 07:23:16 compute-0 sudo[143537]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-0 sudo[143706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtrqdssjxkajjawvvqxhxutqqrkhvtfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400996.886738-96-217820612150768/AnsiballZ_file.py'
Nov 29 07:23:17 compute-0 sudo[143706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:17 compute-0 objective_fermat[143474]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:23:17 compute-0 objective_fermat[143474]: --> relative data size: 1.0
Nov 29 07:23:17 compute-0 objective_fermat[143474]: --> All data devices are unavailable
Nov 29 07:23:17 compute-0 systemd[1]: libpod-631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2.scope: Deactivated successfully.
Nov 29 07:23:17 compute-0 systemd[1]: libpod-631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2.scope: Consumed 1.127s CPU time.
Nov 29 07:23:17 compute-0 conmon[143474]: conmon 631a8ac605ab46ea3039 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2.scope/container/memory.events
Nov 29 07:23:17 compute-0 podman[143442]: 2025-11-29 07:23:17.379259492 +0000 UTC m=+1.360570413 container died 631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:23:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f0ae4fc69338ddf8e6b5e5c3d41e59e2e002cd60b174afcfdde037ddef879ca-merged.mount: Deactivated successfully.
Nov 29 07:23:17 compute-0 podman[143442]: 2025-11-29 07:23:17.438597549 +0000 UTC m=+1.419908480 container remove 631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:23:17 compute-0 systemd[1]: libpod-conmon-631a8ac605ab46ea303944cd037bbbea57a1ef358edd62020867db509ae44af2.scope: Deactivated successfully.
Nov 29 07:23:17 compute-0 sudo[143260]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-0 python3.9[143710]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:17 compute-0 sudo[143706]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-0 sudo[143730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:17 compute-0 sudo[143730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:17 compute-0 sudo[143730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-0 sudo[143778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:23:17 compute-0 sudo[143778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:17 compute-0 sudo[143778]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-0 sudo[143804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:17 compute-0 sudo[143804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:17 compute-0 sudo[143804]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:17 compute-0 sudo[143841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:23:17 compute-0 sudo[143841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:17 compute-0 ceph-mon[75050]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:18 compute-0 podman[143947]: 2025-11-29 07:23:18.078540584 +0000 UTC m=+0.054741283 container create 75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_franklin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:23:18 compute-0 systemd[1]: Started libpod-conmon-75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec.scope.
Nov 29 07:23:18 compute-0 podman[143947]: 2025-11-29 07:23:18.062028501 +0000 UTC m=+0.038229220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:23:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:23:18 compute-0 podman[143947]: 2025-11-29 07:23:18.185830997 +0000 UTC m=+0.162031696 container init 75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_franklin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:23:18 compute-0 podman[143947]: 2025-11-29 07:23:18.196118159 +0000 UTC m=+0.172318868 container start 75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 07:23:18 compute-0 podman[143947]: 2025-11-29 07:23:18.200769536 +0000 UTC m=+0.176970275 container attach 75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:23:18 compute-0 happy_franklin[144005]: 167 167
Nov 29 07:23:18 compute-0 systemd[1]: libpod-75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec.scope: Deactivated successfully.
Nov 29 07:23:18 compute-0 podman[143947]: 2025-11-29 07:23:18.205658111 +0000 UTC m=+0.181858820 container died 75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:23:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-df68d43468992eaae01121c4059e0d9888c09dc6ba3c787dff913f0f3a72fc77-merged.mount: Deactivated successfully.
Nov 29 07:23:18 compute-0 podman[143947]: 2025-11-29 07:23:18.24828815 +0000 UTC m=+0.224488849 container remove 75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_franklin, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:23:18 compute-0 sudo[144047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojjomajdxhqsmwyvbpaqjfdkdkohtjkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400997.6847453-104-18035824633821/AnsiballZ_stat.py'
Nov 29 07:23:18 compute-0 systemd[1]: libpod-conmon-75ea98ee70b96a8b4b4386ac726668adc578885424ceada86a72597bf00d7dec.scope: Deactivated successfully.
Nov 29 07:23:18 compute-0 sudo[144047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:18 compute-0 python3.9[144053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:18 compute-0 podman[144061]: 2025-11-29 07:23:18.423170098 +0000 UTC m=+0.055236387 container create 923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:23:18 compute-0 systemd[1]: Started libpod-conmon-923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca.scope.
Nov 29 07:23:18 compute-0 sudo[144047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23564f1b48ba322081c722551512a57194eb780608b7b388fe421840f5dac2ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23564f1b48ba322081c722551512a57194eb780608b7b388fe421840f5dac2ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23564f1b48ba322081c722551512a57194eb780608b7b388fe421840f5dac2ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23564f1b48ba322081c722551512a57194eb780608b7b388fe421840f5dac2ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:18 compute-0 podman[144061]: 2025-11-29 07:23:18.392616679 +0000 UTC m=+0.024683018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:23:18 compute-0 podman[144061]: 2025-11-29 07:23:18.501286299 +0000 UTC m=+0.133352568 container init 923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:23:18 compute-0 podman[144061]: 2025-11-29 07:23:18.509494065 +0000 UTC m=+0.141560344 container start 923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:23:18 compute-0 podman[144061]: 2025-11-29 07:23:18.513155376 +0000 UTC m=+0.145221665 container attach 923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_panini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:23:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:18 compute-0 sudo[144157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzyjjgbmrxmgbcbjijucruxilhylymlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400997.6847453-104-18035824633821/AnsiballZ_file.py'
Nov 29 07:23:18 compute-0 sudo[144157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:18 compute-0 python3.9[144159]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:18 compute-0 sudo[144157]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-0 charming_panini[144079]: {
Nov 29 07:23:19 compute-0 charming_panini[144079]:     "0": [
Nov 29 07:23:19 compute-0 charming_panini[144079]:         {
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "devices": [
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "/dev/loop3"
Nov 29 07:23:19 compute-0 charming_panini[144079]:             ],
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_name": "ceph_lv0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_size": "21470642176",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "name": "ceph_lv0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "tags": {
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cluster_name": "ceph",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.crush_device_class": "",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.encrypted": "0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osd_id": "0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.type": "block",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.vdo": "0"
Nov 29 07:23:19 compute-0 charming_panini[144079]:             },
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "type": "block",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "vg_name": "ceph_vg0"
Nov 29 07:23:19 compute-0 charming_panini[144079]:         }
Nov 29 07:23:19 compute-0 charming_panini[144079]:     ],
Nov 29 07:23:19 compute-0 charming_panini[144079]:     "1": [
Nov 29 07:23:19 compute-0 charming_panini[144079]:         {
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "devices": [
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "/dev/loop4"
Nov 29 07:23:19 compute-0 charming_panini[144079]:             ],
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_name": "ceph_lv1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_size": "21470642176",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "name": "ceph_lv1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "tags": {
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cluster_name": "ceph",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.crush_device_class": "",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.encrypted": "0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osd_id": "1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.type": "block",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.vdo": "0"
Nov 29 07:23:19 compute-0 charming_panini[144079]:             },
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "type": "block",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "vg_name": "ceph_vg1"
Nov 29 07:23:19 compute-0 charming_panini[144079]:         }
Nov 29 07:23:19 compute-0 charming_panini[144079]:     ],
Nov 29 07:23:19 compute-0 charming_panini[144079]:     "2": [
Nov 29 07:23:19 compute-0 charming_panini[144079]:         {
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "devices": [
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "/dev/loop5"
Nov 29 07:23:19 compute-0 charming_panini[144079]:             ],
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_name": "ceph_lv2",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_size": "21470642176",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "name": "ceph_lv2",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "tags": {
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.cluster_name": "ceph",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.crush_device_class": "",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.encrypted": "0",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osd_id": "2",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.type": "block",
Nov 29 07:23:19 compute-0 charming_panini[144079]:                 "ceph.vdo": "0"
Nov 29 07:23:19 compute-0 charming_panini[144079]:             },
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "type": "block",
Nov 29 07:23:19 compute-0 charming_panini[144079]:             "vg_name": "ceph_vg2"
Nov 29 07:23:19 compute-0 charming_panini[144079]:         }
Nov 29 07:23:19 compute-0 charming_panini[144079]:     ]
Nov 29 07:23:19 compute-0 charming_panini[144079]: }
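The JSON block printed by charming_panini above is the output of a cephadm-driven `ceph-volume lvm list --format json`: the top-level keys are OSD ids ("0", "1", "2"), each mapping to a list of logical-volume records whose `tags` carry the cluster fsid, OSD fsid, and device class. A minimal sketch of how that structure can be summarized, assuming the JSON has been captured to a file (the file name lvm_list.json is illustrative, not taken from the log):

#!/usr/bin/env python3
# Summarize `ceph-volume lvm list --format json` output as logged above.
import json

with open("lvm_list.json") as f:
    lvm = json.load(f)

# Top-level keys are OSD ids as strings; each value is a list of LV records.
for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
              f"type={tags.get('ceph.type', '?')}")

Against the data above this would report three block-type OSDs, each backed by one loop device (/dev/loop3 through /dev/loop5) via a dedicated VG/LV pair.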
Nov 29 07:23:19 compute-0 systemd[1]: libpod-923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca.scope: Deactivated successfully.
Nov 29 07:23:19 compute-0 podman[144061]: 2025-11-29 07:23:19.31900741 +0000 UTC m=+0.951073689 container died 923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:23:19 compute-0 sudo[144325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuqkoheujrhiomputfrkhxnlpaqmfbek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400999.122557-116-279795589872128/AnsiballZ_stat.py'
Nov 29 07:23:19 compute-0 sudo[144325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-23564f1b48ba322081c722551512a57194eb780608b7b388fe421840f5dac2ff-merged.mount: Deactivated successfully.
Nov 29 07:23:19 compute-0 podman[144061]: 2025-11-29 07:23:19.530754919 +0000 UTC m=+1.162821178 container remove 923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_panini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:23:19 compute-0 systemd[1]: libpod-conmon-923b9f077a5f8eefe99fc597cadfdb7ede56adfbf8a80a61063431d41fdcb3ca.scope: Deactivated successfully.
Nov 29 07:23:19 compute-0 sudo[143841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-0 sudo[144331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:19 compute-0 sudo[144331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:19 compute-0 sudo[144331]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-0 python3.9[144327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:19 compute-0 sudo[144325]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-0 sudo[144356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:23:19 compute-0 sudo[144356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:19 compute-0 sudo[144356]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:19 compute-0 sudo[144383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:19 compute-0 sudo[144383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:19 compute-0 sudo[144383]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:19 compute-0 ceph-mon[75050]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:19 compute-0 sudo[144415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:23:19 compute-0 sudo[144415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:20 compute-0 sudo[144518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbhgzqygufkuofzgooslrxcudahwuyed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764400999.122557-116-279795589872128/AnsiballZ_file.py'
Nov 29 07:23:20 compute-0 sudo[144518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:20 compute-0 podman[144549]: 2025-11-29 07:23:20.195624997 +0000 UTC m=+0.053427447 container create 2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:23:20 compute-0 python3.9[144528]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.prz49cge recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:20 compute-0 sudo[144518]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:20 compute-0 systemd[1]: Started libpod-conmon-2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1.scope.
Nov 29 07:23:20 compute-0 podman[144549]: 2025-11-29 07:23:20.170807416 +0000 UTC m=+0.028609956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:23:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:23:20 compute-0 podman[144549]: 2025-11-29 07:23:20.293539332 +0000 UTC m=+0.151341822 container init 2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goodall, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:23:20 compute-0 podman[144549]: 2025-11-29 07:23:20.301779529 +0000 UTC m=+0.159581969 container start 2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goodall, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:23:20 compute-0 podman[144549]: 2025-11-29 07:23:20.305843881 +0000 UTC m=+0.163646331 container attach 2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:23:20 compute-0 hopeful_goodall[144566]: 167 167
Nov 29 07:23:20 compute-0 systemd[1]: libpod-2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1.scope: Deactivated successfully.
Nov 29 07:23:20 compute-0 podman[144549]: 2025-11-29 07:23:20.308658088 +0000 UTC m=+0.166460568 container died 2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:23:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-99650bcece3b9d751ffd73e6a3ca0f48f2ee2fb0a2b47835de9813b0c302c33f-merged.mount: Deactivated successfully.
Nov 29 07:23:20 compute-0 podman[144549]: 2025-11-29 07:23:20.344651305 +0000 UTC m=+0.202453755 container remove 2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goodall, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:23:20 compute-0 systemd[1]: libpod-conmon-2c69ab4350713c10b90e33fb9c35411f1500ff1b11b210315dcab7f9965907d1.scope: Deactivated successfully.
Nov 29 07:23:20 compute-0 podman[144655]: 2025-11-29 07:23:20.510290299 +0000 UTC m=+0.062758823 container create f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:23:20 compute-0 systemd[1]: Started libpod-conmon-f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62.scope.
Nov 29 07:23:20 compute-0 podman[144655]: 2025-11-29 07:23:20.47245456 +0000 UTC m=+0.024923114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:23:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac80d5f26f8afe7c34ec8e07edf621522616da734b935a83b3fbf396299170e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac80d5f26f8afe7c34ec8e07edf621522616da734b935a83b3fbf396299170e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac80d5f26f8afe7c34ec8e07edf621522616da734b935a83b3fbf396299170e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac80d5f26f8afe7c34ec8e07edf621522616da734b935a83b3fbf396299170e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:23:20 compute-0 podman[144655]: 2025-11-29 07:23:20.612119912 +0000 UTC m=+0.164588466 container init f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:23:20 compute-0 podman[144655]: 2025-11-29 07:23:20.619732341 +0000 UTC m=+0.172200825 container start f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:23:20 compute-0 podman[144655]: 2025-11-29 07:23:20.62260503 +0000 UTC m=+0.175073594 container attach f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:23:20 compute-0 sudo[144758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yswdsibucalelcgdslikoqyuzhlgasga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401000.4045053-128-108733930914172/AnsiballZ_stat.py'
Nov 29 07:23:20 compute-0 sudo[144758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:20 compute-0 python3.9[144760]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:20 compute-0 sudo[144758]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-0 sudo[144836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvspctpdiwwpthqtxpwniyplyqxgrgoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401000.4045053-128-108733930914172/AnsiballZ_file.py'
Nov 29 07:23:21 compute-0 sudo[144836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:21 compute-0 python3.9[144838]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:21 compute-0 sudo[144836]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Check health
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]: {
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "osd_id": 2,
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "type": "bluestore"
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:     },
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "osd_id": 1,
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "type": "bluestore"
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:     },
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "osd_id": 0,
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:         "type": "bluestore"
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]:     }
Nov 29 07:23:21 compute-0 elegant_elbakyan[144703]: }
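The elegant_elbakyan output above is the companion `ceph-volume raw list --format json` view (the command is visible in the cephadm invocation at 07:23:19): here the top-level keys are OSD uuids rather than OSD ids, and each entry names the dm device and bluestore type. A minimal cross-check sketch, assuming both JSON documents were captured to illustrative file names:

#!/usr/bin/env python3
# Cross-check `ceph-volume lvm list` against `ceph-volume raw list`
# (both --format json, as logged above). File names are assumptions.
import json

lvm = json.load(open("lvm_list.json"))   # keyed by OSD id
raw = json.load(open("raw_list.json"))   # keyed by OSD uuid

# Build osd_fsid -> osd_id from the LVM tags, then verify the raw view agrees.
lvm_by_fsid = {
    lv["tags"]["ceph.osd_fsid"]: osd_id
    for osd_id, lvs in lvm.items()
    for lv in lvs
}
for osd_uuid, entry in raw.items():
    osd_id = str(entry["osd_id"])
    match = "OK" if lvm_by_fsid.get(osd_uuid) == osd_id else "MISMATCH"
    print(f"osd.{osd_id} uuid={osd_uuid} dev={entry['device']} "
          f"type={entry['type']} [{match}]")

For this host all three uuids (8cd0a453…, 3596f226…, 1ebe47c8…) line up with OSD ids 0–2, which is what lets the mgr's cephadm module persist the device inventory via the config-key set commands logged just below.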
Nov 29 07:23:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:21 compute-0 systemd[1]: libpod-f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62.scope: Deactivated successfully.
Nov 29 07:23:21 compute-0 systemd[1]: libpod-f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62.scope: Consumed 1.138s CPU time.
Nov 29 07:23:21 compute-0 podman[144655]: 2025-11-29 07:23:21.755265219 +0000 UTC m=+1.307733803 container died f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ac80d5f26f8afe7c34ec8e07edf621522616da734b935a83b3fbf396299170e-merged.mount: Deactivated successfully.
Nov 29 07:23:21 compute-0 ceph-mon[75050]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:21 compute-0 podman[144655]: 2025-11-29 07:23:21.814192695 +0000 UTC m=+1.366661179 container remove f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:23:21 compute-0 systemd[1]: libpod-conmon-f97f8588ed77a57c64436c7ae49b66a870cc777f22fb257320045ea87607ed62.scope: Deactivated successfully.
Nov 29 07:23:21 compute-0 sudo[144415]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:23:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:23:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:21 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 428752d0-d800-492f-947d-d583187b9b67 does not exist
Nov 29 07:23:21 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0f604614-cbdb-48b7-819a-9b9b043ea19e does not exist
Nov 29 07:23:21 compute-0 sudo[144956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:23:21 compute-0 sudo[144956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:21 compute-0 sudo[144956]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:21 compute-0 sudo[145001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:23:21 compute-0 sudo[145001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:23:21 compute-0 sudo[145001]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:22 compute-0 sudo[145079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewvpvqwwjbuegxangmmjllttlkltagvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401001.6477807-141-148514200228742/AnsiballZ_command.py'
Nov 29 07:23:22 compute-0 sudo[145079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:22 compute-0 python3.9[145081]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:22 compute-0 sudo[145079]: pam_unix(sudo:session): session closed for user root
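The `nft -j list ruleset` invocation above returns the current ruleset in the libnftables JSON form documented in libnftables-json(5): a single object whose "nftables" key holds a list of one-key objects ("metainfo", "table", "chain", "rule", ...). A minimal sketch of consuming that output, under the assumption that nft is available to the caller:

#!/usr/bin/env python3
# Summarize the JSON that `nft -j list ruleset` (invoked above via
# ansible.legacy.command) prints: count objects by kind.
import json
import subprocess

out = subprocess.run(["nft", "-j", "list", "ruleset"],
                     check=True, capture_output=True, text=True).stdout
objs = json.loads(out)["nftables"]

counts = {}
for obj in objs:
    kind = next(iter(obj))   # each entry has a single key: metainfo/table/chain/rule/...
    counts[kind] = counts.get(kind, 0) + 1
print(counts)

The edpm_nftables role snapshots the ruleset this way before rendering its own fragments, so the generated rules can be reconciled with whatever iptables-nft compatibility tables already exist.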
Nov 29 07:23:22 compute-0 sudo[145232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jostkkuolmolsehrpaejhhgifuwznohu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401002.4331295-149-243087103683168/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:23:22 compute-0 sudo[145232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:23 compute-0 python3[145234]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 07:23:23 compute-0 sudo[145232]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:23:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:23 compute-0 sudo[145384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uunsifszcruuvozfqfnxwxozcagqtylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401003.3917847-157-52005977973036/AnsiballZ_stat.py'
Nov 29 07:23:23 compute-0 sudo[145384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:23 compute-0 python3.9[145386]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:24 compute-0 sudo[145384]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:24 compute-0 ceph-mon[75050]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:24 compute-0 sudo[145509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfosvsxedhonsjlbzhqhijpdyxzjusil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401003.3917847-157-52005977973036/AnsiballZ_copy.py'
Nov 29 07:23:24 compute-0 sudo[145509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:24 compute-0 python3.9[145511]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401003.3917847-157-52005977973036/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:24 compute-0 sudo[145509]: pam_unix(sudo:session): session closed for user root
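The `checksum=81c2fc96…` field in the copy invocation above is Ansible's content checksum: the plain SHA-1 digest of the file bytes (the stat calls in this section request checksum_algorithm=sha1 explicitly). A minimal sketch for reproducing it on the deployed file:

#!/usr/bin/env python3
# Reproduce the checksum ansible-ansible.legacy.copy logs above:
# Ansible's file checksum is the SHA-1 of the file contents.
import hashlib

def ansible_sha1(path):
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

print(ansible_sha1("/etc/nftables/edpm-jumps.nft"))
# Expected to print 81c2fc96c23335ffe374f9b064e885d5d971ddf9 if the
# deployed file still matches what the play copied.

Note that edpm-jumps.nft and edpm-update-jumps.nft log the same checksum here because both are rendered from the same jump-chain.j2 template.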
Nov 29 07:23:25 compute-0 sudo[145661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxfacoistcmmzgdeqdcvrmrwqrtanwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401005.0318058-172-29933675326618/AnsiballZ_stat.py'
Nov 29 07:23:25 compute-0 sudo[145661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:25 compute-0 python3.9[145663]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:25 compute-0 sudo[145661]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:25 compute-0 ceph-mon[75050]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:25 compute-0 sudo[145786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smeirqubuturraqdfijdnxbqqjsrgirb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401005.0318058-172-29933675326618/AnsiballZ_copy.py'
Nov 29 07:23:25 compute-0 sudo[145786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:26 compute-0 python3.9[145788]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401005.0318058-172-29933675326618/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:26 compute-0 sudo[145786]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:26 compute-0 sudo[145938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqlqlgmawpgkivduucedcnusvfjcfzpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401006.4185245-187-51260478394055/AnsiballZ_stat.py'
Nov 29 07:23:26 compute-0 sudo[145938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:26 compute-0 python3.9[145940]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:26 compute-0 sudo[145938]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:27 compute-0 sudo[146063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opteapmqyppbblcghbiselhcftwpauwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401006.4185245-187-51260478394055/AnsiballZ_copy.py'
Nov 29 07:23:27 compute-0 sudo[146063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:27 compute-0 python3.9[146065]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401006.4185245-187-51260478394055/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:27 compute-0 sudo[146063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:27 compute-0 ceph-mon[75050]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:28 compute-0 sudo[146215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydiurizjxgckhgeaunfbjvfnkkbdrqxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401007.6973908-202-187017658947901/AnsiballZ_stat.py'
Nov 29 07:23:28 compute-0 sudo[146215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:28 compute-0 python3.9[146217]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:28 compute-0 sudo[146215]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:28 compute-0 sudo[146340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpmuxmewuvkipizotavfgtihnndyxzqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401007.6973908-202-187017658947901/AnsiballZ_copy.py'
Nov 29 07:23:28 compute-0 sudo[146340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:28 compute-0 python3.9[146342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401007.6973908-202-187017658947901/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:28 compute-0 sudo[146340]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:29 compute-0 sudo[146492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhageycjmmofuzduhlzvrsusblwtqgkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401009.08482-217-139186673166106/AnsiballZ_stat.py'
Nov 29 07:23:29 compute-0 sudo[146492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:29 compute-0 python3.9[146494]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:29 compute-0 sudo[146492]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:29 compute-0 ceph-mon[75050]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:30 compute-0 sudo[146617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnxiirbsxiwcwartschvggbsxuhlxgqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401009.08482-217-139186673166106/AnsiballZ_copy.py'
Nov 29 07:23:30 compute-0 sudo[146617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:30 compute-0 python3.9[146619]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401009.08482-217-139186673166106/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:30 compute-0 sudo[146617]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:30 compute-0 sudo[146769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkoyxgzltdlboggkhgvmlxfdfjcfwhdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401010.518008-232-54279045690623/AnsiballZ_file.py'
Nov 29 07:23:30 compute-0 sudo[146769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:30 compute-0 python3.9[146771]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:30 compute-0 sudo[146769]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:31 compute-0 sudo[146921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfzewhqzakwamngsrctrdwaxjyrwougz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401011.1639578-240-272975550922491/AnsiballZ_command.py'
Nov 29 07:23:31 compute-0 sudo[146921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:31 compute-0 python3.9[146923]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:31 compute-0 sudo[146921]: pam_unix(sudo:session): session closed for user root
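The command at 07:23:31 dry-runs the combined EDPM ruleset before anything touches the kernel: the five fragment files are concatenated in include order and fed to nft in check-only mode. A minimal standalone sketch of the same validation, assuming the fragments exist under /etc/nftables as logged:

    #!/bin/sh
    # Abort if any stage of the pipeline fails (needs a pipefail-capable sh).
    set -o pipefail
    # nft -c parses and validates the ruleset without committing it.
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -

A non-zero exit fails the Ansible task, so the play stops before the live ruleset is modified.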
Nov 29 07:23:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:31 compute-0 ceph-mon[75050]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:32 compute-0 sudo[147076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blhvquyimzseirswaccqrvtedsszwdbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401011.8287754-248-56144182090576/AnsiballZ_blockinfile.py'
Nov 29 07:23:32 compute-0 sudo[147076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:32 compute-0 python3.9[147078]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:32 compute-0 sudo[147076]: pam_unix(sudo:session): session closed for user root
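The blockinfile task at 07:23:32 pins the EDPM includes into /etc/sysconfig/nftables.conf, validating the candidate file with `nft -c -f %s` before the write is kept. From the logged block and marker parameters (marker "# {mark} ANSIBLE MANAGED BLOCK", BEGIN/END), the managed block in that file should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK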
Nov 29 07:23:33 compute-0 sudo[147228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukcjsqbysrcdcqcznnnvecrlwqrrwmel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401012.7495153-257-245396692278690/AnsiballZ_command.py'
Nov 29 07:23:33 compute-0 sudo[147228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:33 compute-0 python3.9[147230]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:33 compute-0 sudo[147228]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:33 compute-0 sudo[147381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljqpnxpdasuccwpmizbklsdvlfyojma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401013.4518318-265-237577463973525/AnsiballZ_stat.py'
Nov 29 07:23:33 compute-0 sudo[147381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:33 compute-0 ceph-mon[75050]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:33 compute-0 python3.9[147383]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:23:33 compute-0 sudo[147381]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:34 compute-0 sudo[147535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdkruiamemdnbqvlyhcwhgfitobrjqmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401014.1244693-273-68827637167459/AnsiballZ_command.py'
Nov 29 07:23:34 compute-0 sudo[147535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:34 compute-0 python3.9[147537]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:34 compute-0 sudo[147535]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:35 compute-0 sudo[147690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phpagrslyvaczkynnpejqlwcwwkyogau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401014.9034681-281-265638209071186/AnsiballZ_file.py'
Nov 29 07:23:35 compute-0 sudo[147690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:35 compute-0 python3.9[147692]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:35 compute-0 sudo[147690]: pam_unix(sudo:session): session closed for user root
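Taken together, the tasks from 07:23:30 to 07:23:35 implement a marker-file handshake: touching edpm-rules.nft.changed records that new rules were written, the stat at 07:23:33 gates the apply step on that marker, and the marker is removed once the rules are live. A condensed sketch of the same flow, using the fragment files from the validation step:

    # Load (or re-create) the chain definitions first; re-applying the
    # chains file on an unchanged system is harmless.
    nft -f /etc/nftables/edpm-chains.nft
    # Apply flushes, rules and jump updates only when the copy step left
    # its marker; a single nft -f run is applied atomically.
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi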
Nov 29 07:23:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:23:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:23:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:23:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:23:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:23:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:23:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:35 compute-0 ceph-mon[75050]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:36 compute-0 python3.9[147842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:23:37 compute-0 sudo[147993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtkvbfhrbyrmrxsaqcdkjikgluwyurbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401017.3013594-321-47083639442183/AnsiballZ_command.py'
Nov 29 07:23:37 compute-0 sudo[147993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:37 compute-0 python3.9[147995]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:37 compute-0 ovs-vsctl[147996]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 07:23:37 compute-0 ceph-mon[75050]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:37 compute-0 sudo[147993]: pam_unix(sudo:session): session closed for user root
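The ovs-vsctl call at 07:23:37 writes the chassis registration that ovn-controller reads from the local Open_vSwitch table: tunnel endpoint (ovn-encap-ip), encapsulation type (geneve), bridge mappings, and the southbound database endpoint (ovn-remote). A quick way to read those keys back, as a sketch:

    # Dump everything ovn-controller will consume:
    ovs-vsctl --columns=external_ids list Open_vSwitch .
    # Or query individual keys:
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip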
Nov 29 07:23:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:38 compute-0 sudo[148146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtzmksjsyyqsemcdrrhycruqcpmzhupl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401018.5880055-330-174455517605990/AnsiballZ_command.py'
Nov 29 07:23:38 compute-0 sudo[148146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:39 compute-0 python3.9[148148]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:39 compute-0 sudo[148146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:39 compute-0 sudo[148301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnmblaravyhzlqkxbrtbdwdmelvskptc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401019.3960392-338-167546486490488/AnsiballZ_command.py'
Nov 29 07:23:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:39 compute-0 sudo[148301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:39 compute-0 python3.9[148303]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:23:39 compute-0 ovs-vsctl[148304]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 07:23:39 compute-0 sudo[148301]: pam_unix(sudo:session): session closed for user root
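The grep check at 07:23:39 fails when no Manager row exists yet, which is what allows the follow-up task to register a passive OVSDB listener on localhost (the ovs-vsctl INFO line records the executed command verbatim). The guarded sequence, reconstructed as a standalone sketch:

    # Register a local OVSDB manager only when none is configured yet.
    if ! ovs-vsctl show | grep -q "Manager"; then
        ovs-vsctl --timeout=5 --id=@manager \
            -- create Manager target='"ptcp:6640:127.0.0.1"' \
            -- add Open_vSwitch . manager_options @manager
    fi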
Nov 29 07:23:40 compute-0 python3.9[148454]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:23:40 compute-0 ceph-mon[75050]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:41 compute-0 sudo[148606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxjzesxwnqgimvarmrimkezomiblpabl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401020.953131-355-251814747223459/AnsiballZ_file.py'
Nov 29 07:23:41 compute-0 sudo[148606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:41 compute-0 python3.9[148608]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:41 compute-0 sudo[148606]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:41 compute-0 ceph-mon[75050]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:41 compute-0 sudo[148758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlltzgmruytoqwijbesoelvukgxjvjjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401021.6102524-363-120239697335002/AnsiballZ_stat.py'
Nov 29 07:23:41 compute-0 sudo[148758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:42 compute-0 python3.9[148760]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:42 compute-0 sudo[148758]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:42 compute-0 sudo[148836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvezjolnzdtrwhneqhetpbnezwgjfwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401021.6102524-363-120239697335002/AnsiballZ_file.py'
Nov 29 07:23:42 compute-0 sudo[148836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:42 compute-0 python3.9[148838]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:42 compute-0 sudo[148836]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:43 compute-0 sudo[148988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huhnzmadszspfdwxjchfdhqrtgkvfjcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401022.7984374-363-249586632498653/AnsiballZ_stat.py'
Nov 29 07:23:43 compute-0 sudo[148988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:43 compute-0 python3.9[148990]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:43 compute-0 sudo[148988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:43 compute-0 sudo[149066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqmbwvoqsndyahwdnadlvsvuukowquqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401022.7984374-363-249586632498653/AnsiballZ_file.py'
Nov 29 07:23:43 compute-0 sudo[149066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:43 compute-0 python3.9[149068]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:43 compute-0 sudo[149066]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:44 compute-0 sudo[149218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfiuzyiwgjgrtztwyynkcyksgrpmimdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401023.9151697-386-156228764047984/AnsiballZ_file.py'
Nov 29 07:23:44 compute-0 sudo[149218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:44 compute-0 python3.9[149220]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:44 compute-0 sudo[149218]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:45 compute-0 sudo[149370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abshwsfwxftnbezwxghwcpmgsqkzczcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401024.7965803-394-232683780870096/AnsiballZ_stat.py'
Nov 29 07:23:45 compute-0 sudo[149370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:45 compute-0 python3.9[149372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:45 compute-0 sudo[149370]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:45 compute-0 sudo[149448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dumisxnmrdwvyzjjuiumpnkibolxuvud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401024.7965803-394-232683780870096/AnsiballZ_file.py'
Nov 29 07:23:45 compute-0 sudo[149448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:45 compute-0 python3.9[149450]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:46 compute-0 sudo[149448]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:46 compute-0 sudo[149600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfusuvrttbsxydwfctzwdmwssdxquuwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401026.2110925-406-122360408198189/AnsiballZ_stat.py'
Nov 29 07:23:46 compute-0 sudo[149600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:46 compute-0 ceph-mon[75050]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:46 compute-0 ceph-mon[75050]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:46 compute-0 python3.9[149602]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:46 compute-0 sudo[149600]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:47 compute-0 sudo[149678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpvvbajmhagzwoawgvhgtdmpahjkedbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401026.2110925-406-122360408198189/AnsiballZ_file.py'
Nov 29 07:23:47 compute-0 sudo[149678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:47 compute-0 python3.9[149680]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:47 compute-0 sudo[149678]: pam_unix(sudo:session): session closed for user root
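Preset files under /etc/systemd/system-preset decide whether a freshly installed unit defaults to enabled or disabled; the 91-edpm-container-shutdown.preset staged here backs the enable performed at 07:23:48. Its body is presumably the single directive below (an assumption: the log records only the copy, not the file content):

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed content)
    enable edpm-container-shutdown.service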
Nov 29 07:23:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:47 compute-0 sudo[149830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwwtkxfzwtdjvrggfjudckfikjkjscjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401027.4914675-418-12324152050269/AnsiballZ_systemd.py'
Nov 29 07:23:47 compute-0 sudo[149830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:48 compute-0 python3.9[149832]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:23:48 compute-0 systemd[1]: Reloading.
Nov 29 07:23:48 compute-0 systemd-rc-local-generator[149860]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:23:48 compute-0 systemd-sysv-generator[149864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:23:48 compute-0 sudo[149830]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:49 compute-0 sudo[150019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wetqravxnhkeimqxvzvgdbqdrrnubtsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401028.6931338-426-53662222619006/AnsiballZ_stat.py'
Nov 29 07:23:49 compute-0 sudo[150019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:49 compute-0 python3.9[150021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:49 compute-0 sudo[150019]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:49 compute-0 sudo[150097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wirveqikdjsrbgybbjjjampyrcesgwnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401028.6931338-426-53662222619006/AnsiballZ_file.py'
Nov 29 07:23:49 compute-0 sudo[150097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:49 compute-0 python3.9[150099]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:49 compute-0 sudo[150097]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:50 compute-0 ceph-mon[75050]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:50 compute-0 sudo[150249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozmnqsgcznnuikeummcavvmrcmwzxzcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401030.5593672-438-133171147553158/AnsiballZ_stat.py'
Nov 29 07:23:50 compute-0 sudo[150249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:51 compute-0 python3.9[150251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:51 compute-0 sudo[150249]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:51 compute-0 sudo[150327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpppkaekbjwpqhwzkaeqatvenephtxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401030.5593672-438-133171147553158/AnsiballZ_file.py'
Nov 29 07:23:51 compute-0 sudo[150327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:51 compute-0 python3.9[150329]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:51 compute-0 sudo[150327]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:51 compute-0 ceph-mon[75050]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:52 compute-0 sudo[150479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erkpsfrsodphdpipvwkruzkaczqxraaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401031.7067237-450-56193378959017/AnsiballZ_systemd.py'
Nov 29 07:23:52 compute-0 sudo[150479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:52 compute-0 python3.9[150481]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:23:52 compute-0 systemd[1]: Reloading.
Nov 29 07:23:52 compute-0 systemd-rc-local-generator[150510]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:23:52 compute-0 systemd-sysv-generator[150515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:23:52 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:23:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:23:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:23:52 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:23:52 compute-0 sudo[150479]: pam_unix(sudo:session): session closed for user root
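The start/deactivate pair above is the signature of a short-lived oneshot unit: systemd launches "Create netns directory", the unit runs to completion, and both it and its run-netns-placeholder.mount dependency are reported deactivated within the same second. Two commands to confirm the unit landed and its preset took effect, as a sketch:

    systemctl is-enabled netns-placeholder.service   # expect: enabled (via 91-netns-placeholder.preset)
    systemctl cat netns-placeholder.service          # print the unit file installed at 07:23:49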
Nov 29 07:23:53 compute-0 ceph-mon[75050]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:53 compute-0 sudo[150672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goupwvvocnpaqpwxiymnqjzkuxvjqzcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401033.024078-460-236703124403988/AnsiballZ_file.py'
Nov 29 07:23:53 compute-0 sudo[150672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:53 compute-0 python3.9[150674]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:53 compute-0 sudo[150672]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:53 compute-0 sudo[150824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfhobcmmzcdmbeorrseoudaatjtfqzwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401033.6979918-468-84700945894549/AnsiballZ_stat.py'
Nov 29 07:23:53 compute-0 sudo[150824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:54 compute-0 python3.9[150826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:54 compute-0 sudo[150824]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:54 compute-0 sudo[150947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fusavuotjevkcihelpayklrxzkgxctzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401033.6979918-468-84700945894549/AnsiballZ_copy.py'
Nov 29 07:23:54 compute-0 sudo[150947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:54 compute-0 python3.9[150949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401033.6979918-468-84700945894549/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:54 compute-0 sudo[150947]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:55 compute-0 ceph-mon[75050]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:55 compute-0 sudo[151099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utsnmqjujnjvqcyoyputoregaqvoukec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401035.2097569-485-268263405429476/AnsiballZ_file.py'
Nov 29 07:23:55 compute-0 sudo[151099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:55 compute-0 python3.9[151101]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:23:55 compute-0 sudo[151099]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:56 compute-0 sudo[151251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diptxgclnrerhozhmmlmrochoopoqhqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401035.9160252-493-144675880195880/AnsiballZ_stat.py'
Nov 29 07:23:56 compute-0 sudo[151251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:56 compute-0 python3.9[151253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:23:56 compute-0 sudo[151251]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:56 compute-0 ceph-mon[75050]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:56 compute-0 sudo[151374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzngtibszbkhrijavrqfgpywycrtdzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401035.9160252-493-144675880195880/AnsiballZ_copy.py'
Nov 29 07:23:56 compute-0 sudo[151374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:57 compute-0 python3.9[151376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401035.9160252-493-144675880195880/.source.json _original_basename=.6u5zn75a follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:57 compute-0 sudo[151374]: pam_unix(sudo:session): session closed for user root
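The JSON copied at 07:23:57 is kolla's per-container instruction file: on the host it lives at /var/lib/kolla/config_files/ovn_controller.json, and the volume list in the podman create below bind-mounts it read-only to /var/lib/kolla/config_files/config.json, where kolla_set_configs reads it at container start (KOLLA_CONFIG_STRATEGY=COPY_ALWAYS). A sketch for eyeballing both sides of that mapping:

    # Host side: the copied file, mode 0600 as logged.
    ls -l /var/lib/kolla/config_files/ovn_controller.json
    # Container side, once ovn_controller is running: the same bytes.
    podman exec ovn_controller cat /var/lib/kolla/config_files/config.json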
Nov 29 07:23:57 compute-0 sudo[151526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oshzwswdoxfgjrhzbawbciobzmvtplzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401037.2804017-508-91302508983443/AnsiballZ_file.py'
Nov 29 07:23:57 compute-0 sudo[151526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:57 compute-0 python3.9[151528]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:23:57 compute-0 sudo[151526]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:58 compute-0 sudo[151678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wprtwlieurxmfqlhzxikslmexymsstfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401038.0702927-516-93689646865916/AnsiballZ_stat.py'
Nov 29 07:23:58 compute-0 sudo[151678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:23:58 compute-0 sudo[151678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:59 compute-0 sudo[151801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srzovxtxnvlrgzgwvgphqgflwnmixkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401038.0702927-516-93689646865916/AnsiballZ_copy.py'
Nov 29 07:23:59 compute-0 sudo[151801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:23:59 compute-0 sudo[151801]: pam_unix(sudo:session): session closed for user root
Nov 29 07:23:59 compute-0 ceph-mon[75050]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:23:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:00 compute-0 sudo[151953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifqbcvgovvvoxynukwhtkijhgvtmuxud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401039.561256-533-109549700705259/AnsiballZ_container_config_data.py'
Nov 29 07:24:00 compute-0 sudo[151953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:00 compute-0 python3.9[151955]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 07:24:00 compute-0 sudo[151953]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:00 compute-0 ceph-mon[75050]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:00 compute-0 sudo[152105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plnxobzginxvdxriepbjeuudieecjzvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401040.473045-542-112603994275768/AnsiballZ_container_config_hash.py'
Nov 29 07:24:00 compute-0 sudo[152105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:01 compute-0 python3.9[152107]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:24:01 compute-0 sudo[152105]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:02 compute-0 sudo[152257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrokioreltrbauqhqecrwsgwwixavfrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401041.4951534-551-95201645792593/AnsiballZ_podman_container_info.py'
Nov 29 07:24:02 compute-0 sudo[152257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:02 compute-0 python3.9[152259]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:24:02 compute-0 sudo[152257]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:03 compute-0 ceph-mon[75050]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:03 compute-0 sudo[152436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpcczrsqqcvqzxshdqiqudsgqcqxqtwf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401043.0788105-564-252522483131110/AnsiballZ_edpm_container_manage.py'
Nov 29 07:24:03 compute-0 sudo[152436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:03 compute-0 python3[152438]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:24:05 compute-0 ceph-mon[75050]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:24:05
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', '.mgr']
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:24:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:24:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:24:07 compute-0 ceph-mon[75050]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:08 compute-0 podman[152452]: 2025-11-29 07:24:08.71685391 +0000 UTC m=+4.759283852 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:24:08 compute-0 podman[152569]: 2025-11-29 07:24:08.850222608 +0000 UTC m=+0.046807414 container create 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 07:24:08 compute-0 podman[152569]: 2025-11-29 07:24:08.825574203 +0000 UTC m=+0.022159059 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:24:08 compute-0 python3[152438]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 07:24:08 compute-0 sudo[152436]: pam_unix(sudo:session): session closed for user root
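Note that edpm_container_manage only runs `podman create` here; the container sits in the created state until the edpm_ovn_controller systemd unit starts it at 07:24:13. Two inspect queries to verify that intermediate state, as a sketch:

    podman inspect ovn_controller --format '{{.State.Status}}'                       # "created" until systemd starts it
    podman inspect ovn_controller --format '{{ index .Config.Labels "config_id" }}'  # "ovn_controller"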
Nov 29 07:24:09 compute-0 ceph-mon[75050]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:09 compute-0 sudo[152757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frbbbcbdwkwvshjsrujxzvpoznraveue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401049.166255-572-128333075201791/AnsiballZ_stat.py'
Nov 29 07:24:09 compute-0 sudo[152757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:09 compute-0 python3.9[152759]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:24:09 compute-0 sudo[152757]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:10 compute-0 sudo[152911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpcmfikhrxuydeyfqueiaxudzcvbqhsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401049.9505136-581-222284893066038/AnsiballZ_file.py'
Nov 29 07:24:10 compute-0 sudo[152911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:10 compute-0 python3.9[152913]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:10 compute-0 sudo[152911]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:10 compute-0 sudo[152987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sebtfrxvewtasoxbsuuompssrkebsjuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401049.9505136-581-222284893066038/AnsiballZ_stat.py'
Nov 29 07:24:10 compute-0 sudo[152987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:10 compute-0 python3.9[152989]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:24:10 compute-0 sudo[152987]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:11 compute-0 ceph-mon[75050]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:11 compute-0 sudo[153138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfoklyxyirzrnxorxgqxrcdinkexqmof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401050.9780045-581-194745166556947/AnsiballZ_copy.py'
Nov 29 07:24:11 compute-0 sudo[153138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:11 compute-0 python3.9[153140]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401050.9780045-581-194745166556947/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:11 compute-0 sudo[153138]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:12 compute-0 sudo[153214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alzquaswknbotcjvjemojxfzadalrbve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401050.9780045-581-194745166556947/AnsiballZ_systemd.py'
Nov 29 07:24:12 compute-0 sudo[153214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:12 compute-0 python3.9[153216]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:24:12 compute-0 systemd[1]: Reloading.
Nov 29 07:24:12 compute-0 systemd-rc-local-generator[153243]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:12 compute-0 systemd-sysv-generator[153246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:12 compute-0 sudo[153214]: pam_unix(sudo:session): session closed for user root
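The three tasks from 07:24:11 to 07:24:13 follow the standard unit-rollout order: install the unit file, daemon-reload so systemd picks it up, then enable and restart. The equivalent by hand, as a sketch (the unit body itself is not shown in the log):

    install -m 0644 -o root -g root edpm_ovn_controller.service \
        /etc/systemd/system/edpm_ovn_controller.service
    systemctl daemon-reload
    systemctl enable edpm_ovn_controller.service
    systemctl restart edpm_ovn_controller.service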
Nov 29 07:24:13 compute-0 sudo[153326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvzvgrszwumwmqjvdieflsvzffockhku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401050.9780045-581-194745166556947/AnsiballZ_systemd.py'
Nov 29 07:24:13 compute-0 sudo[153326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:13 compute-0 ceph-mon[75050]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:13 compute-0 python3.9[153328]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:24:13 compute-0 systemd[1]: Reloading.
Nov 29 07:24:13 compute-0 systemd-rc-local-generator[153356]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:13 compute-0 systemd-sysv-generator[153360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:13 compute-0 systemd[1]: Starting ovn_controller container...
Nov 29 07:24:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e70d3ce46c267809e34f298c924f2f955d3549130488a083f5b7f9f5ca336a/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:13 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8.
Nov 29 07:24:13 compute-0 podman[153368]: 2025-11-29 07:24:13.797462294 +0000 UTC m=+0.117609627 container init 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:24:13 compute-0 ovn_controller[153383]: + sudo -E kolla_set_configs
Nov 29 07:24:13 compute-0 podman[153368]: 2025-11-29 07:24:13.831320763 +0000 UTC m=+0.151468096 container start 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 07:24:13 compute-0 edpm-start-podman-container[153368]: ovn_controller
Nov 29 07:24:13 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 29 07:24:13 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 07:24:13 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 07:24:13 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 29 07:24:13 compute-0 systemd[153419]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 29 07:24:13 compute-0 edpm-start-podman-container[153367]: Creating additional drop-in dependency for "ovn_controller" (23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8)
Nov 29 07:24:13 compute-0 podman[153390]: 2025-11-29 07:24:13.913726804 +0000 UTC m=+0.070701590 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 07:24:13 compute-0 systemd[1]: 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8-719f575451f10305.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 07:24:13 compute-0 systemd[1]: 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8-719f575451f10305.service: Failed with result 'exit-code'.
Nov 29 07:24:13 compute-0 systemd[1]: Reloading.
Nov 29 07:24:14 compute-0 systemd-rc-local-generator[153471]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:24:14 compute-0 systemd-sysv-generator[153474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:24:14 compute-0 systemd[153419]: Queued start job for default target Main User Target.
Nov 29 07:24:14 compute-0 systemd[153419]: Created slice User Application Slice.
Nov 29 07:24:14 compute-0 systemd[153419]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 07:24:14 compute-0 systemd[153419]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 07:24:14 compute-0 systemd[153419]: Reached target Paths.
Nov 29 07:24:14 compute-0 systemd[153419]: Reached target Timers.
Nov 29 07:24:14 compute-0 systemd[153419]: Starting D-Bus User Message Bus Socket...
Nov 29 07:24:14 compute-0 systemd[153419]: Starting Create User's Volatile Files and Directories...
Nov 29 07:24:14 compute-0 systemd[153419]: Listening on D-Bus User Message Bus Socket.
Nov 29 07:24:14 compute-0 systemd[153419]: Reached target Sockets.
Nov 29 07:24:14 compute-0 systemd[153419]: Finished Create User's Volatile Files and Directories.
Nov 29 07:24:14 compute-0 systemd[153419]: Reached target Basic System.
Nov 29 07:24:14 compute-0 systemd[153419]: Reached target Main User Target.
Nov 29 07:24:14 compute-0 systemd[153419]: Startup finished in 157ms.
Nov 29 07:24:14 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 29 07:24:14 compute-0 systemd[1]: Started ovn_controller container.
Nov 29 07:24:14 compute-0 systemd[1]: Started Session c1 of User root.
Nov 29 07:24:14 compute-0 sudo[153326]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:14 compute-0 ovn_controller[153383]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:24:14 compute-0 ovn_controller[153383]: INFO:__main__:Validating config file
Nov 29 07:24:14 compute-0 ovn_controller[153383]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:24:14 compute-0 ovn_controller[153383]: INFO:__main__:Writing out command to execute
Nov 29 07:24:14 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 07:24:14 compute-0 ovn_controller[153383]: ++ cat /run_command
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + ARGS=
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + sudo kolla_copy_cacerts
Nov 29 07:24:14 compute-0 systemd[1]: Started Session c2 of User root.
Nov 29 07:24:14 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + [[ ! -n '' ]]
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + . kolla_extend_start
Nov 29 07:24:14 compute-0 ovn_controller[153383]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + umask 0022
Nov 29 07:24:14 compute-0 ovn_controller[153383]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 07:24:14 compute-0 NetworkManager[48962]: <info>  [1764401054.4426] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 07:24:14 compute-0 NetworkManager[48962]: <info>  [1764401054.4435] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:24:14 compute-0 NetworkManager[48962]: <info>  [1764401054.4446] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 07:24:14 compute-0 NetworkManager[48962]: <info>  [1764401054.4453] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 07:24:14 compute-0 NetworkManager[48962]: <info>  [1764401054.4456] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:24:14 compute-0 kernel: br-int: entered promiscuous mode
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 07:24:14 compute-0 ovn_controller[153383]: 2025-11-29T07:24:14Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 07:24:14 compute-0 systemd-udevd[153541]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:24:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:24:14 compute-0 sudo[153646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wluoynedpeqznkjlqpoufvltthluunbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401054.484701-609-265222408832532/AnsiballZ_command.py'
Nov 29 07:24:14 compute-0 sudo[153646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:14 compute-0 python3.9[153648]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:14 compute-0 ovs-vsctl[153649]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 07:24:14 compute-0 sudo[153646]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:15 compute-0 ovn_controller[153383]: 2025-11-29T07:24:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:24:15 compute-0 ovn_controller[153383]: 2025-11-29T07:24:15Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 07:24:15 compute-0 ovn_controller[153383]: 2025-11-29T07:24:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:24:15 compute-0 ovn_controller[153383]: 2025-11-29T07:24:15Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 07:24:15 compute-0 ovn_controller[153383]: 2025-11-29T07:24:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:24:15 compute-0 ovn_controller[153383]: 2025-11-29T07:24:15Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 07:24:15 compute-0 NetworkManager[48962]: <info>  [1764401055.2360] manager: (ovn-0dd8a0-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 07:24:15 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 07:24:15 compute-0 NetworkManager[48962]: <info>  [1764401055.2523] device (genev_sys_6081): carrier: link connected
Nov 29 07:24:15 compute-0 NetworkManager[48962]: <info>  [1764401055.2528] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 07:24:15 compute-0 ceph-mon[75050]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:15 compute-0 sudo[153801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqecopyjoagrrtnkepxfzwtejywvlnkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401055.1253016-617-181054439240425/AnsiballZ_command.py'
Nov 29 07:24:15 compute-0 sudo[153801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:15 compute-0 python3.9[153803]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:15 compute-0 ovs-vsctl[153805]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 07:24:15 compute-0 sudo[153801]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:16 compute-0 sudo[153956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxblltuesueudjeyndtwtgubxfbdwuvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401056.3787992-631-255608356773654/AnsiballZ_command.py'
Nov 29 07:24:16 compute-0 sudo[153956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:16 compute-0 python3.9[153958]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:24:16 compute-0 ovs-vsctl[153959]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 07:24:16 compute-0 sudo[153956]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:17 compute-0 ceph-mon[75050]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:17 compute-0 sshd-session[141776]: Connection closed by 192.168.122.30 port 54614
Nov 29 07:24:17 compute-0 sshd-session[141773]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:24:17 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 07:24:17 compute-0 systemd[1]: session-45.scope: Consumed 59.915s CPU time.
Nov 29 07:24:17 compute-0 systemd-logind[807]: Session 45 logged out. Waiting for processes to exit.
Nov 29 07:24:17 compute-0 systemd-logind[807]: Removed session 45.
Nov 29 07:24:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:19 compute-0 ceph-mon[75050]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:21 compute-0 ceph-mon[75050]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:22 compute-0 sudo[153984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:22 compute-0 sudo[153984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:22 compute-0 sudo[153984]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:22 compute-0 sudo[154009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:24:22 compute-0 sudo[154009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:22 compute-0 sudo[154009]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:22 compute-0 sudo[154034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:22 compute-0 sudo[154034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:22 compute-0 sudo[154034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:22 compute-0 sudo[154059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:24:22 compute-0 sudo[154059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:22 compute-0 sudo[154059]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:24:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:24:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:24:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:24:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:24:22 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 2ebe0cae-2aba-4f7e-8162-1686a74bafd7 does not exist
Nov 29 07:24:22 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 855f863e-3272-4992-8169-e9083734c86c does not exist
Nov 29 07:24:22 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 41cd6fac-9c4b-43de-b071-5a63d3e43233 does not exist
Nov 29 07:24:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:24:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:24:22 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:24:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:24:22 compute-0 sudo[154116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:22 compute-0 sudo[154116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:22 compute-0 sudo[154116]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:22 compute-0 ceph-mon[75050]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:24:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:24:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:24:23 compute-0 sudo[154141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:24:23 compute-0 sudo[154141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:23 compute-0 sudo[154141]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:23 compute-0 sudo[154166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:23 compute-0 sudo[154166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:23 compute-0 sudo[154166]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:23 compute-0 sudo[154191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:24:23 compute-0 sudo[154191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:23 compute-0 podman[154256]: 2025-11-29 07:24:23.46536047 +0000 UTC m=+0.040128308 container create f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:24:23 compute-0 systemd[1]: Started libpod-conmon-f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e.scope.
Nov 29 07:24:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:24:23 compute-0 podman[154256]: 2025-11-29 07:24:23.448700134 +0000 UTC m=+0.023467952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:24:23 compute-0 podman[154256]: 2025-11-29 07:24:23.551249877 +0000 UTC m=+0.126017765 container init f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:24:23 compute-0 podman[154256]: 2025-11-29 07:24:23.559557021 +0000 UTC m=+0.134324819 container start f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:24:23 compute-0 podman[154256]: 2025-11-29 07:24:23.563673942 +0000 UTC m=+0.138441780 container attach f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:24:23 compute-0 dreamy_mirzakhani[154272]: 167 167
Nov 29 07:24:23 compute-0 systemd[1]: libpod-f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e.scope: Deactivated successfully.
Nov 29 07:24:23 compute-0 conmon[154272]: conmon f7560acaf789f3f056bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e.scope/container/memory.events
Nov 29 07:24:23 compute-0 podman[154256]: 2025-11-29 07:24:23.567572036 +0000 UTC m=+0.142339844 container died f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:24:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7fd1bd5b1ce0f0e54f838b79ffdb3e155ef434e815257bc2de811ae5dcf2005-merged.mount: Deactivated successfully.
Nov 29 07:24:23 compute-0 podman[154256]: 2025-11-29 07:24:23.606706467 +0000 UTC m=+0.181474265 container remove f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:24:23 compute-0 systemd[1]: libpod-conmon-f7560acaf789f3f056bc2bd5f0e75369905d7e61c603ef7b9568484d8e934f3e.scope: Deactivated successfully.
Nov 29 07:24:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:23 compute-0 podman[154298]: 2025-11-29 07:24:23.78254528 +0000 UTC m=+0.043926861 container create 4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dirac, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:24:23 compute-0 systemd[1]: Started libpod-conmon-4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2.scope.
Nov 29 07:24:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7372c33c56f090d1403178e2e259c11b8f84e5dfb3591a3b81410d5c0b334d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7372c33c56f090d1403178e2e259c11b8f84e5dfb3591a3b81410d5c0b334d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7372c33c56f090d1403178e2e259c11b8f84e5dfb3591a3b81410d5c0b334d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7372c33c56f090d1403178e2e259c11b8f84e5dfb3591a3b81410d5c0b334d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7372c33c56f090d1403178e2e259c11b8f84e5dfb3591a3b81410d5c0b334d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:23 compute-0 podman[154298]: 2025-11-29 07:24:23.766352385 +0000 UTC m=+0.027733986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:24:23 compute-0 podman[154298]: 2025-11-29 07:24:23.866401813 +0000 UTC m=+0.127783424 container init 4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dirac, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:24:23 compute-0 podman[154298]: 2025-11-29 07:24:23.876079402 +0000 UTC m=+0.137460983 container start 4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dirac, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:24:23 compute-0 podman[154298]: 2025-11-29 07:24:23.88081119 +0000 UTC m=+0.142192871 container attach 4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dirac, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:24:24 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 29 07:24:24 compute-0 systemd[153419]: Activating special unit Exit the Session...
Nov 29 07:24:24 compute-0 systemd[153419]: Stopped target Main User Target.
Nov 29 07:24:24 compute-0 systemd[153419]: Stopped target Basic System.
Nov 29 07:24:24 compute-0 systemd[153419]: Stopped target Paths.
Nov 29 07:24:24 compute-0 systemd[153419]: Stopped target Sockets.
Nov 29 07:24:24 compute-0 systemd[153419]: Stopped target Timers.
Nov 29 07:24:24 compute-0 systemd[153419]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 07:24:24 compute-0 systemd[153419]: Closed D-Bus User Message Bus Socket.
Nov 29 07:24:24 compute-0 systemd[153419]: Stopped Create User's Volatile Files and Directories.
Nov 29 07:24:24 compute-0 systemd[153419]: Removed slice User Application Slice.
Nov 29 07:24:24 compute-0 systemd[153419]: Reached target Shutdown.
Nov 29 07:24:24 compute-0 systemd[153419]: Finished Exit the Session.
Nov 29 07:24:24 compute-0 systemd[153419]: Reached target Exit the Session.
Nov 29 07:24:24 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 07:24:24 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 29 07:24:24 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 07:24:24 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 07:24:24 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 07:24:24 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 07:24:24 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 07:24:24 compute-0 sshd-session[154334]: Accepted publickey for zuul from 192.168.122.30 port 49624 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:24:24 compute-0 systemd-logind[807]: New session 47 of user zuul.
Nov 29 07:24:24 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 29 07:24:24 compute-0 sshd-session[154334]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:24:24 compute-0 kind_dirac[154315]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:24:24 compute-0 kind_dirac[154315]: --> relative data size: 1.0
Nov 29 07:24:24 compute-0 kind_dirac[154315]: --> All data devices are unavailable
Nov 29 07:24:24 compute-0 systemd[1]: libpod-4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2.scope: Deactivated successfully.
Nov 29 07:24:24 compute-0 conmon[154315]: conmon 4abc03fce55502cfc20f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2.scope/container/memory.events
Nov 29 07:24:24 compute-0 podman[154298]: 2025-11-29 07:24:24.93523165 +0000 UTC m=+1.196613231 container died 4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c7372c33c56f090d1403178e2e259c11b8f84e5dfb3591a3b81410d5c0b334d-merged.mount: Deactivated successfully.
Nov 29 07:24:24 compute-0 ceph-mon[75050]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:24 compute-0 podman[154298]: 2025-11-29 07:24:24.99849656 +0000 UTC m=+1.259878141 container remove 4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:24:25 compute-0 systemd[1]: libpod-conmon-4abc03fce55502cfc20f5dcdc98e191df4b6b26fa591491f913d9f4e11e5c8a2.scope: Deactivated successfully.
Nov 29 07:24:25 compute-0 sudo[154191]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:25 compute-0 sudo[154417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:25 compute-0 sudo[154417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:25 compute-0 sudo[154417]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:25 compute-0 sudo[154442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:24:25 compute-0 sudo[154442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:25 compute-0 sudo[154442]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:25 compute-0 sudo[154467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:25 compute-0 sudo[154467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:25 compute-0 sudo[154467]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:25 compute-0 sudo[154492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:24:25 compute-0 sudo[154492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:25 compute-0 podman[154654]: 2025-11-29 07:24:25.616630812 +0000 UTC m=+0.053381554 container create 67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:24:25 compute-0 systemd[1]: Started libpod-conmon-67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f.scope.
Nov 29 07:24:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:24:25 compute-0 podman[154654]: 2025-11-29 07:24:25.590734877 +0000 UTC m=+0.027485699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:24:25 compute-0 podman[154654]: 2025-11-29 07:24:25.686235112 +0000 UTC m=+0.122985874 container init 67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:24:25 compute-0 podman[154654]: 2025-11-29 07:24:25.693108777 +0000 UTC m=+0.129859509 container start 67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:24:25 compute-0 podman[154654]: 2025-11-29 07:24:25.695941523 +0000 UTC m=+0.132692295 container attach 67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_babbage, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:24:25 compute-0 dreamy_babbage[154672]: 167 167
Nov 29 07:24:25 compute-0 podman[154654]: 2025-11-29 07:24:25.69732651 +0000 UTC m=+0.134077272 container died 67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:24:25 compute-0 systemd[1]: libpod-67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f.scope: Deactivated successfully.
Nov 29 07:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4bcef45c3cbf8d1c0b78b27b32a0dea549bc38140b85a005a1fcb5f950e11d4-merged.mount: Deactivated successfully.
Nov 29 07:24:25 compute-0 podman[154654]: 2025-11-29 07:24:25.732517646 +0000 UTC m=+0.169268388 container remove 67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:24:25 compute-0 systemd[1]: libpod-conmon-67f9357db91823ac4774182a7c931620f9dec325eed98eca77542d0072e4490f.scope: Deactivated successfully.
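[Annotation] The create/init/start/attach/died/remove burst above (container dreamy_babbage, alive for roughly 50 ms) is cephadm's usual pattern of probing the Ceph image with a throwaway container; the bare "167 167" printed by it is the uid and gid of the ceph user baked into the image. A minimal sketch of such a probe, assuming podman on the host and the image digest taken from the log (the probed path /var/lib/ceph is an assumption, not shown in the log):

    # Sketch only: reproduce the kind of uid/gid probe cephadm appears to run here.
    # The image digest is copied from the log; /var/lib/ceph as the probed path
    # is an assumption.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = int(out[0]), int(out[1])   # expect 167 167 on this image
    print(uid, gid)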
Nov 29 07:24:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:25 compute-0 python3.9[154656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:24:25 compute-0 podman[154696]: 2025-11-29 07:24:25.883501661 +0000 UTC m=+0.040210732 container create 7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:24:25 compute-0 systemd[1]: Started libpod-conmon-7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc.scope.
Nov 29 07:24:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddfb940d400e0b095ba3b9ef890f0d1cb974f31875cdfd833d9f15089e37d1b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddfb940d400e0b095ba3b9ef890f0d1cb974f31875cdfd833d9f15089e37d1b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddfb940d400e0b095ba3b9ef890f0d1cb974f31875cdfd833d9f15089e37d1b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddfb940d400e0b095ba3b9ef890f0d1cb974f31875cdfd833d9f15089e37d1b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:25 compute-0 podman[154696]: 2025-11-29 07:24:25.954854377 +0000 UTC m=+0.111563458 container init 7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:24:25 compute-0 podman[154696]: 2025-11-29 07:24:25.866538755 +0000 UTC m=+0.023247846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:24:25 compute-0 podman[154696]: 2025-11-29 07:24:25.96499438 +0000 UTC m=+0.121703451 container start 7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cohen, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:24:25 compute-0 podman[154696]: 2025-11-29 07:24:25.96836234 +0000 UTC m=+0.125071411 container attach 7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:24:26 compute-0 sshd-session[154344]: Invalid user admin from 143.14.121.41 port 33380
Nov 29 07:24:26 compute-0 sshd-session[154344]: Connection closed by invalid user admin 143.14.121.41 port 33380 [preauth]
Nov 29 07:24:26 compute-0 great_cohen[154717]: {
Nov 29 07:24:26 compute-0 great_cohen[154717]:     "0": [
Nov 29 07:24:26 compute-0 great_cohen[154717]:         {
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "devices": [
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "/dev/loop3"
Nov 29 07:24:26 compute-0 great_cohen[154717]:             ],
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_name": "ceph_lv0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_size": "21470642176",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "name": "ceph_lv0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "tags": {
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cluster_name": "ceph",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.crush_device_class": "",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.encrypted": "0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osd_id": "0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.type": "block",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.vdo": "0"
Nov 29 07:24:26 compute-0 great_cohen[154717]:             },
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "type": "block",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "vg_name": "ceph_vg0"
Nov 29 07:24:26 compute-0 great_cohen[154717]:         }
Nov 29 07:24:26 compute-0 great_cohen[154717]:     ],
Nov 29 07:24:26 compute-0 great_cohen[154717]:     "1": [
Nov 29 07:24:26 compute-0 great_cohen[154717]:         {
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "devices": [
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "/dev/loop4"
Nov 29 07:24:26 compute-0 great_cohen[154717]:             ],
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_name": "ceph_lv1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_size": "21470642176",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "name": "ceph_lv1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "tags": {
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cluster_name": "ceph",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.crush_device_class": "",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.encrypted": "0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osd_id": "1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.type": "block",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.vdo": "0"
Nov 29 07:24:26 compute-0 great_cohen[154717]:             },
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "type": "block",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "vg_name": "ceph_vg1"
Nov 29 07:24:26 compute-0 great_cohen[154717]:         }
Nov 29 07:24:26 compute-0 great_cohen[154717]:     ],
Nov 29 07:24:26 compute-0 great_cohen[154717]:     "2": [
Nov 29 07:24:26 compute-0 great_cohen[154717]:         {
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "devices": [
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "/dev/loop5"
Nov 29 07:24:26 compute-0 great_cohen[154717]:             ],
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_name": "ceph_lv2",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_size": "21470642176",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "name": "ceph_lv2",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "tags": {
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.cluster_name": "ceph",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.crush_device_class": "",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.encrypted": "0",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osd_id": "2",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.type": "block",
Nov 29 07:24:26 compute-0 great_cohen[154717]:                 "ceph.vdo": "0"
Nov 29 07:24:26 compute-0 great_cohen[154717]:             },
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "type": "block",
Nov 29 07:24:26 compute-0 great_cohen[154717]:             "vg_name": "ceph_vg2"
Nov 29 07:24:26 compute-0 great_cohen[154717]:         }
Nov 29 07:24:26 compute-0 great_cohen[154717]:     ]
Nov 29 07:24:26 compute-0 great_cohen[154717]: }
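[Annotation] The JSON block emitted by great_cohen above has the shape of ceph-volume "lvm list --format json" output: keys are OSD ids, each mapping to one LVM logical volume carrying the ceph.* lv_tags (cluster fsid 14ff1f30-5059-58f1-9a23-69871bb275a1, OSDs 0-2 on /dev/loop3-5). To recover the JSON from the journald framing it is enough to strip the line prefix; a sketch, assuming the log was saved to a file named messages.log (hypothetical name):

    # Sketch only: re-assemble the JSON printed by the great_cohen container
    # from the journald lines above. 'messages.log' is a hypothetical filename.
    import json, re

    prefix = re.compile(r"^.*? great_cohen\[\d+\]: ")
    with open("messages.log") as fh:
        body = "".join(prefix.sub("", ln) for ln in fh if " great_cohen[" in ln)

    lvs = json.loads(body)                  # {"0": [ {...} ], "1": [...], "2": [...]}
    for osd_id, entries in sorted(lvs.items()):
        print(osd_id, entries[0]["lv_path"], entries[0]["devices"])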
Nov 29 07:24:26 compute-0 systemd[1]: libpod-7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc.scope: Deactivated successfully.
Nov 29 07:24:26 compute-0 podman[154696]: 2025-11-29 07:24:26.732769262 +0000 UTC m=+0.889478413 container died 7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:24:26 compute-0 sudo[154891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kedgenkwhmkzgvydokbukyngevnayims ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401066.3807907-34-23878075542019/AnsiballZ_file.py'
Nov 29 07:24:26 compute-0 sudo[154891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:28 compute-0 python3.9[154893]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:28 compute-0 sudo[154891]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:28 compute-0 sshd-session[154799]: Invalid user orangepi from 143.14.121.41 port 33382
Nov 29 07:24:28 compute-0 sshd-session[154799]: Connection closed by invalid user orangepi 143.14.121.41 port 33382 [preauth]
Nov 29 07:24:29 compute-0 sudo[155047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxjcvkrvjcmylhgvucnxeqzgqczguowi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401068.6786354-34-164414190230846/AnsiballZ_file.py'
Nov 29 07:24:29 compute-0 sudo[155047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:29 compute-0 ceph-mon[75050]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:29 compute-0 python3.9[155049]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:30 compute-0 sudo[155047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddfb940d400e0b095ba3b9ef890f0d1cb974f31875cdfd833d9f15089e37d1b6-merged.mount: Deactivated successfully.
Nov 29 07:24:30 compute-0 podman[154696]: 2025-11-29 07:24:30.189820537 +0000 UTC m=+4.346529618 container remove 7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:24:30 compute-0 systemd[1]: libpod-conmon-7162342a7a50fadcc55e6f52b23d2e0f521d62c77a604589c2a377f55a1da3cc.scope: Deactivated successfully.
Nov 29 07:24:30 compute-0 sudo[154492]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:30 compute-0 sudo[155127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:30 compute-0 sudo[155127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:30 compute-0 sudo[155127]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:30 compute-0 sudo[155175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:24:30 compute-0 sudo[155175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:30 compute-0 sudo[155175]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:30 compute-0 sudo[155224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:30 compute-0 sudo[155224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:30 compute-0 sudo[155224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:30 compute-0 sudo[155274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxzwmqekfnrdczfepatyndqqfyqpgyxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401070.1800416-34-48503433082968/AnsiballZ_file.py'
Nov 29 07:24:30 compute-0 sudo[155274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:30 compute-0 sudo[155276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:24:30 compute-0 sudo[155276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:30 compute-0 python3.9[155281]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:30 compute-0 sudo[155274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:30 compute-0 podman[155344]: 2025-11-29 07:24:30.808024602 +0000 UTC m=+0.050449966 container create 6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:24:30 compute-0 systemd[1]: Started libpod-conmon-6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9.scope.
Nov 29 07:24:30 compute-0 ceph-mon[75050]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:30 compute-0 ceph-mon[75050]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:30 compute-0 podman[155344]: 2025-11-29 07:24:30.778886309 +0000 UTC m=+0.021311763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:24:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:24:30 compute-0 podman[155344]: 2025-11-29 07:24:30.903366803 +0000 UTC m=+0.145792187 container init 6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:24:30 compute-0 podman[155344]: 2025-11-29 07:24:30.919536466 +0000 UTC m=+0.161961830 container start 6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:24:30 compute-0 podman[155344]: 2025-11-29 07:24:30.92452908 +0000 UTC m=+0.166954434 container attach 6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:24:30 compute-0 zealous_mirzakhani[155403]: 167 167
Nov 29 07:24:30 compute-0 systemd[1]: libpod-6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9.scope: Deactivated successfully.
Nov 29 07:24:30 compute-0 conmon[155403]: conmon 6701e0be2330d373e10f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9.scope/container/memory.events
Nov 29 07:24:30 compute-0 podman[155344]: 2025-11-29 07:24:30.928203229 +0000 UTC m=+0.170628593 container died 6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bd10e2bda1ad21cfbdfaa03d402d34f92e91879c5d96d1b71bd556b345f266c-merged.mount: Deactivated successfully.
Nov 29 07:24:30 compute-0 podman[155344]: 2025-11-29 07:24:30.965012008 +0000 UTC m=+0.207437362 container remove 6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mirzakhani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:24:30 compute-0 systemd[1]: libpod-conmon-6701e0be2330d373e10fe7d0ebfe074a4d66c6d5644472d6981033a65c73b2e9.scope: Deactivated successfully.
Nov 29 07:24:31 compute-0 sshd-session[155039]: Connection closed by authenticating user root 143.14.121.41 port 33390 [preauth]
Nov 29 07:24:31 compute-0 podman[155494]: 2025-11-29 07:24:31.125218031 +0000 UTC m=+0.047537358 container create c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:24:31 compute-0 systemd[1]: Started libpod-conmon-c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e.scope.
Nov 29 07:24:31 compute-0 sudo[155543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxtrxheqiqxmfjqjqrrkyvlufebxnev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401070.8675406-34-22324980710410/AnsiballZ_file.py'
Nov 29 07:24:31 compute-0 sudo[155543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0597d3653c12150289df3e02a8e91c6358147f7663f1bc713f5d5b818354bb27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0597d3653c12150289df3e02a8e91c6358147f7663f1bc713f5d5b818354bb27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0597d3653c12150289df3e02a8e91c6358147f7663f1bc713f5d5b818354bb27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0597d3653c12150289df3e02a8e91c6358147f7663f1bc713f5d5b818354bb27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:24:31 compute-0 podman[155494]: 2025-11-29 07:24:31.103342233 +0000 UTC m=+0.025661570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:24:31 compute-0 python3.9[155549]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:31 compute-0 sudo[155543]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:31 compute-0 sudo[155702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbgpulsskxwdbcvfzeaafsiulragwrvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401071.505508-34-157816884289052/AnsiballZ_file.py'
Nov 29 07:24:31 compute-0 sudo[155702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:32 compute-0 podman[155494]: 2025-11-29 07:24:32.1888941 +0000 UTC m=+1.111213457 container init c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:24:32 compute-0 podman[155494]: 2025-11-29 07:24:32.205000643 +0000 UTC m=+1.127319970 container start c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:24:32 compute-0 podman[155494]: 2025-11-29 07:24:32.243523768 +0000 UTC m=+1.165843105 container attach c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:24:32 compute-0 python3.9[155704]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:32 compute-0 sudo[155702]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:33 compute-0 python3.9[155857]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:24:33 compute-0 ceph-mon[75050]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]: {
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "osd_id": 2,
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "type": "bluestore"
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:     },
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "osd_id": 1,
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "type": "bluestore"
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:     },
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "osd_id": 0,
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:         "type": "bluestore"
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]:     }
Nov 29 07:24:33 compute-0 beautiful_chaplygin[155547]: }
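[Annotation] This second JSON block matches the command recorded a few lines earlier (sudo[155276]: cephadm ... ceph-volume --fsid 14ff1f30-... -- raw list --format json): "raw list" keys by OSD uuid rather than OSD id and reports each mapped LV as a bluestore OSD. The two listings can be cross-checked against each other; a sketch, where lvm_list and raw_list stand for the objects parsed as in the previous sketch:

    # Sketch only: consistency check between the lvm and raw listings above.
    def check(lvm_list, raw_list):
        by_fsid = {e[0]["tags"]["ceph.osd_fsid"]: osd_id
                   for osd_id, e in lvm_list.items()}
        for osd_uuid, info in raw_list.items():
            # the raw entry's uuid must map back to the same OSD id
            assert by_fsid[osd_uuid] == str(info["osd_id"]), osd_uuid
            assert info["type"] == "bluestore"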
Nov 29 07:24:33 compute-0 systemd[1]: libpod-c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e.scope: Deactivated successfully.
Nov 29 07:24:33 compute-0 systemd[1]: libpod-c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e.scope: Consumed 1.131s CPU time.
Nov 29 07:24:33 compute-0 podman[155494]: 2025-11-29 07:24:33.335101958 +0000 UTC m=+2.257421325 container died c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:24:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0597d3653c12150289df3e02a8e91c6358147f7663f1bc713f5d5b818354bb27-merged.mount: Deactivated successfully.
Nov 29 07:24:33 compute-0 podman[155494]: 2025-11-29 07:24:33.400369191 +0000 UTC m=+2.322688508 container remove c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:24:33 compute-0 systemd[1]: libpod-conmon-c10436d230eee53586d502631e4757ade9e9dfa77671182e021fe917dc2d159e.scope: Deactivated successfully.
Nov 29 07:24:33 compute-0 sudo[155276]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:24:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:24:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:24:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:24:33 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 20df46e8-3eaa-4dbf-84c1-57d3a9897467 does not exist
Nov 29 07:24:33 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 63d3e0c4-95a2-432b-9a39-f857928998ae does not exist
Nov 29 07:24:33 compute-0 sudo[155974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:24:33 compute-0 sudo[155974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:33 compute-0 sudo[155974]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:33 compute-0 sudo[156020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:24:33 compute-0 sudo[156020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:24:33 compute-0 sudo[156020]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:33 compute-0 sudo[156097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxczccdduyjfwzitkybqiyrbooifptgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401073.2588315-78-215728713581539/AnsiballZ_seboolean.py'
Nov 29 07:24:33 compute-0 sudo[156097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:33 compute-0 python3.9[156099]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 07:24:34 compute-0 sshd-session[155551]: Connection closed by authenticating user root 143.14.121.41 port 33392 [preauth]
Nov 29 07:24:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:24:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:24:34 compute-0 ceph-mon[75050]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:34 compute-0 sudo[156097]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:35 compute-0 python3.9[156251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:24:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:24:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:24:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:24:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:24:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:24:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:36 compute-0 python3.9[156372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401074.7790616-86-34534060339832/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:36 compute-0 sshd-session[156100]: Connection closed by authenticating user root 143.14.121.41 port 54532 [preauth]
Nov 29 07:24:36 compute-0 ceph-mon[75050]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:36 compute-0 python3.9[156523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:37 compute-0 python3.9[156645]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401076.4599612-101-39780304755090/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:38 compute-0 sudo[156795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxqoyabpcnuyiaopvbomxdwyrhiwukbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401077.8049695-118-116047977203014/AnsiballZ_setup.py'
Nov 29 07:24:38 compute-0 sudo[156795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:38 compute-0 python3.9[156797]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:24:38 compute-0 sshd-session[156520]: Connection closed by authenticating user root 143.14.121.41 port 54536 [preauth]
Nov 29 07:24:38 compute-0 sudo[156795]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:38 compute-0 ceph-mon[75050]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:39 compute-0 sudo[156880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwliumevjldutnjjlcbnhuowshkjdgcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401077.8049695-118-116047977203014/AnsiballZ_dnf.py'
Nov 29 07:24:39 compute-0 sudo[156880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:39 compute-0 python3.9[156882]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:24:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:40 compute-0 sudo[156880]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:41 compute-0 sshd-session[156806]: Connection closed by authenticating user root 143.14.121.41 port 54548 [preauth]
Nov 29 07:24:41 compute-0 ceph-mon[75050]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:41 compute-0 sudo[157034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeqzpqmzppgckruuogawsjzhctgngaec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401080.9382155-130-94545119865074/AnsiballZ_systemd.py'
Nov 29 07:24:41 compute-0 sudo[157034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:41 compute-0 python3.9[157036]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:24:42 compute-0 sudo[157034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:44 compute-0 ovn_controller[153383]: 2025-11-29T07:24:44Z|00025|memory|INFO|16128 kB peak resident set size after 30.3 seconds
Nov 29 07:24:44 compute-0 ovn_controller[153383]: 2025-11-29T07:24:44Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 07:24:44 compute-0 podman[157192]: 2025-11-29 07:24:44.790197617 +0000 UTC m=+0.145962872 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
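[Annotation] The health_status=healthy event above is podman periodically executing the healthcheck configured in config_data ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller). The current state can be read back, or the check triggered on demand, with standard podman commands; a sketch, assuming the ovn_controller container is still running:

    # Sketch only: query the health state behind the event above.
    import subprocess

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_controller"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)   # expected: "healthy"

    # or run the configured test ('/openstack/healthcheck') on demand:
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=True)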
Nov 29 07:24:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:46 compute-0 sshd-session[157037]: Connection closed by authenticating user root 143.14.121.41 port 54560 [preauth]
Nov 29 07:24:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:47 compute-0 python3.9[157191]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:47 compute-0 ceph-mon[75050]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:47 compute-0 ceph-mon[75050]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 07:24:47 compute-0 ceph-mon[75050]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:48 compute-0 python3.9[157341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401082.2878542-138-29836468750570/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:48 compute-0 ceph-mon[75050]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:49 compute-0 python3.9[157491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:49 compute-0 sshd-session[157219]: Invalid user test from 143.14.121.41 port 37246
Nov 29 07:24:49 compute-0 sshd-session[157219]: Connection closed by invalid user test 143.14.121.41 port 37246 [preauth]
Nov 29 07:24:49 compute-0 python3.9[157612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401088.5853028-138-166599572779843/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:51 compute-0 python3.9[157764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:51 compute-0 sshd-session[157630]: Invalid user user from 143.14.121.41 port 37262
Nov 29 07:24:51 compute-0 ceph-mon[75050]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:51 compute-0 sshd-session[157630]: Connection closed by invalid user user 143.14.121.41 port 37262 [preauth]
Nov 29 07:24:51 compute-0 python3.9[157885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401090.5557225-182-52714166461949/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:52 compute-0 python3.9[158037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:52 compute-0 ceph-mon[75050]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:53 compute-0 python3.9[158158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401091.859076-182-174632251246716/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:53 compute-0 python3.9[158308]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:24:54 compute-0 sshd-session[157886]: Connection closed by authenticating user root 143.14.121.41 port 37266 [preauth]
Nov 29 07:24:54 compute-0 sudo[158462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbtgwyqqbvopiwyzlgfpjdnobtgfnpos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401094.3148167-220-47260971281342/AnsiballZ_file.py'
Nov 29 07:24:54 compute-0 sudo[158462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:54 compute-0 ceph-mon[75050]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:54 compute-0 python3.9[158464]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:54 compute-0 sudo[158462]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:55 compute-0 sudo[158614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyntngeakpsjwdhbpqklmjxkojlmcwnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401095.1677725-228-193459553080536/AnsiballZ_stat.py'
Nov 29 07:24:55 compute-0 sudo[158614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:55 compute-0 sshd-session[158387]: Invalid user admin from 143.14.121.41 port 41898
Nov 29 07:24:55 compute-0 python3.9[158616]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:55 compute-0 sudo[158614]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:55 compute-0 sshd-session[158387]: Connection closed by invalid user admin 143.14.121.41 port 41898 [preauth]
Nov 29 07:24:56 compute-0 sudo[158692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljrfkpuhnnaosmnrdlvtadsgboizdrwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401095.1677725-228-193459553080536/AnsiballZ_file.py'
Nov 29 07:24:56 compute-0 sudo[158692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:56 compute-0 python3.9[158694]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:56 compute-0 sudo[158692]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:56 compute-0 sudo[158846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzmlnraljlyuydksrmugwobtlckidix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401096.4554212-228-240642564768894/AnsiballZ_stat.py'
Nov 29 07:24:56 compute-0 sudo[158846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:56 compute-0 ceph-mon[75050]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:56 compute-0 python3.9[158848]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:56 compute-0 sudo[158846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:57 compute-0 sudo[158924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upsofqjjitnhdvkdteyhhqjtljpdgijt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401096.4554212-228-240642564768894/AnsiballZ_file.py'
Nov 29 07:24:57 compute-0 sudo[158924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:57 compute-0 sshd-session[158695]: Invalid user cirros from 143.14.121.41 port 41904
Nov 29 07:24:57 compute-0 python3.9[158926]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:24:57 compute-0 sudo[158924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:24:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:57 compute-0 sshd-session[158695]: Connection closed by invalid user cirros 143.14.121.41 port 41904 [preauth]
Nov 29 07:24:58 compute-0 sudo[159078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsejblhcwnwfdexkrcyjqnthwhwqpnsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401097.8611863-251-146491132474787/AnsiballZ_file.py'
Nov 29 07:24:58 compute-0 sudo[159078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:58 compute-0 python3.9[159080]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:58 compute-0 sudo[159078]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:58 compute-0 ceph-mon[75050]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:59 compute-0 sudo[159230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlhaofszypkpnxjomwfmzccxgxporram ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401098.6554961-259-120093075982744/AnsiballZ_stat.py'
Nov 29 07:24:59 compute-0 sudo[159230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:59 compute-0 python3.9[159232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:24:59 compute-0 sudo[159230]: pam_unix(sudo:session): session closed for user root
Nov 29 07:24:59 compute-0 sshd-session[159044]: Connection closed by authenticating user root 143.14.121.41 port 41908 [preauth]
Nov 29 07:24:59 compute-0 sudo[159308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nncjndjrapfxreqoekpkpeqbjvxuptcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401098.6554961-259-120093075982744/AnsiballZ_file.py'
Nov 29 07:24:59 compute-0 sudo[159308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:24:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:24:59 compute-0 python3.9[159310]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:24:59 compute-0 sudo[159308]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:00 compute-0 sudo[159462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyxxraqkrlprwzllajuimerzdbzresqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401100.0572379-271-204289485457275/AnsiballZ_stat.py'
Nov 29 07:25:00 compute-0 sudo[159462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:00 compute-0 python3.9[159464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:25:00 compute-0 sudo[159462]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:00 compute-0 ceph-mon[75050]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:00 compute-0 sudo[159540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahrwuzvjcewlmbowixintmccpeyxhmbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401100.0572379-271-204289485457275/AnsiballZ_file.py'
Nov 29 07:25:00 compute-0 sudo[159540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:01 compute-0 python3.9[159542]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:01 compute-0 sudo[159540]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:01 compute-0 sshd-session[159311]: Connection closed by authenticating user root 143.14.121.41 port 41914 [preauth]
Nov 29 07:25:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.888666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401101888756, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1343, "num_deletes": 251, "total_data_size": 2163089, "memory_usage": 2199808, "flush_reason": "Manual Compaction"}
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401101908367, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 2122207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8479, "largest_seqno": 9821, "table_properties": {"data_size": 2115912, "index_size": 3558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12576, "raw_average_key_size": 18, "raw_value_size": 2103306, "raw_average_value_size": 3177, "num_data_blocks": 167, "num_entries": 662, "num_filter_entries": 662, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400954, "oldest_key_time": 1764400954, "file_creation_time": 1764401101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 19776 microseconds, and 7221 cpu microseconds.
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.908452) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 2122207 bytes OK
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.908477) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.910849) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 07:25:01 compute-0 sudo[159694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvqjnnyjhojbrdbyzsoifbgfnxhwlkqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401101.3938107-283-118329927129897/AnsiballZ_systemd.py'
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.910878) EVENT_LOG_v1 {"time_micros": 1764401101910869, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.910906) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2157109, prev total WAL file size 2157109, number of live WAL files 2.
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.912188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(2072KB)], [23(6629KB)]
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401101912258, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8910329, "oldest_snapshot_seqno": -1}
Nov 29 07:25:01 compute-0 sudo[159694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3392 keys, 7121040 bytes, temperature: kUnknown
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401101969631, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7121040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7095261, "index_size": 16198, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 82681, "raw_average_key_size": 24, "raw_value_size": 7030776, "raw_average_value_size": 2072, "num_data_blocks": 708, "num_entries": 3392, "num_filter_entries": 3392, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764401101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.970068) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7121040 bytes
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.972506) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.8 rd, 123.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(7.6) write-amplify(3.4) OK, records in: 3906, records dropped: 514 output_compression: NoCompression
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.972528) EVENT_LOG_v1 {"time_micros": 1764401101972517, "job": 8, "event": "compaction_finished", "compaction_time_micros": 57543, "compaction_time_cpu_micros": 17663, "output_level": 6, "num_output_files": 1, "total_output_size": 7121040, "num_input_records": 3906, "num_output_records": 3392, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401101973274, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401101974872, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.912125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.975117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.975313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.975315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.975317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:25:01 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:25:01.975320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:25:02 compute-0 python3.9[159696]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:25:02 compute-0 systemd[1]: Reloading.
Nov 29 07:25:02 compute-0 systemd-rc-local-generator[159724]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:02 compute-0 systemd-sysv-generator[159727]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:25:02 compute-0 sudo[159694]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:02 compute-0 sshd-session[159625]: Invalid user admin from 143.14.121.41 port 41930
Nov 29 07:25:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:02 compute-0 ceph-mon[75050]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:02 compute-0 sshd-session[159625]: Connection closed by invalid user admin 143.14.121.41 port 41930 [preauth]
Nov 29 07:25:03 compute-0 sudo[159883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpmvugtueoewdcoxbtoljncfqbnmvsgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401102.8390899-291-132199916237047/AnsiballZ_stat.py'
Nov 29 07:25:03 compute-0 sudo[159883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:03 compute-0 python3.9[159885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:25:03 compute-0 sudo[159883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:03 compute-0 sudo[159963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgvmhngpikbpcdasygpgkdilgbcqehpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401102.8390899-291-132199916237047/AnsiballZ_file.py'
Nov 29 07:25:03 compute-0 sudo[159963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:03 compute-0 python3.9[159965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:03 compute-0 sudo[159963]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:04 compute-0 sudo[160115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cltdsdfxeznfaexhjlravvfsmzfojbgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401103.9461038-303-78157942193175/AnsiballZ_stat.py'
Nov 29 07:25:04 compute-0 sudo[160115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:04 compute-0 python3.9[160117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:25:04 compute-0 sudo[160115]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:04 compute-0 sudo[160193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tistyjprsqnqncfrcvovyrpsybetlvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401103.9461038-303-78157942193175/AnsiballZ_file.py'
Nov 29 07:25:04 compute-0 sudo[160193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:04 compute-0 python3.9[160195]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:04 compute-0 sudo[160193]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:04 compute-0 ceph-mon[75050]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:05 compute-0 sudo[160346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drsbtsynnpaizjrvwxccyphmjrnqgocp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401105.071272-315-219327723562693/AnsiballZ_systemd.py'
Nov 29 07:25:05 compute-0 sudo[160346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:25:05
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', 'backups']
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:25:05 compute-0 python3.9[160348]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:25:05 compute-0 systemd[1]: Reloading.
Nov 29 07:25:05 compute-0 sshd-session[159886]: Connection closed by authenticating user root 143.14.121.41 port 51928 [preauth]
Nov 29 07:25:05 compute-0 systemd-sysv-generator[160380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:25:05 compute-0 systemd-rc-local-generator[160376]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:06 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:25:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:25:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:25:06 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:25:06 compute-0 sudo[160346]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:25:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:25:06 compute-0 sudo[160541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbimqkutkcbryrrufaocfkvrutwxtdhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401106.6161842-325-105104652770217/AnsiballZ_file.py'
Nov 29 07:25:06 compute-0 sudo[160541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:07 compute-0 python3.9[160543]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:25:07 compute-0 sudo[160541]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:07 compute-0 ceph-mon[75050]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:07 compute-0 sshd-session[160330]: Connection closed by authenticating user rpc 143.14.121.41 port 51938 [preauth]
Nov 29 07:25:07 compute-0 sudo[160694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nclesuzornulcokjpbefuulpucejfvvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401107.339667-333-84659891225844/AnsiballZ_stat.py'
Nov 29 07:25:07 compute-0 sudo[160694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:07 compute-0 python3.9[160696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:25:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:07 compute-0 sudo[160694]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:08 compute-0 sudo[160818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elarmfidpopjvlffepfhgzqdgijcokfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401107.339667-333-84659891225844/AnsiballZ_copy.py'
Nov 29 07:25:08 compute-0 sudo[160818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:08 compute-0 python3.9[160820]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401107.339667-333-84659891225844/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:25:08 compute-0 sudo[160818]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:09 compute-0 sudo[160970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhofkdzdnestncvknryaouuegijmjvvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401108.7735353-350-270217767280050/AnsiballZ_file.py'
Nov 29 07:25:09 compute-0 sudo[160970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:09 compute-0 python3.9[160972]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:25:09 compute-0 sudo[160970]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:09 compute-0 ceph-mon[75050]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:09 compute-0 sudo[161122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtoyjmbfsptswoqrljxvgznlfazslsud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401109.4562073-358-84323888079908/AnsiballZ_stat.py'
Nov 29 07:25:09 compute-0 sudo[161122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:09 compute-0 python3.9[161124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:25:09 compute-0 sudo[161122]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:10 compute-0 sudo[161245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aofpzgxguccmaqdfhtgspfpkjrflnhzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401109.4562073-358-84323888079908/AnsiballZ_copy.py'
Nov 29 07:25:10 compute-0 sudo[161245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:10 compute-0 ceph-mon[75050]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:10 compute-0 python3.9[161247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401109.4562073-358-84323888079908/.source.json _original_basename=.kl94p1d8 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:10 compute-0 sudo[161245]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:10 compute-0 sshd-session[160690]: Connection closed by authenticating user root 143.14.121.41 port 51942 [preauth]
Nov 29 07:25:11 compute-0 sudo[161398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foopvzoezmldqviaxhvucfoekfzbznwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401110.8271725-373-194524411331659/AnsiballZ_file.py'
Nov 29 07:25:11 compute-0 sudo[161398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:11 compute-0 python3.9[161400]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:11 compute-0 sudo[161398]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:12 compute-0 sudo[161551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-damqasuzzevikrlhasiqknmjdrmgemsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401111.8136942-381-200493421341038/AnsiballZ_stat.py'
Nov 29 07:25:12 compute-0 sudo[161551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:12 compute-0 sudo[161551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:12 compute-0 sudo[161674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qupqxntrpnlbhtbfkymldqhbjgfpjtfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401111.8136942-381-200493421341038/AnsiballZ_copy.py'
Nov 29 07:25:12 compute-0 sudo[161674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:12 compute-0 sshd-session[161347]: Connection closed by authenticating user root 143.14.121.41 port 51944 [preauth]
Nov 29 07:25:13 compute-0 sudo[161674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:13 compute-0 ceph-mon[75050]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:13 compute-0 sudo[161828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjuvvlcxdoxafnzmjuazlqogsnkjuhxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401113.3815217-398-58661012705874/AnsiballZ_container_config_data.py'
Nov 29 07:25:13 compute-0 sudo[161828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:14 compute-0 python3.9[161830]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 07:25:14 compute-0 sudo[161828]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:25:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:25:14 compute-0 sudo[161980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujwpyvqaenjjbfkkhnykvqpuecqzacqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401114.2891335-407-196431859863463/AnsiballZ_container_config_hash.py'
Nov 29 07:25:14 compute-0 sudo[161980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:14 compute-0 sshd-session[161685]: Connection closed by authenticating user root 143.14.121.41 port 53722 [preauth]
Nov 29 07:25:14 compute-0 python3.9[161982]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:25:14 compute-0 sudo[161980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:15 compute-0 ceph-mon[75050]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:15 compute-0 podman[162085]: 2025-11-29 07:25:15.783656927 +0000 UTC m=+0.146308930 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 07:25:15 compute-0 sudo[162157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdiatklyjhybhkosfezcaluyxmjrydx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401115.2297397-416-68551762685280/AnsiballZ_podman_container_info.py'
Nov 29 07:25:15 compute-0 sudo[162157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:16 compute-0 python3.9[162162]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:25:16 compute-0 sudo[162157]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:16 compute-0 ceph-mon[75050]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:17 compute-0 sudo[162339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqifurrtowfeohmorjgewfpndiydqea ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401116.9611762-429-180445569708525/AnsiballZ_edpm_container_manage.py'
Nov 29 07:25:17 compute-0 sudo[162339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:17 compute-0 python3[162341]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:25:18 compute-0 sshd-session[162006]: Invalid user user1 from 143.14.121.41 port 53736
Nov 29 07:25:18 compute-0 sshd-session[162006]: Connection closed by invalid user user1 143.14.121.41 port 53736 [preauth]
Nov 29 07:25:19 compute-0 ceph-mon[75050]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:20 compute-0 ceph-mon[75050]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:21 compute-0 sshd-session[162387]: Connection closed by authenticating user root 143.14.121.41 port 53748 [preauth]
Nov 29 07:25:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:23 compute-0 ceph-mon[75050]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:23 compute-0 sshd-session[162416]: Connection closed by authenticating user root 143.14.121.41 port 53760 [preauth]
Nov 29 07:25:25 compute-0 ceph-mon[75050]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:26 compute-0 sshd-session[162424]: Connection closed by authenticating user nobody 143.14.121.41 port 50752 [preauth]
Nov 29 07:25:27 compute-0 sshd-session[162457]: Invalid user kali from 143.14.121.41 port 50758
Nov 29 07:25:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:27 compute-0 sshd-session[162457]: Connection closed by invalid user kali 143.14.121.41 port 50758 [preauth]
Nov 29 07:25:28 compute-0 ceph-mon[75050]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:29 compute-0 podman[162353]: 2025-11-29 07:25:29.380623412 +0000 UTC m=+11.456459423 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:25:29 compute-0 podman[162489]: 2025-11-29 07:25:29.544018368 +0000 UTC m=+0.054479497 container create 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:25:29 compute-0 podman[162489]: 2025-11-29 07:25:29.512499696 +0000 UTC m=+0.022960865 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:25:29 compute-0 python3[162341]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
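[Annotation] The PODMAN-CONTAINER-DEBUG line shows the module translating its config_data dict into `podman create` flags. A rough sketch of that translation, covering only the keys visible in the logged command (an approximation, not the module's full mapping):

```python
def podman_create_args(name, cfg):
    """Build a `podman create` argv from an edpm-style config_data dict.

    Handles only the keys that appear in the logged command above.
    """
    args = ["podman", "create", "--name", name]
    if cfg.get("cgroupns"):
        args.append("--cgroupns=%s" % cfg["cgroupns"])
    for key, val in cfg.get("environment", {}).items():
        args += ["--env", "%s=%s" % (key, val)]
    if "healthcheck" in cfg:
        args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
    if cfg.get("net"):
        args += ["--network", cfg["net"]]
    if cfg.get("pid"):
        args += ["--pid", cfg["pid"]]
    if cfg.get("privileged"):
        args.append("--privileged=True")
    if cfg.get("user"):
        args += ["--user", cfg["user"]]
    for vol in cfg.get("volumes", []):
        args += ["--volume", vol]
    args.append(cfg["image"])
    return args
```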
Nov 29 07:25:29 compute-0 sudo[162339]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:29 compute-0 ceph-mon[75050]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:29 compute-0 sshd-session[162463]: Invalid user linaro from 143.14.121.41 port 50766
Nov 29 07:25:30 compute-0 sshd-session[162463]: Connection closed by invalid user linaro 143.14.121.41 port 50766 [preauth]
Nov 29 07:25:30 compute-0 sudo[162677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykuxrjjlxzfjexojlpuzrfzuwhislxlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401129.9035478-437-155734987720584/AnsiballZ_stat.py'
Nov 29 07:25:30 compute-0 sudo[162677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:30 compute-0 python3.9[162679]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
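[Annotation] ansible.builtin.stat with get_checksum=True and checksum_algorithm=sha1 returns, among other fields, a SHA-1 of the file. A roughly equivalent standalone check (a sketch of the observable behavior, not Ansible's implementation):

```python
import hashlib
import os

def stat_with_checksum(path):
    """Approximate the subset of ansible.builtin.stat output used here."""
    if not os.path.exists(path):
        return None
    st = os.stat(path)
    sha1 = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            sha1.update(chunk)
    return {"mode": oct(st.st_mode & 0o7777),
            "size": st.st_size,
            "checksum": sha1.hexdigest()}

print(stat_with_checksum("/etc/sysconfig/podman_drop_in"))
```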
Nov 29 07:25:30 compute-0 sudo[162677]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:30 compute-0 sudo[162834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anzvahiczdcqpdnzjbtxlggwysvltfuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401130.6192036-446-255751531780919/AnsiballZ_file.py'
Nov 29 07:25:30 compute-0 sudo[162834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:31 compute-0 ceph-mon[75050]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:31 compute-0 python3.9[162836]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:31 compute-0 sudo[162834]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:31 compute-0 sudo[162910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcyqdenlabcvcjahnppnxnvopijlidqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401130.6192036-446-255751531780919/AnsiballZ_stat.py'
Nov 29 07:25:31 compute-0 sudo[162910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:31 compute-0 python3.9[162912]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:25:31 compute-0 sudo[162910]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:31 compute-0 sshd-session[162690]: Connection closed by authenticating user root 143.14.121.41 port 50776 [preauth]
Nov 29 07:25:32 compute-0 sudo[163061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfhcvztzkisqbdjqdpsoawwsuidrzqvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401131.6219363-446-71460306835339/AnsiballZ_copy.py'
Nov 29 07:25:32 compute-0 sudo[163061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:32 compute-0 python3.9[163063]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401131.6219363-446-71460306835339/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:25:32 compute-0 sudo[163061]: pam_unix(sudo:session): session closed for user root
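[Annotation] The ansible-copy task just above installs the rendered unit file as /etc/systemd/system/edpm_ovn_metadata_agent.service with mode 0644, owner root, group root. A minimal equivalent, assuming a locally rendered source file (the source path below is hypothetical, and Ansible actually writes via a temp file plus atomic rename, which this sketch skips):

```python
import os
import shutil

def install_unit(src, dest, mode=0o644):
    """Copy a rendered systemd unit into place with the logged mode/owner."""
    shutil.copyfile(src, dest)
    os.chmod(dest, mode)
    os.chown(dest, 0, 0)  # root:root, matching the task parameters above

install_unit("/tmp/edpm_ovn_metadata_agent.service.rendered",  # hypothetical path
             "/etc/systemd/system/edpm_ovn_metadata_agent.service")
```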
Nov 29 07:25:32 compute-0 sudo[163139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbqllhyiasmlhpbhubadqffqvbmpaahv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401131.6219363-446-71460306835339/AnsiballZ_systemd.py'
Nov 29 07:25:32 compute-0 sudo[163139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:33 compute-0 sudo[163142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:33 compute-0 sudo[163142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:33 compute-0 sudo[163142]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:33 compute-0 sudo[163167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:25:33 compute-0 sudo[163167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:33 compute-0 sudo[163167]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:33 compute-0 sudo[163192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:33 compute-0 sudo[163192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:33 compute-0 sudo[163192]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:33 compute-0 sudo[163217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:25:33 compute-0 sudo[163217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
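[Annotation] cephadm runs here as a plain Python script under sudo; its `ls` subcommand emits a JSON array describing the daemons it finds on the host. A sketch of driving the same call from Python (script path and --timeout value copied from the log; the field names read from the JSON are assumptions about typical `cephadm ls` output):

```python
import json
import subprocess

CEPHADM = ("/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

# `cephadm ls` prints a JSON list of daemons deployed on this host.
out = subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "ls"],
    check=True, capture_output=True, text=True,
).stdout
for daemon in json.loads(out):
    print(daemon.get("name"), daemon.get("state", ""))
```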
Nov 29 07:25:34 compute-0 sshd-session[163064]: Connection closed by authenticating user root 143.14.121.41 port 39336 [preauth]
Nov 29 07:25:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:25:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:25:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:25:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:25:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:25:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:25:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:35 compute-0 sshd-session[163328]: Connection closed by authenticating user root 143.14.121.41 port 39374 [preauth]
Nov 29 07:25:36 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:25:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:37 compute-0 sshd-session[163331]: Connection closed by authenticating user root 143.14.121.41 port 39406 [preauth]
Nov 29 07:25:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:40 compute-0 sshd-session[163373]: Connection closed by authenticating user root 143.14.121.41 port 39416 [preauth]
Nov 29 07:25:40 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:25:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:42 compute-0 sshd-session[163376]: Connection closed by authenticating user root 143.14.121.41 port 39444 [preauth]
Nov 29 07:25:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:44 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:25:44 compute-0 sshd-session[163378]: Invalid user admin from 143.14.121.41 port 33632
Nov 29 07:25:45 compute-0 sshd-session[163378]: Connection closed by invalid user admin 143.14.121.41 port 33632 [preauth]
Nov 29 07:25:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:46 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf MDS connection to Monitors appears to be laggy; 17.2755s since last acked beacon
Nov 29 07:25:46 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
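[Annotation] The MDS declares its monitor connection laggy once the time since the last acked beacon exceeds the beacon grace period (mds_beacon_grace, 15 seconds by default if memory serves; the log shows the flag tripping at 17.2755s) and skips upkeep work until an ack arrives. A toy version of that check, with the threshold value explicitly an assumption:

```python
import time

MDS_BEACON_GRACE = 15.0  # assumed default for mds_beacon_grace, in seconds

def connection_laggy(last_acked, now=None):
    """Mirror the 'appears to be laggy' decision seen in the log."""
    if now is None:
        now = time.monotonic()
    return (now - last_acked) > MDS_BEACON_GRACE

# With 17.2755s since the last ack, as logged above, the check trips:
print(connection_laggy(last_acked=0.0, now=17.2755))  # True
```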
Nov 29 07:25:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:48 compute-0 python3.9[163141]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:25:48 compute-0 systemd[1]: Reloading.
Nov 29 07:25:48 compute-0 systemd-rc-local-generator[163419]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:48 compute-0 systemd-sysv-generator[163424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:25:48 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:25:48 compute-0 sshd-session[163380]: Connection closed by authenticating user root 143.14.121.41 port 33646 [preauth]
Nov 29 07:25:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:50 compute-0 sshd-session[163428]: Connection closed by authenticating user root 143.14.121.41 port 33650 [preauth]
Nov 29 07:25:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 23.0515 seconds
Nov 29 07:25:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:50 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf  MDS is no longer laggy
Nov 29 07:25:51 compute-0 sudo[163139]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:52 compute-0 sshd-session[163430]: Connection closed by authenticating user root 143.14.121.41 port 33662 [preauth]
Nov 29 07:25:53 compute-0 sudo[163507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-angfmiygkaofmjabxlstthekdxgorsvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401131.6219363-446-71460306835339/AnsiballZ_systemd.py'
Nov 29 07:25:53 compute-0 sudo[163507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:25:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:54 compute-0 sshd-session[163455]: Connection closed by authenticating user root 143.14.121.41 port 54116 [preauth]
Nov 29 07:25:54 compute-0 python3.9[163509]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:25:55 compute-0 systemd[1]: Reloading.
Nov 29 07:25:55 compute-0 systemd-rc-local-generator[163539]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:55 compute-0 systemd-sysv-generator[163544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
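[Annotation] The ansible-systemd calls above (first daemon_reload=True, then state=restarted with enabled=True) map onto three systemctl invocations; the "Reloading." lines and the rc-local/sysv generator messages are systemd re-running its unit generators during each reload. An equivalent sequence via subprocess (a sketch of what the module effects, not its code):

```python
import subprocess

def restart_and_enable(unit):
    """Reload unit files, enable the service, then restart it."""
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "restart", unit], check=True)

restart_and_enable("edpm_ovn_metadata_agent.service")
```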
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 ceph-mon[75050]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:55 compute-0 podman[163314]: 2025-11-29 07:25:55.584699526 +0000 UTC m=+21.199805019 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:25:55 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 07:25:55 compute-0 sshd-session[163511]: Invalid user admin from 143.14.121.41 port 54122
Nov 29 07:25:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:56 compute-0 sshd-session[163511]: Connection closed by invalid user admin 143.14.121.41 port 54122 [preauth]
Nov 29 07:25:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:25:57 compute-0 ceph-mon[75050]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:57 compute-0 ceph-mon[75050]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:57 compute-0 ceph-mon[75050]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:57 compute-0 podman[163314]: 2025-11-29 07:25:57.508250794 +0000 UTC m=+23.123356307 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:25:57 compute-0 sshd-session[163581]: Invalid user user from 143.14.121.41 port 54128
Nov 29 07:25:57 compute-0 podman[163382]: 2025-11-29 07:25:57.715135951 +0000 UTC m=+11.068538637 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:25:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0ea265aaf7308d2de4c498e01cbd6300233f36116fd8dc5825673ac27ac04e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 07:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0ea265aaf7308d2de4c498e01cbd6300233f36116fd8dc5825673ac27ac04e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:25:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e.
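[Annotation] Podman wires container healthchecks to systemd transient units: the unit started here periodically runs `podman healthcheck run <id>`, which executes the configured test command (/openstack/healthcheck, per the config_data above) inside the container and records the result. Triggering one check by hand from Python (the container ID is copied from the log; exit code 0 means healthy):

```python
import subprocess

CONTAINER = "8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e"

def run_healthcheck(container):
    """Invoke the container's configured healthcheck once; True if healthy."""
    return subprocess.run(
        ["podman", "healthcheck", "run", container]).returncode == 0

print(run_healthcheck(CONTAINER))
```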
Nov 29 07:25:57 compute-0 podman[163570]: 2025-11-29 07:25:57.798649883 +0000 UTC m=+2.002220720 container init 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + sudo -E kolla_set_configs
Nov 29 07:25:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:57 compute-0 podman[163570]: 2025-11-29 07:25:57.824176385 +0000 UTC m=+2.027747242 container start 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:25:57 compute-0 edpm-start-podman-container[163570]: ovn_metadata_agent
Nov 29 07:25:57 compute-0 sshd-session[163581]: Connection closed by invalid user user 143.14.121.41 port 54128 [preauth]
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Validating config file
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Copying service configuration files
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Writing out command to execute
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: ++ cat /run_command
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + CMD=neutron-ovn-metadata-agent
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + ARGS=
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + sudo kolla_copy_cacerts
Nov 29 07:25:57 compute-0 edpm-start-podman-container[163564]: Creating additional drop-in dependency for "ovn_metadata_agent" (8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e)
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + [[ ! -n '' ]]
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + . kolla_extend_start
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + umask 0022
Nov 29 07:25:57 compute-0 ovn_metadata_agent[163632]: + exec neutron-ovn-metadata-agent
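[Annotation] The trace above follows kolla's COPY_ALWAYS startup strategy: load /var/lib/kolla/config_files/config.json, copy each configured source into place, fix permissions, write the command to /run_command, then exec it. A condensed re-creation of that flow, assuming kolla's usual config.json shape with "command" and "config_files" entries (this is a sketch of the logged behavior, not kolla's code):

```python
import json
import os
import shutil

CONFIG = "/var/lib/kolla/config_files/config.json"

def kolla_start():
    """Approximate the COPY_ALWAYS startup traced in the log above."""
    with open(CONFIG, encoding="utf-8") as fh:
        cfg = json.load(fh)
    for entry in cfg.get("config_files", []):
        dest = entry["dest"]
        if os.path.exists(dest):
            os.unlink(dest)                 # "Deleting /etc/neutron/rootwrap.conf"
        shutil.copy(entry["source"], dest)  # "Copying ... to ..."
        if "perm" in entry:
            os.chmod(dest, int(entry["perm"], 8))  # "Setting permission for ..."
    with open("/run_command", "w", encoding="utf-8") as fh:
        fh.write(cfg["command"])            # "Writing out command to execute"
    cmd = cfg["command"].split()
    os.execvp(cmd[0], cmd)                  # "+ exec neutron-ovn-metadata-agent"
```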
Nov 29 07:25:57 compute-0 podman[163661]: 2025-11-29 07:25:57.911772396 +0000 UTC m=+0.077915423 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Nov 29 07:25:57 compute-0 systemd[1]: Reloading.
Nov 29 07:25:57 compute-0 systemd-rc-local-generator[163753]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:25:57 compute-0 systemd-sysv-generator[163756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:25:58 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 29 07:25:58 compute-0 sudo[163507]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:58 compute-0 sudo[163217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:25:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:25:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:25:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:25:58 compute-0 sudo[163840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:58 compute-0 sudo[163840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:58 compute-0 sudo[163840]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:58 compute-0 sshd-session[154347]: Connection closed by 192.168.122.30 port 49624
Nov 29 07:25:58 compute-0 sudo[163866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:25:58 compute-0 sudo[163866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:58 compute-0 sudo[163866]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:58 compute-0 sshd-session[154334]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:25:58 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 29 07:25:58 compute-0 systemd[1]: session-47.scope: Consumed 57.909s CPU time.
Nov 29 07:25:58 compute-0 systemd-logind[807]: Session 47 logged out. Waiting for processes to exit.
Nov 29 07:25:58 compute-0 systemd-logind[807]: Removed session 47.
Nov 29 07:25:58 compute-0 sudo[163891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:58 compute-0 sudo[163891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:58 compute-0 sudo[163891]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:58 compute-0 sudo[163916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:25:58 compute-0 sudo[163916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:59 compute-0 sudo[163916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:25:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:25:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:25:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:25:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a03db7ee-1e21-4160-8d77-1048f89e2842 does not exist
Nov 29 07:25:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 84cfa5d7-1c39-434d-87b5-be24652f99e3 does not exist
Nov 29 07:25:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 62abf399-e7f5-4f13-8fde-60231c059930 does not exist
Nov 29 07:25:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:25:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:25:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:25:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
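[Annotation] The handle_command/audit pairs above are monitor-side traces of MON commands issued by the cephadm mgr module. The same commands can be sent from Python through librados' mon_command interface, which takes the command as JSON plus an input buffer and returns a (retcode, output buffer, status string) tuple:

```python
import json

import rados  # python3-rados bindings shipped with Ceph

# Assumes a local ceph.conf and an admin keyring are available.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
cluster.connect()
try:
    for cmd in ({"prefix": "config generate-minimal-conf"},
                {"prefix": "auth get", "entity": "client.admin"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, out.decode() or errs)
finally:
    cluster.shutdown()
```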
Nov 29 07:25:59 compute-0 sudo[163973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:59 compute-0 sudo[163973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:59 compute-0 sudo[163973]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:59 compute-0 sudo[163998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:25:59 compute-0 sudo[163998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:59 compute-0 sudo[163998]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:59 compute-0 sudo[164023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:25:59 compute-0 sudo[164023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:25:59 compute-0 sudo[164023]: pam_unix(sudo:session): session closed for user root
Nov 29 07:25:59 compute-0 ceph-mon[75050]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:25:59 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:25:59 compute-0 sudo[164048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:25:59 compute-0 sudo[164048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
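[Annotation] Here cephadm wraps `ceph-volume lvm batch` over three pre-created logical volumes: --no-auto takes the device list as given, and --no-systemd leaves daemon lifecycle to cephadm rather than ceph-volume's generated units. Reproducing the invocation from Python, with the config fed on stdin as `--config-json -` expects (the actual JSON content is not shown in the log, so the placeholder below is hypothetical):

```python
import subprocess

CEPHADM = ("/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"

config_json = '{"config": "", "keyring": ""}'  # placeholder; real content not logged

subprocess.run(
    ["sudo", "/bin/python3", CEPHADM,
     "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
     "--image", IMAGE, "--timeout", "895",
     "ceph-volume", "--fsid", FSID, "--config-json", "-", "--",
     "lvm", "batch", "--no-auto",
     "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
     "--yes", "--no-systemd"],
    input=config_json, text=True, check=True)
```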
Nov 29 07:25:59 compute-0 sshd-session[163762]: Connection closed by authenticating user root 143.14.121.41 port 54142 [preauth]
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.693 163655 INFO neutron.common.config [-] Logging enabled!
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.695 163655 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.695 163655 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.696 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.696 163655 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.696 163655 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.696 163655 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.697 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.697 163655 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.697 163655 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.697 163655 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.697 163655 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.697 163655 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.698 163655 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.698 163655 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.698 163655 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.698 163655 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.698 163655 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.698 163655 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.699 163655 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.699 163655 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.699 163655 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.699 163655 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.699 163655 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.699 163655 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.700 163655 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.700 163655 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.700 163655 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.700 163655 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.700 163655 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.700 163655 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.701 163655 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.701 163655 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.701 163655 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.701 163655 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.701 163655 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.701 163655 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.702 163655 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.702 163655 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.702 163655 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.702 163655 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.702 163655 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.703 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.703 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.703 163655 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.703 163655 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.703 163655 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.704 163655 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.704 163655 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.704 163655 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.704 163655 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.704 163655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.704 163655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.705 163655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.706 163655 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.707 163655 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.708 163655 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.709 163655 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.710 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.711 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.711 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.711 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.711 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.711 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.711 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.711 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.712 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.713 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.714 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.715 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.716 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.717 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.718 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.718 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.718 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.718 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.718 163655 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.718 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.718 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.719 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.720 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.721 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.722 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.723 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.724 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.724 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.724 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.724 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.724 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.724 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.724 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.725 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.726 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.727 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.727 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.727 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.727 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.727 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.727 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.728 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.728 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.728 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.728 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.728 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.728 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.728 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.729 163655 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.730 163655 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.730 163655 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.730 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.730 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.730 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.730 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.730 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.731 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.732 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.733 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.734 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.734 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.734 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.734 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.734 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.734 163655 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.734 163655 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
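The dump that ends above is oslo.config's standard startup behavior: with debug = True, the service calls ConfigOpts.log_opt_values(), which prints the asterisk banner, one line per registered option, and "****" for any option declared with secret=True (hence transport_url above). A minimal sketch with illustrative option names, not the agent's real option set:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    cfg.CONF.register_opts(
        [cfg.StrOpt('ovsdb_connection', default='tcp:127.0.0.1:6640'),
         cfg.IntOpt('ovsdb_connection_timeout', default=180)],
        group='ovs')
    # secret=True is what renders the value as "****" in the dump.
    cfg.CONF.register_opts([cfg.StrOpt('transport_url', secret=True)])

    cfg.CONF([])                                 # parse an empty command line
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)  # banner + one line per option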
Nov 29 07:25:59 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.747 163655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.748 163655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.748 163655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.748 163655 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.748 163655 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
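The connecting/connected pair above is ovsdbapp opening the local Open vSwitch database at tcp:127.0.0.1:6640, i.e. the ovs.ovsdb_connection option from the dump. A sketch of the same connection through ovsdbapp's documented API (the br-int lookup is illustrative):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)  # ovs.ovsdb_timeout = 10
    api = impl_idl.OvsdbIdl(conn)
    print(api.br_exists('br-int').execute(check_error=True))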
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.762 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name df234f2c-4343-4c91-861d-13d184c56aa0 (UUID: df234f2c-4343-4c91-861d-13d184c56aa0) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.784 163655 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.785 163655 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.785 163655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.785 163655 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.788 163655 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.793 163655 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.798 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'df234f2c-4343-4c91-861d-13d184c56aa0'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], external_ids={}, name=df234f2c-4343-4c91-861d-13d184c56aa0, nb_cfg_timestamp=1764401062469, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
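The "Matched CREATE: ChassisPrivateCreateEvent" line is ovsdbapp's row-event machinery firing on the agent's own Chassis_Private row. A sketch of how such an event is declared; the handler body is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            super().__init__((self.ROW_CREATE,),             # events
                             'Chassis_Private',              # table
                             (('name', '=', chassis_name),)) # conditions

        def run(self, event, row, old):
            # Illustrative handler; the real agent re-syncs its state here.
            print('chassis %s created, nb_cfg=%s' % (row.name, row.nb_cfg))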
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.799 163655 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff72e65afa0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
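The Subscribe line above uses neutron_lib's callback registry: post_fork_initialize is bound to the (process, after_init) event, which each forked worker publishes as soon as it starts (the "Publish callbacks" line from pid 164115 below). A sketch with the registry API and an illustrative handler:

    from neutron_lib.callbacks import events, registry, resources

    def post_fork_initialize(resource, event, trigger, payload=None):
        # Illustrative: the worker opens its own OVN SB connection here.
        print('initialized after %s/%s' % (resource, event))

    registry.subscribe(post_fork_initialize, resources.PROCESS, events.AFTER_INIT)
    # What each forked worker then does:
    registry.publish(resources.PROCESS, events.AFTER_INIT, None)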
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.800 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.800 163655 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.800 163655 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.800 163655 INFO oslo_service.service [-] Starting 1 workers
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.806 163655 DEBUG oslo_service.service [-] Started child 164115 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
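"Starting 1 workers" / "Started child 164115" come from oslo.service's ProcessLauncher, which forks metadata_workers = 1 child processes (the option appears in the dump below). A minimal sketch; MetadataService is a placeholder name, not the agent's class:

    from oslo_config import cfg
    from oslo_service import service

    class MetadataService(service.Service):
        def start(self):
            super().start()
            # Illustrative: bring up the UNIX-socket WSGI proxy here.

    cfg.CONF([])
    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(MetadataService(), workers=1)  # "Starting 1 workers"
    launcher.wait()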
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.810 163655 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpthalsdm8/privsep.sock']
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.810 164115 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-955858'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 07:25:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.834 164115 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.835 164115 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.835 164115 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.838 164115 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.844 164115 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 07:25:59 compute-0 podman[164113]: 2025-11-29 07:25:59.84827497 +0000 UTC m=+0.040552005 container create dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:25:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:25:59.850 164115 INFO eventlet.wsgi.server [-] (164115) wsgi starting up on http:/var/lib/neutron/metadata_proxy
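The wsgi startup line above comes from eventlet serving on a UNIX socket: eventlet logs "wsgi starting up on http:" plus the socket address, so a filesystem path renders as the odd-looking http:/var/lib/neutron/metadata_proxy (metadata_proxy_socket in the dump below). A sketch with a placeholder WSGI app:

    import socket
    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        # Placeholder; the real handler proxies requests to nova-metadata
        # with signed instance-identity headers.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata proxy placeholder\n']

    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX)
    wsgi.server(sock, app)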
Nov 29 07:25:59 compute-0 systemd[1]: Started libpod-conmon-dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc.scope.
Nov 29 07:25:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:25:59 compute-0 podman[164113]: 2025-11-29 07:25:59.829605501 +0000 UTC m=+0.021882576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:25:59 compute-0 podman[164113]: 2025-11-29 07:25:59.935031608 +0000 UTC m=+0.127308653 container init dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:25:59 compute-0 podman[164113]: 2025-11-29 07:25:59.941525972 +0000 UTC m=+0.133803027 container start dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:25:59 compute-0 podman[164113]: 2025-11-29 07:25:59.944913632 +0000 UTC m=+0.137190677 container attach dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:25:59 compute-0 busy_gould[164133]: 167 167
Nov 29 07:25:59 compute-0 systemd[1]: libpod-dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc.scope: Deactivated successfully.
Nov 29 07:25:59 compute-0 podman[164113]: 2025-11-29 07:25:59.947740858 +0000 UTC m=+0.140017903 container died dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-eadea63fd3f2f67472df6a4a9c9731274476c3b168f5278fe3a57a3206442e9f-merged.mount: Deactivated successfully.
Nov 29 07:25:59 compute-0 podman[164113]: 2025-11-29 07:25:59.997522228 +0000 UTC m=+0.189799273 container remove dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_gould, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:26:00 compute-0 systemd[1]: libpod-conmon-dcaee50e902e3da29212d60e4eb1bbaf07657dfbd0effe980afbf7a31d8b2dbc.scope: Deactivated successfully.
Nov 29 07:26:00 compute-0 podman[164159]: 2025-11-29 07:26:00.203114252 +0000 UTC m=+0.064240538 container create 85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:26:00 compute-0 podman[164159]: 2025-11-29 07:26:00.16974876 +0000 UTC m=+0.030875046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:26:00 compute-0 systemd[1]: Started libpod-conmon-85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2.scope.
Nov 29 07:26:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1cd0354869748f0ffc5ba1bd31a74e4eb857b262519776f5bfeee3bddd4f0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1cd0354869748f0ffc5ba1bd31a74e4eb857b262519776f5bfeee3bddd4f0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1cd0354869748f0ffc5ba1bd31a74e4eb857b262519776f5bfeee3bddd4f0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1cd0354869748f0ffc5ba1bd31a74e4eb857b262519776f5bfeee3bddd4f0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1cd0354869748f0ffc5ba1bd31a74e4eb857b262519776f5bfeee3bddd4f0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:00 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 07:26:00 compute-0 podman[164159]: 2025-11-29 07:26:00.525015183 +0000 UTC m=+0.386141489 container init 85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:00 compute-0 podman[164159]: 2025-11-29 07:26:00.534188618 +0000 UTC m=+0.395314884 container start 85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:00 compute-0 podman[164159]: 2025-11-29 07:26:00.537495606 +0000 UTC m=+0.398621902 container attach 85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:26:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:00.546 163655 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:26:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:00.547 163655 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpthalsdm8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:26:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:00.389 164178 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:26:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:00.394 164178 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:26:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:00.397 164178 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 07:26:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:00.397 164178 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164178
Nov 29 07:26:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:00.549 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2d5049-5669-4ce4-a192-d2e9af2a2398]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
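The privsep lines above (note their earlier timestamps, 07:26:00.389-.397: they are relayed from the child once the channel is up) describe the helper spawned via sudo/neutron-rootwrap at 07:25:59.810; it runs as uid/gid 0/0 holding only CAP_SYS_ADMIN. A sketch of how such a context is declared and used; the function is illustrative:

    from oslo_privsep import capabilities, priv_context

    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],  # matches the eff/prm line
    )

    @namespace_cmd.entrypoint
    def create_namespace(name):
        # Illustrative body; executes inside the privileged daemon
        # (pid 164178 above), not in the agent process.
        ...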
Nov 29 07:26:00 compute-0 ceph-mon[75050]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.073 164178 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.073 164178 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.073 164178 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:26:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:01 compute-0 trusting_goodall[164175]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:26:01 compute-0 trusting_goodall[164175]: --> relative data size: 1.0
Nov 29 07:26:01 compute-0 trusting_goodall[164175]: --> All data devices are unavailable
Nov 29 07:26:01 compute-0 systemd[1]: libpod-85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2.scope: Deactivated successfully.
Nov 29 07:26:01 compute-0 systemd[1]: libpod-85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2.scope: Consumed 1.009s CPU time.
Nov 29 07:26:01 compute-0 podman[164209]: 2025-11-29 07:26:01.638457185 +0000 UTC m=+0.024168648 container died 85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca1cd0354869748f0ffc5ba1bd31a74e4eb857b262519776f5bfeee3bddd4f0c-merged.mount: Deactivated successfully.
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.686 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[5849f1e1-0907-4867-95ed-e52f622a63e4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.689 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, column=external_ids, values=({'neutron:ovn-metadata-id': '09f32b0e-46b0-5e31-be28-4f8ce3309087'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:26:01 compute-0 podman[164209]: 2025-11-29 07:26:01.696538887 +0000 UTC m=+0.082250340 container remove 85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goodall, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:26:01 compute-0 systemd[1]: libpod-conmon-85f4fa9093c3adc7327099218820c541ffb8ee2b8695452db2542e1dd9c3f2d2.scope: Deactivated successfully.
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.705 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
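The two "Running txn" lines show the agent registering itself in the OVN southbound DB: a DbAddCommand stores its metadata-agent UUID under the chassis row's external_ids, then a DbSetCommand records the OVN bridge. A sketch of equivalent calls through ovsdbapp's generic db_add/db_set API, assuming the SSL endpoint's client certificates are already configured (omitted here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    sb_api = impl_idl.OvnSbApiIdlImpl(
        connection.Connection(idl=idl, timeout=180))

    chassis = 'df234f2c-4343-4c91-861d-13d184c56aa0'
    sb_api.db_add(
        'Chassis_Private', chassis, 'external_ids',
        {'neutron:ovn-metadata-id': '09f32b0e-46b0-5e31-be28-4f8ce3309087'},
    ).execute(check_error=True)
    sb_api.db_set(
        'Chassis_Private', chassis,
        ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
        if_exists=True,  # per the DbSetCommand logged above
    ).execute(check_error=True)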
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.714 163655 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.714 163655 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.714 163655 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.714 163655 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.714 163655 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.714 163655 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.715 163655 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.715 163655 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.715 163655 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.715 163655 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.715 163655 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.716 163655 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.716 163655 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.716 163655 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.716 163655 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.716 163655 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.716 163655 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.717 163655 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.717 163655 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.717 163655 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.717 163655 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.717 163655 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.717 163655 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.717 163655 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.718 163655 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.718 163655 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.718 163655 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.718 163655 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.718 163655 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.719 163655 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.719 163655 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.719 163655 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.719 163655 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.720 163655 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.720 163655 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.720 163655 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.720 163655 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.720 163655 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.720 163655 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.721 163655 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.721 163655 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.721 163655 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.721 163655 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.721 163655 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.721 163655 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.721 163655 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.722 163655 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.722 163655 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.722 163655 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.722 163655 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.723 163655 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.723 163655 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.723 163655 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.723 163655 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.724 163655 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.724 163655 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.724 163655 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.724 163655 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.724 163655 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.724 163655 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.725 163655 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.725 163655 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.725 163655 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.725 163655 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.725 163655 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.725 163655 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.726 163655 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.726 163655 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.726 163655 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.726 163655 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.726 163655 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.726 163655 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.727 163655 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.727 163655 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.727 163655 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.727 163655 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.727 163655 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.727 163655 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 sudo[164048]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.728 163655 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.729 163655 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.729 163655 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.729 163655 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.729 163655 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.729 163655 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.730 163655 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.730 163655 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.730 163655 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.730 163655 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.730 163655 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.730 163655 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.730 163655 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.731 163655 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.731 163655 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.731 163655 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.731 163655 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.731 163655 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.731 163655 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.732 163655 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.732 163655 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.732 163655 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.732 163655 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.732 163655 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.733 163655 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.733 163655 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.733 163655 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.733 163655 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.733 163655 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.734 163655 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.734 163655 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.734 163655 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.734 163655 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.734 163655 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.735 163655 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.735 163655 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.735 163655 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.735 163655 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.735 163655 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.736 163655 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.736 163655 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.736 163655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.736 163655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.736 163655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.736 163655 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.737 163655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.737 163655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.737 163655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.737 163655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.737 163655 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.739 163655 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.739 163655 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.739 163655 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.739 163655 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.739 163655 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.740 163655 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.740 163655 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.740 163655 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.740 163655 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.740 163655 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.741 163655 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.741 163655 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.741 163655 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.741 163655 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.741 163655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.742 163655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.742 163655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.742 163655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.742 163655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.742 163655 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.743 163655 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.743 163655 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.743 163655 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.743 163655 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.743 163655 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.744 163655 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.744 163655 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.744 163655 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.744 163655 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.744 163655 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.745 163655 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.745 163655 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.745 163655 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.745 163655 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.745 163655 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.746 163655 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.746 163655 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.746 163655 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.746 163655 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.747 163655 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.747 163655 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.747 163655 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.747 163655 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.747 163655 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.748 163655 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.748 163655 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.748 163655 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.748 163655 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.748 163655 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.749 163655 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.749 163655 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.749 163655 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.749 163655 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.749 163655 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.749 163655 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.750 163655 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.750 163655 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.750 163655 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.750 163655 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.750 163655 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.750 163655 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.750 163655 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.751 163655 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.751 163655 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.751 163655 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.751 163655 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.751 163655 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.751 163655 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.751 163655 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.752 163655 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.752 163655 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.752 163655 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.752 163655 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.752 163655 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.752 163655 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.752 163655 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.753 163655 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.753 163655 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.753 163655 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.753 163655 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.753 163655 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.753 163655 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.753 163655 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.754 163655 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.754 163655 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.754 163655 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.754 163655 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.754 163655 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.754 163655 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.755 163655 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.755 163655 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.755 163655 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.755 163655 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.755 163655 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.755 163655 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.755 163655 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.756 163655 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.756 163655 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.756 163655 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.756 163655 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.756 163655 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.756 163655 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.756 163655 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.757 163655 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.757 163655 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.757 163655 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.757 163655 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.757 163655 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.757 163655 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.758 163655 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.758 163655 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.758 163655 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.758 163655 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.758 163655 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.758 163655 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.758 163655 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.759 163655 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.759 163655 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.759 163655 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.759 163655 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.759 163655 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.759 163655 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.759 163655 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.760 163655 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.760 163655 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.760 163655 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.760 163655 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.760 163655 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.760 163655 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.760 163655 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.761 163655 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.761 163655 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.761 163655 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.761 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.761 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.761 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.762 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.762 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.762 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.762 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.762 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.762 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.763 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.763 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.763 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.763 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.763 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.763 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.763 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.764 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.764 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.764 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.764 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.764 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.764 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.764 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.765 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.765 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.765 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.765 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.765 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.765 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.766 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.766 163655 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.766 163655 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.766 163655 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.766 163655 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.766 163655 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:26:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:01.767 163655 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:26:01 compute-0 sudo[164224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:01 compute-0 sudo[164224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:01 compute-0 sudo[164224]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:01 compute-0 sudo[164249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:26:01 compute-0 sudo[164249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:01 compute-0 sudo[164249]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:01 compute-0 sudo[164274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:01 compute-0 sudo[164274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:01 compute-0 sudo[164274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:01 compute-0 sshd-session[164136]: Connection closed by authenticating user root 143.14.121.41 port 54152 [preauth]
Nov 29 07:26:01 compute-0 sudo[164299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:26:01 compute-0 sudo[164299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:02 compute-0 podman[164365]: 2025-11-29 07:26:02.290096127 +0000 UTC m=+0.043243937 container create 6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:26:02 compute-0 systemd[1]: Started libpod-conmon-6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64.scope.
Nov 29 07:26:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:02 compute-0 podman[164365]: 2025-11-29 07:26:02.346175916 +0000 UTC m=+0.099323756 container init 6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:26:02 compute-0 podman[164365]: 2025-11-29 07:26:02.352583456 +0000 UTC m=+0.105731286 container start 6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:02 compute-0 blissful_lalande[164381]: 167 167
Nov 29 07:26:02 compute-0 systemd[1]: libpod-6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64.scope: Deactivated successfully.
Nov 29 07:26:02 compute-0 podman[164365]: 2025-11-29 07:26:02.356728607 +0000 UTC m=+0.109876427 container attach 6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:26:02 compute-0 podman[164365]: 2025-11-29 07:26:02.358488664 +0000 UTC m=+0.111636484 container died 6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:02 compute-0 podman[164365]: 2025-11-29 07:26:02.268768377 +0000 UTC m=+0.021916237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a56e8e2020b91670e77a87ce78e200c572b07feff10a3acee048494c95fde748-merged.mount: Deactivated successfully.
Nov 29 07:26:02 compute-0 podman[164365]: 2025-11-29 07:26:02.392717938 +0000 UTC m=+0.145865748 container remove 6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:26:02 compute-0 systemd[1]: libpod-conmon-6e1bb1ea4c972a40e61f5675f8ecf707417950e7750df750b24cea4fffb2ef64.scope: Deactivated successfully.
Nov 29 07:26:02 compute-0 podman[164406]: 2025-11-29 07:26:02.546802426 +0000 UTC m=+0.036573729 container create b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_newton, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:26:02 compute-0 systemd[1]: Started libpod-conmon-b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b.scope.
Nov 29 07:26:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca2c8c4c57af138b4f07bc6d2a1828bcc6c0526c81de21c51544fdbcb63fe36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca2c8c4c57af138b4f07bc6d2a1828bcc6c0526c81de21c51544fdbcb63fe36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca2c8c4c57af138b4f07bc6d2a1828bcc6c0526c81de21c51544fdbcb63fe36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca2c8c4c57af138b4f07bc6d2a1828bcc6c0526c81de21c51544fdbcb63fe36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:02 compute-0 podman[164406]: 2025-11-29 07:26:02.612783079 +0000 UTC m=+0.102554402 container init b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:26:02 compute-0 podman[164406]: 2025-11-29 07:26:02.62104678 +0000 UTC m=+0.110818083 container start b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_newton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:02 compute-0 podman[164406]: 2025-11-29 07:26:02.624244955 +0000 UTC m=+0.114016258 container attach b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_newton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:26:02 compute-0 podman[164406]: 2025-11-29 07:26:02.530932312 +0000 UTC m=+0.020703645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
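[editor's note] The podman[164406] lines above (image pull, then create, init, start and attach for container b9645655010d) are the normal lifecycle of a short-lived cephadm helper container; the matching died/remove events follow a second later. A minimal sketch that tails the same event stream, assuming a recent podman whose `events --format json` emits one JSON object per line (field names can vary slightly by version):

    import json
    import subprocess

    # Stream container lifecycle events as line-delimited JSON.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Expected statuses: create, init, start, attach, died, remove.
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name"))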
Nov 29 07:26:02 compute-0 ceph-mon[75050]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:03 compute-0 sshd-session[164356]: Invalid user admin from 143.14.121.41 port 53942
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]: {
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:     "0": [
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:         {
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "devices": [
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "/dev/loop3"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             ],
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_name": "ceph_lv0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_size": "21470642176",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "name": "ceph_lv0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "tags": {
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cluster_name": "ceph",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.crush_device_class": "",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.encrypted": "0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osd_id": "0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.type": "block",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.vdo": "0"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             },
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "type": "block",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "vg_name": "ceph_vg0"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:         }
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:     ],
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:     "1": [
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:         {
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "devices": [
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "/dev/loop4"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             ],
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_name": "ceph_lv1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_size": "21470642176",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "name": "ceph_lv1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "tags": {
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cluster_name": "ceph",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.crush_device_class": "",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.encrypted": "0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osd_id": "1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.type": "block",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.vdo": "0"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             },
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "type": "block",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "vg_name": "ceph_vg1"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:         }
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:     ],
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:     "2": [
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:         {
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "devices": [
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "/dev/loop5"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             ],
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_name": "ceph_lv2",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_size": "21470642176",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "name": "ceph_lv2",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "tags": {
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.cluster_name": "ceph",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.crush_device_class": "",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.encrypted": "0",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osd_id": "2",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.type": "block",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:                 "ceph.vdo": "0"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             },
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "type": "block",
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:             "vg_name": "ceph_vg2"
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:         }
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]:     ]
Nov 29 07:26:03 compute-0 flamboyant_newton[164423]: }
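[editor's note] The JSON printed by the flamboyant_newton container is `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it. A small sketch that flattens it into one line per OSD, assuming the output was captured to a hypothetical file lvm_list.json:

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            # One entry per LV backing the OSD (type "block" here).
            print(f"osd.{osd_id}: lv={e['lv_path']}"
                  f" devices={','.join(e['devices'])}"
                  f" osd_fsid={e['tags']['ceph.osd_fsid']}")

On this host that yields osd.0 through osd.2, each a ~20 GiB LV (lv_size 21470642176 bytes) on a loop device.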
Nov 29 07:26:03 compute-0 systemd[1]: libpod-b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b.scope: Deactivated successfully.
Nov 29 07:26:03 compute-0 podman[164406]: 2025-11-29 07:26:03.376138596 +0000 UTC m=+0.865909919 container died b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_newton, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:26:03 compute-0 sshd-session[164356]: Connection closed by invalid user admin 143.14.121.41 port 53942 [preauth]
Nov 29 07:26:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca2c8c4c57af138b4f07bc6d2a1828bcc6c0526c81de21c51544fdbcb63fe36-merged.mount: Deactivated successfully.
Nov 29 07:26:04 compute-0 podman[164406]: 2025-11-29 07:26:04.505590875 +0000 UTC m=+1.995362178 container remove b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_newton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:26:04 compute-0 systemd[1]: libpod-conmon-b9645655010db5a733ac2994b0bf35b1b968074ffb81e4f52341bf9b83d1e56b.scope: Deactivated successfully.
Nov 29 07:26:04 compute-0 sudo[164299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:04 compute-0 sudo[164448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:04 compute-0 sudo[164448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:04 compute-0 sudo[164448]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:04 compute-0 sudo[164473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:26:04 compute-0 sudo[164473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:04 compute-0 sudo[164473]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:04 compute-0 sudo[164498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:04 compute-0 sudo[164498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:04 compute-0 sudo[164498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:04 compute-0 sudo[164523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:26:04 compute-0 sudo[164523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
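[editor's note] The sudo line shows the shape of every one of these probes: cephadm copies itself to /var/lib/ceph/<fsid>/cephadm.<digest>, then runs ceph-volume inside the pinned Ceph image. A hedged reproduction using a system-installed cephadm, same arguments as the log minus the per-run timeout (requires root):

    import json
    import subprocess

    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Everything after "--" is passed through to ceph-volume in the container.
    out = subprocess.check_output(
        ["cephadm", "--image", IMAGE, "ceph-volume",
         "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        text=True,
    )
    print(json.dumps(json.loads(out), indent=4))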
Nov 29 07:26:04 compute-0 sshd-session[164547]: Accepted publickey for zuul from 192.168.122.30 port 53642 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:26:04 compute-0 systemd-logind[807]: New session 48 of user zuul.
Nov 29 07:26:04 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 29 07:26:04 compute-0 sshd-session[164547]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:26:05 compute-0 podman[164641]: 2025-11-29 07:26:05.178569158 +0000 UTC m=+0.107328729 container create bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 07:26:05 compute-0 podman[164641]: 2025-11-29 07:26:05.095533509 +0000 UTC m=+0.024293060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:26:05 compute-0 sshd-session[164443]: Connection closed by authenticating user root 143.14.121.41 port 53956 [preauth]
Nov 29 07:26:05 compute-0 systemd[1]: Started libpod-conmon-bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384.scope.
Nov 29 07:26:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:05 compute-0 podman[164641]: 2025-11-29 07:26:05.310885903 +0000 UTC m=+0.239645454 container init bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:26:05 compute-0 podman[164641]: 2025-11-29 07:26:05.32351481 +0000 UTC m=+0.252274341 container start bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:26:05 compute-0 podman[164641]: 2025-11-29 07:26:05.327630421 +0000 UTC m=+0.256389982 container attach bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:26:05 compute-0 determined_meitner[164658]: 167 167
Nov 29 07:26:05 compute-0 systemd[1]: libpod-bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384.scope: Deactivated successfully.
Nov 29 07:26:05 compute-0 conmon[164658]: conmon bed9a466ec02666d92ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384.scope/container/memory.events
Nov 29 07:26:05 compute-0 podman[164641]: 2025-11-29 07:26:05.332225534 +0000 UTC m=+0.260985075 container died bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:26:05
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'images', 'backups', 'default.rgw.meta', '.rgw.root']
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
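[editor's note] A balancer pass in upmap mode that prepares 0/10 changes means the PG distribution already satisfies the 0.05 max-misplaced constraint. The current state can be confirmed with the standard balancer CLI; a sketch (the key names in the returned JSON are from memory and worth verifying against your Ceph release):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"], text=True))
    print(status.get("active"), status.get("mode"),
          status.get("optimize_result"))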
Nov 29 07:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-26276d08f9917778ace8ee90440752eee8a941f5978da654743e96ecc6f32f41-merged.mount: Deactivated successfully.
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:26:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:05 compute-0 ceph-mon[75050]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:06 compute-0 python3.9[164774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:26:06 compute-0 podman[164641]: 2025-11-29 07:26:06.376212169 +0000 UTC m=+1.304971730 container remove bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_meitner, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:06 compute-0 systemd[1]: libpod-conmon-bed9a466ec02666d92ea9e40cc971dfeb712bc178421e67d29ff04aa0b7a0384.scope: Deactivated successfully.
Nov 29 07:26:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:06 compute-0 podman[164811]: 2025-11-29 07:26:06.561031197 +0000 UTC m=+0.046135263 container create 236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_davinci, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:26:06 compute-0 systemd[1]: Started libpod-conmon-236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17.scope.
Nov 29 07:26:06 compute-0 podman[164811]: 2025-11-29 07:26:06.540665414 +0000 UTC m=+0.025769480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:26:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8c4cdee66455d7c1b34d39ea5ee590343ddf6c5abac599a03a1a5c9bb936a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8c4cdee66455d7c1b34d39ea5ee590343ddf6c5abac599a03a1a5c9bb936a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8c4cdee66455d7c1b34d39ea5ee590343ddf6c5abac599a03a1a5c9bb936a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8c4cdee66455d7c1b34d39ea5ee590343ddf6c5abac599a03a1a5c9bb936a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:26:06 compute-0 podman[164811]: 2025-11-29 07:26:06.677604013 +0000 UTC m=+0.162708099 container init 236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_davinci, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:26:06 compute-0 podman[164811]: 2025-11-29 07:26:06.68611904 +0000 UTC m=+0.171223106 container start 236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:26:06 compute-0 podman[164811]: 2025-11-29 07:26:06.689571592 +0000 UTC m=+0.174675748 container attach 236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:26:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
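[editor's note] The rbd_support handlers above reload per-pool trash-purge and mirror-snapshot schedules for vms, volumes, backups and images (the trailing start_after= is an empty scan cursor, not a schedule). Both handlers have rbd CLI counterparts, so listing them is the natural cross-check; a sketch, with flags beyond the bare `ls` omitted on purpose:

    import subprocess

    for args in (["rbd", "trash", "purge", "schedule", "ls"],
                 ["rbd", "mirror", "snapshot", "schedule", "ls"]):
        out = subprocess.run(args, capture_output=True, text=True).stdout
        print(" ".join(args), "->", out.strip() or "(no schedules)")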
Nov 29 07:26:07 compute-0 sudo[164956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebgjfkevlyswacthyxclikdpnpkdblne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401166.540995-34-162513447709818/AnsiballZ_command.py'
Nov 29 07:26:07 compute-0 sudo[164956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:07 compute-0 sshd-session[164695]: Connection closed by authenticating user root 143.14.121.41 port 53972 [preauth]
Nov 29 07:26:07 compute-0 python3.9[164958]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:07 compute-0 sudo[164956]: pam_unix(sudo:session): session closed for user root
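[editor's note] The ansible task above runs `podman ps -a --filter name=^nova_virtlogd$ --format {{.Names}}`, i.e. an existence probe for one exactly-named container. The same probe stand-alone (arguments copied from the log):

    import subprocess

    out = subprocess.check_output(
        ["podman", "ps", "-a", "--filter", "name=^nova_virtlogd$",
         "--format", "{{.Names}}"], text=True).strip()
    print("nova_virtlogd present" if out else "nova_virtlogd absent")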
Nov 29 07:26:07 compute-0 ceph-mon[75050]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]: {
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "osd_id": 2,
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "type": "bluestore"
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:     },
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "osd_id": 1,
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "type": "bluestore"
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:     },
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "osd_id": 0,
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:         "type": "bluestore"
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]:     }
Nov 29 07:26:07 compute-0 optimistic_davinci[164878]: }
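[editor's note] This second JSON block, from optimistic_davinci, is `ceph-volume raw list`: the same three OSDs keyed by osd_uuid instead of OSD id, with the device-mapper path for each LV. A sketch that joins it against the earlier lvm listing as a consistency check (both file names are hypothetical captures):

    import json

    with open("raw_list.json") as f:
        raw = json.load(f)            # keyed by osd_uuid
    with open("lvm_list.json") as f:
        lvm = json.load(f)            # keyed by OSD id

    by_fsid = {e["tags"]["ceph.osd_fsid"]: osd_id
               for osd_id, entries in lvm.items() for e in entries}

    for osd_uuid, info in raw.items():
        # Both listings must agree on which OSD owns this uuid.
        assert by_fsid[osd_uuid] == str(info["osd_id"]), osd_uuid
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}")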
Nov 29 07:26:07 compute-0 systemd[1]: libpod-236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17.scope: Deactivated successfully.
Nov 29 07:26:07 compute-0 systemd[1]: libpod-236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17.scope: Consumed 1.115s CPU time.
Nov 29 07:26:07 compute-0 podman[164811]: 2025-11-29 07:26:07.795938575 +0000 UTC m=+1.281042641 container died 236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:26:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f8c4cdee66455d7c1b34d39ea5ee590343ddf6c5abac599a03a1a5c9bb936a5-merged.mount: Deactivated successfully.
Nov 29 07:26:07 compute-0 podman[164811]: 2025-11-29 07:26:07.855982009 +0000 UTC m=+1.341086075 container remove 236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:26:07 compute-0 systemd[1]: libpod-conmon-236cb411c354ba96dcbbb90cd7edf1d704a32e6f533a37a442e44c677db8da17.scope: Deactivated successfully.
Nov 29 07:26:07 compute-0 sudo[164523]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:26:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:26:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:26:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
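[editor's note] Those two mon_command calls are the cephadm mgr module persisting its refreshed host inventory in the cluster's config-key store. The cached blob can be read back with the stock CLI (key name copied from the log; the value is JSON produced by cephadm itself):

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    print(subprocess.check_output(
        ["ceph", "config-key", "get", key], text=True))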
Nov 29 07:26:07 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c23fe3da-61ca-4f15-b74d-8e137c99e25b does not exist
Nov 29 07:26:07 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 93e8db97-a6b0-49b3-be32-a631c9458251 does not exist
Nov 29 07:26:07 compute-0 sudo[165090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:26:07 compute-0 sudo[165090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:07 compute-0 sudo[165090]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:08 compute-0 sudo[165115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:26:08 compute-0 sudo[165115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:26:08 compute-0 sudo[165115]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:08 compute-0 sudo[165213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhlxechpqqgfjfngqdwwpfeeckdzmcbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401167.607384-45-231109678435151/AnsiballZ_systemd_service.py'
Nov 29 07:26:08 compute-0 sudo[165213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:08 compute-0 sshd-session[164986]: Invalid user postgres from 143.14.121.41 port 53982
Nov 29 07:26:08 compute-0 python3.9[165215]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:26:08 compute-0 systemd[1]: Reloading.
Nov 29 07:26:08 compute-0 systemd-rc-local-generator[165244]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:08 compute-0 systemd-sysv-generator[165248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:08 compute-0 sshd-session[164986]: Connection closed by invalid user postgres 143.14.121.41 port 53982 [preauth]
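[editor's note] Woven through this whole window, sshd-session is rejecting a password-guessing scan from 143.14.121.41 (invalid users admin and postgres so far, more below, plus repeated root attempts). A sketch that tallies such probes from an exported journal; the regex targets the exact "Invalid user" format above, and the input file name is hypothetical:

    import re
    from collections import Counter

    pat = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("journal.txt") as f:
        for line in f:
            m = pat.search(line)
            if m:
                hits[(m.group(2), m.group(1))] += 1   # (source IP, username)

    for (src, user), count in hits.most_common():
        print(f"{src} tried {user!r} x{count}")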
Nov 29 07:26:08 compute-0 sudo[165213]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:08 compute-0 ceph-mon[75050]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:26:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:26:09 compute-0 python3.9[165402]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:26:09 compute-0 network[165419]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:26:09 compute-0 network[165420]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:26:09 compute-0 network[165421]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:26:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:10 compute-0 sshd-session[165293]: Connection closed by authenticating user root 143.14.121.41 port 53988 [preauth]
Nov 29 07:26:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:11 compute-0 ceph-mon[75050]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:12 compute-0 sshd-session[165441]: Connection closed by authenticating user root 143.14.121.41 port 54002 [preauth]
Nov 29 07:26:12 compute-0 ceph-mon[75050]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:13 compute-0 sudo[165683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdxyggbspzntgfclrxhsbwfnclpdpxvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401173.0118387-64-160364682747754/AnsiballZ_systemd_service.py'
Nov 29 07:26:13 compute-0 sudo[165683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:13 compute-0 python3.9[165685]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:26:13 compute-0 sudo[165683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:14 compute-0 sudo[165838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmtxfjbvdvgbptoipcgcphxrzxtfvixb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401173.8380587-64-177078702544476/AnsiballZ_systemd_service.py'
Nov 29 07:26:14 compute-0 sudo[165838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:14 compute-0 python3.9[165840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:26:14 compute-0 sudo[165838]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:26:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
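[editor's note] Each pg_autoscaler pair above is one multiplication: pg target = usage ratio × bias × (target PGs per OSD × OSD count). With the default mon_target_pg_per_osd of 100 and this host's 3 OSDs the factor is 300, which reproduces the logged targets; and since no pool's target is far enough from its current pg_num to cross the autoscaler's change threshold, every pool is left as-is. A worked check (constant names are descriptive, not Ceph source identifiers):

    # Reproduce the pg_autoscaler arithmetic from the log lines above.
    TARGET_PG_PER_OSD = 100        # Ceph default mon_target_pg_per_osd
    NUM_OSDS = 3

    pools = {                      # name: (usage ratio, bias) from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (usage, bias) in pools.items():
        target = usage * bias * TARGET_PG_PER_OSD * NUM_OSDS
        print(f"{name}: pg target {target}")
    # .mgr -> 0.0021557249951162337, matching the logged target
    # (up to float printing).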
Nov 29 07:26:14 compute-0 sshd-session[165687]: Invalid user openhabian from 143.14.121.41 port 41434
Nov 29 07:26:15 compute-0 sudo[165991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgzlctkxebzulgskqcljxdsnvtdnxguf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401174.7529671-64-258217718833525/AnsiballZ_systemd_service.py'
Nov 29 07:26:15 compute-0 sudo[165991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:15 compute-0 ceph-mon[75050]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:15 compute-0 sshd-session[165687]: Connection closed by invalid user openhabian 143.14.121.41 port 41434 [preauth]
Nov 29 07:26:15 compute-0 python3.9[165993]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:26:15 compute-0 sudo[165991]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:15 compute-0 sudo[166145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tytysvbylmdqzdglircnrhltvfygwjmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401175.592313-64-267107846477329/AnsiballZ_systemd_service.py'
Nov 29 07:26:15 compute-0 sudo[166145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:16 compute-0 python3.9[166147]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:26:16 compute-0 sudo[166145]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:16 compute-0 sudo[166299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjsxeqcaqyktkqwobhlruoehskplxuwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401176.386303-64-63259235879503/AnsiballZ_systemd_service.py'
Nov 29 07:26:16 compute-0 sudo[166299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:17 compute-0 python3.9[166301]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:26:17 compute-0 sudo[166299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:17 compute-0 ceph-mon[75050]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:17 compute-0 sudo[166452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfchhrnvoardvjncuqoqstpqnmhsaijv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401177.2021353-64-7908568905177/AnsiballZ_systemd_service.py'
Nov 29 07:26:17 compute-0 sudo[166452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:17 compute-0 python3.9[166454]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:26:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:17 compute-0 sudo[166452]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:17 compute-0 sshd-session[166018]: Connection closed by authenticating user root 143.14.121.41 port 41446 [preauth]
Nov 29 07:26:18 compute-0 sudo[166605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkfwmcdytvncacktwiwarujpaaxlsde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401177.9810338-64-247358615531694/AnsiballZ_systemd_service.py'
Nov 29 07:26:18 compute-0 sudo[166605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:18 compute-0 python3.9[166607]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:26:18 compute-0 sudo[166605]: pam_unix(sudo:session): session closed for user root
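[editor's note] The run of systemd_service tasks from 07:26:13 onward stops and disables the whole tripleo_nova_virt* family, one unit per task. The same sweep outside ansible, as a sketch (unit names copied from the log; run as root):

    import subprocess

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in UNITS:
        # disable --now = the task's enabled=False plus state=stopped.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)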
Nov 29 07:26:19 compute-0 ceph-mon[75050]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:19 compute-0 sudo[166758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnguvhvsviklxypzghidzzxmzmlaolsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401178.8947206-116-174175317924269/AnsiballZ_file.py'
Nov 29 07:26:19 compute-0 sudo[166758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:19 compute-0 python3.9[166760]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:19 compute-0 sudo[166758]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:20 compute-0 sudo[166910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtkidtczgvgmvuawevqdcfeipfshxnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401179.7549284-116-255556192837541/AnsiballZ_file.py'
Nov 29 07:26:20 compute-0 sudo[166910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:20 compute-0 python3.9[166912]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:20 compute-0 sudo[166910]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:20 compute-0 ceph-mon[75050]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:20 compute-0 sudo[167064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxztmnpicmumnqmzfkrdkygmczxmchpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401180.4542978-116-142425601791837/AnsiballZ_file.py'
Nov 29 07:26:20 compute-0 sudo[167064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:21 compute-0 python3.9[167066]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:21 compute-0 sudo[167064]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:21 compute-0 sshd-session[166913]: Invalid user sshadmin from 143.14.121.41 port 41448
Nov 29 07:26:21 compute-0 sudo[167216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjslehnzmgmdsihizahmlhxczqzwgblt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401181.1751926-116-153542264196556/AnsiballZ_file.py'
Nov 29 07:26:21 compute-0 sudo[167216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:21 compute-0 sshd-session[166913]: Connection closed by invalid user sshadmin 143.14.121.41 port 41448 [preauth]
Nov 29 07:26:21 compute-0 python3.9[167218]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:21 compute-0 sudo[167216]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:22 compute-0 sudo[167370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujunlcpxrzuirbojwvgppazjjmpjnxkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401181.835156-116-144758086118032/AnsiballZ_file.py'
Nov 29 07:26:22 compute-0 sudo[167370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:22 compute-0 python3.9[167372]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:22 compute-0 sudo[167370]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:22 compute-0 sudo[167522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tznzttilsagqgtanljuhznlayeeokzro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401182.4722133-116-169667564357415/AnsiballZ_file.py'
Nov 29 07:26:22 compute-0 sudo[167522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:22 compute-0 python3.9[167524]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:22 compute-0 sudo[167522]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:23 compute-0 sudo[167674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnctwuoxuytjiprzehougnazmkabordl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401183.0678642-116-217734862697098/AnsiballZ_file.py'
Nov 29 07:26:23 compute-0 sudo[167674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:23 compute-0 python3.9[167676]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:23 compute-0 sudo[167674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:23 compute-0 ceph-mon[75050]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:23 compute-0 sudo[167826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjeupytbaztahmrnabkzdtbjatudqdgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401183.727141-166-87217790940942/AnsiballZ_file.py'
Nov 29 07:26:24 compute-0 sudo[167826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:24 compute-0 sshd-session[167273]: Connection closed by authenticating user root 143.14.121.41 port 46470 [preauth]
Nov 29 07:26:24 compute-0 python3.9[167828]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:24 compute-0 sudo[167826]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:24 compute-0 sudo[167980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htcrhdisuwghttsqiotlxojrpirsppnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401184.3620317-166-211117055437968/AnsiballZ_file.py'
Nov 29 07:26:24 compute-0 sudo[167980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:24 compute-0 ceph-mon[75050]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:24 compute-0 python3.9[167982]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:24 compute-0 sudo[167980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:25 compute-0 sudo[168132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwnnhjwhpwopkyqyepksurdtachhgdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401184.9427292-166-173818108777808/AnsiballZ_file.py'
Nov 29 07:26:25 compute-0 sudo[168132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:25 compute-0 python3.9[168134]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:25 compute-0 sudo[168132]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:25 compute-0 sudo[168284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glvqifbeurhtbpisbzewuckezlfsdgwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401185.5278907-166-145453990298778/AnsiballZ_file.py'
Nov 29 07:26:25 compute-0 sudo[168284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:26 compute-0 python3.9[168286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:26 compute-0 sudo[168284]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:26 compute-0 sudo[168436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbutonvtdfnaccesronubxtiaywryjvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401186.1880054-166-199192055828301/AnsiballZ_file.py'
Nov 29 07:26:26 compute-0 sudo[168436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:26 compute-0 sshd-session[167829]: Invalid user frappe from 143.14.121.41 port 46482
Nov 29 07:26:26 compute-0 python3.9[168438]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:26 compute-0 sudo[168436]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:26 compute-0 sshd-session[167829]: Connection closed by invalid user frappe 143.14.121.41 port 46482 [preauth]
Nov 29 07:26:27 compute-0 sudo[168588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcxixiwarecyydqqvdmoxkqfurrbzxfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401186.76406-166-115577252555922/AnsiballZ_file.py'
Nov 29 07:26:27 compute-0 sudo[168588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:27 compute-0 ceph-mon[75050]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:27 compute-0 python3.9[168590]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:27 compute-0 sudo[168588]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:27 compute-0 sudo[168742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpojpyebipssodbmwxfdlqahaajxgpcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401187.3409257-166-24326484796506/AnsiballZ_file.py'
Nov 29 07:26:27 compute-0 sudo[168742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:27 compute-0 python3.9[168744]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:26:27 compute-0 sudo[168742]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:28 compute-0 sudo[168913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeeknyncpuxvhrkvijcvzhfgjfezsgck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401188.0481818-217-47619537866332/AnsiballZ_command.py'
Nov 29 07:26:28 compute-0 sudo[168913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:28 compute-0 podman[168869]: 2025-11-29 07:26:28.352078786 +0000 UTC m=+0.060885040 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:26:28 compute-0 podman[168868]: 2025-11-29 07:26:28.378785268 +0000 UTC m=+0.088113946 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
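The podman health_status events for ovn_metadata_agent and ovn_controller embed the full config_data each container was created from. Restated as trimmed YAML for readability (field names and values follow the logged dict; the YAML shape and the omission of the volumes list are editorial):

    ovn_metadata_agent:            # from the logged config_data
      image: quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
      net: host
      pid: host
      cgroupns: host
      privileged: true
      restart: always
      user: root
      depends_on: [openvswitch.service]
      environment:
        KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
        EDPM_CONFIG_HASH: 0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d
      healthcheck:
        mount: /var/lib/openstack/healthchecks/ovn_metadata_agent
        test: /openstack/healthcheck

health_status=healthy with health_failing_streak=0 indicates the periodic /openstack/healthcheck run passed for both containers.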
Nov 29 07:26:28 compute-0 sshd-session[168591]: Invalid user huawei from 143.14.121.41 port 46490
Nov 29 07:26:28 compute-0 python3.9[168929]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
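The certmonger teardown above is a guarded shell one-liner: disable the service only if it is currently active, and mask it only when no unit file exists under /etc/systemd/system (masking creates a symlink there, so the test avoids clobbering a real unit file). The logged ansible.legacy.command with _uses_shell=True corresponds to a shell task along these lines (task name illustrative; script body as logged):

    - name: Disable and mask certmonger when active   # illustrative name
      ansible.builtin.shell: |
        if systemctl is-active certmonger.service; then
          systemctl disable --now certmonger.service
          test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
        fi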
Nov 29 07:26:28 compute-0 sudo[168913]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:28 compute-0 sshd-session[168591]: Connection closed by invalid user huawei 143.14.121.41 port 46490 [preauth]
Nov 29 07:26:29 compute-0 ceph-mon[75050]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:29 compute-0 python3.9[169093]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
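The follow-up find call inventories anything left in certmonger's tracking directory, including hidden entries, without recursing. As a task sketch mirroring the logged parameters (the register name is illustrative):

    - name: List leftover certmonger tracking requests   # illustrative name
      ansible.builtin.find:
        paths: /var/lib/certmonger/requests
        file_type: any
        hidden: true
        recurse: false
      register: certmonger_requests   # illustrative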
Nov 29 07:26:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:30 compute-0 sudo[169244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niizvcrgwuchdrktwcyyvxtuhzlyjfpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401189.730634-235-198575456544258/AnsiballZ_systemd_service.py'
Nov 29 07:26:30 compute-0 sudo[169244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:30 compute-0 python3.9[169246]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:26:30 compute-0 systemd[1]: Reloading.
Nov 29 07:26:30 compute-0 systemd-rc-local-generator[169275]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:26:30 compute-0 systemd-sysv-generator[169280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:26:30 compute-0 sudo[169244]: pam_unix(sudo:session): session closed for user root
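With the unit files gone, the play issues a plain daemon-reload via ansible.builtin.systemd_service (name, state, and enabled are all None in the logged invocation, so nothing beyond the reload is requested). The systemd "Reloading." line and the rc.local/SysV generator notices that follow are ordinary byproducts of any reload on this host, not errors introduced by the change. Sketch:

    - name: Reload systemd after removing tripleo unit files   # illustrative name
      ansible.builtin.systemd_service:
        daemon_reload: true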
Nov 29 07:26:30 compute-0 sshd-session[169019]: Invalid user ubuntu from 143.14.121.41 port 46494
Nov 29 07:26:31 compute-0 sshd-session[169019]: Connection closed by invalid user ubuntu 143.14.121.41 port 46494 [preauth]
Nov 29 07:26:31 compute-0 sudo[169431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctnkgcnwvheumdkrvclfyjjkvdecdnty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401190.9047215-243-163482809946458/AnsiballZ_command.py'
Nov 29 07:26:31 compute-0 sudo[169431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:31 compute-0 python3.9[169433]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:31 compute-0 sudo[169431]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:32 compute-0 sudo[169586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrndsynhzzgcofpyejkvhzmffbkmpiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401191.6891289-243-4385741734182/AnsiballZ_command.py'
Nov 29 07:26:32 compute-0 sudo[169586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:32 compute-0 sshd-session[169434]: Invalid user vyos from 143.14.121.41 port 46504
Nov 29 07:26:32 compute-0 sshd-session[169434]: Connection closed by invalid user vyos 143.14.121.41 port 46504 [preauth]
Nov 29 07:26:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:33 compute-0 python3.9[169588]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:33 compute-0 sudo[169586]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:33 compute-0 sudo[169741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hksmimfdcujugpgezgxqfdzogarsihjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401193.2921154-243-84522833435338/AnsiballZ_command.py'
Nov 29 07:26:33 compute-0 sudo[169741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:33 compute-0 python3.9[169743]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:33 compute-0 sudo[169741]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:34 compute-0 sudo[169894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gspcyoofacgimblctgdsqsxckxcimckw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401194.043611-243-71990552692897/AnsiballZ_command.py'
Nov 29 07:26:34 compute-0 sudo[169894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:34 compute-0 python3.9[169896]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:34 compute-0 sudo[169894]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:34 compute-0 sshd-session[169615]: Connection closed by authenticating user root 143.14.121.41 port 33118 [preauth]
Nov 29 07:26:34 compute-0 ceph-mon[75050]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:35 compute-0 sudo[170047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhuqayozkanfpxomwalhlxppzvfzhmxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401194.7872005-243-25387472148757/AnsiballZ_command.py'
Nov 29 07:26:35 compute-0 sudo[170047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:35 compute-0 python3.9[170049]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:35 compute-0 sudo[170047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:26:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:26:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:26:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:26:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:26:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:26:35 compute-0 sudo[170202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxpungagsofsjngjuhjqtdmenjplthg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401195.4363308-243-276321336597919/AnsiballZ_command.py'
Nov 29 07:26:35 compute-0 sudo[170202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:35 compute-0 ceph-mon[75050]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:35 compute-0 ceph-mon[75050]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:35 compute-0 python3.9[170204]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:35 compute-0 sudo[170202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:36 compute-0 sudo[170355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifqtrsrnnpqnfaculzbibtgunxulebqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401196.1171215-243-27776848924370/AnsiballZ_command.py'
Nov 29 07:26:36 compute-0 sudo[170355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:36 compute-0 python3.9[170357]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:26:36 compute-0 sudo[170355]: pam_unix(sudo:session): session closed for user root
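The seven systemctl reset-failed commands from 07:26:31 to 07:26:36 clear any lingering failed state for the deleted units so they drop out of systemctl --failed. The log shows one command task per unit; a loop would produce the same series (task name illustrative):

    - name: Clear failed state for removed tripleo units   # illustrative name
      ansible.builtin.command: /usr/bin/systemctl reset-failed {{ item }}
      loop:
        - tripleo_nova_libvirt.target
        - tripleo_nova_virtlogd_wrapper.service
        - tripleo_nova_virtnodedevd.service
        - tripleo_nova_virtproxyd.service
        - tripleo_nova_virtqemud.service
        - tripleo_nova_virtsecretd.service
        - tripleo_nova_virtstoraged.service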
Nov 29 07:26:37 compute-0 ceph-mon[75050]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:37 compute-0 sshd-session[170072]: Connection closed by authenticating user root 143.14.121.41 port 33134 [preauth]
Nov 29 07:26:37 compute-0 sudo[170509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnecldvadaezhxeluvxnfkedtqjxtchr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401197.097266-297-79770633121610/AnsiballZ_getent.py'
Nov 29 07:26:37 compute-0 sudo[170509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:37 compute-0 python3.9[170511]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 07:26:37 compute-0 sudo[170509]: pam_unix(sudo:session): session closed for user root
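Before creating the libvirt account, the play probes for it with getent against the passwd database. fail_key=True means the task fails when the key is absent, so the real play presumably pairs this with ignore_errors or a conditional; the sketch below mirrors only the logged parameters:

    - name: Check for an existing libvirt user   # illustrative name
      ansible.builtin.getent:
        database: passwd
        key: libvirt
        fail_key: true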
Nov 29 07:26:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:38 compute-0 sudo[170663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bheygrpoxxvmaaovcyytkjbdspdiszrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401198.034964-305-194837574081525/AnsiballZ_group.py'
Nov 29 07:26:38 compute-0 sudo[170663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:38 compute-0 python3.9[170665]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:26:38 compute-0 sshd-session[170435]: Invalid user admin from 143.14.121.41 port 33138
Nov 29 07:26:39 compute-0 sshd-session[170435]: Connection closed by invalid user admin 143.14.121.41 port 33138 [preauth]
Nov 29 07:26:39 compute-0 ceph-mon[75050]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:39 compute-0 groupadd[170666]: group added to /etc/group: name=libvirt, GID=42473
Nov 29 07:26:39 compute-0 groupadd[170666]: group added to /etc/gshadow: name=libvirt
Nov 29 07:26:39 compute-0 groupadd[170666]: new group: name=libvirt, GID=42473
Nov 29 07:26:39 compute-0 sudo[170663]: pam_unix(sudo:session): session closed for user root
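The group module call pins the libvirt group to GID 42473; the three groupadd audit lines above (entries written to /etc/group and /etc/gshadow, then the "new group" summary) confirm it was created rather than found. Sketch of the logged parameters:

    - name: Ensure the libvirt group exists with a fixed GID   # illustrative name
      ansible.builtin.group:
        name: libvirt
        gid: 42473
        state: present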
Nov 29 07:26:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:40 compute-0 sudo[170823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmgrxbkfubaovguntkfpxzxdusyrfdhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401200.0344872-313-247797251599519/AnsiballZ_user.py'
Nov 29 07:26:40 compute-0 sudo[170823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:40 compute-0 ceph-mon[75050]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:40 compute-0 python3.9[170825]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:26:40 compute-0 useradd[170827]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 07:26:41 compute-0 sudo[170823]: pam_unix(sudo:session): session closed for user root
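The matching user module call creates the libvirt service account with the same fixed UID, no login shell, and libvirt as its primary group (the logged groups=[''] amounts to an empty supplementary-group list); the useradd audit line confirms the creation. Sketch of the logged parameters:

    - name: Ensure the libvirt service account exists   # illustrative name
      ansible.builtin.user:
        name: libvirt
        uid: 42473
        group: libvirt
        comment: libvirt user
        shell: /sbin/nologin
        state: present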
Nov 29 07:26:41 compute-0 sshd-session[170667]: Invalid user p from 143.14.121.41 port 33148
Nov 29 07:26:41 compute-0 sudo[170983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txzywoojovoojlmmhtjiknphpomxpksp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401201.3598511-324-51635369999062/AnsiballZ_setup.py'
Nov 29 07:26:41 compute-0 sudo[170983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:41 compute-0 sshd-session[170667]: Connection closed by invalid user p 143.14.121.41 port 33148 [preauth]
Nov 29 07:26:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:41 compute-0 python3.9[170985]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:26:42 compute-0 sudo[170983]: pam_unix(sudo:session): session closed for user root
Nov 29 07:26:42 compute-0 sudo[171069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihfcqqehhlibxjwuqheuyqxmdkwozmcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401201.3598511-324-51635369999062/AnsiballZ_dnf.py'
Nov 29 07:26:42 compute-0 sudo[171069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:26:42 compute-0 python3.9[171071]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
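Package installation is delegated to the dnf module after the 07:26:41 setup run gathered only ansible_pkg_mgr to select the backend. The logged name list carries stray trailing spaces in its first four entries ('libvirt ', 'libvirt-admin ', ...), presumably a playbook typo; they are dropped in this sketch (task name illustrative):

    - name: Install the libvirt/QEMU/Ceph client stack   # illustrative name
      ansible.builtin.dnf:
        name:
          - libvirt
          - libvirt-admin
          - libvirt-client
          - libvirt-daemon
          - qemu-kvm
          - qemu-img
          - libguestfs
          - libseccomp
          - swtpm
          - swtpm-tools
          - edk2-ovmf
          - ceph-common
          - cyrus-sasl-scram
        state: present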
Nov 29 07:26:43 compute-0 ceph-mon[75050]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:44 compute-0 sshd-session[170991]: Connection closed by authenticating user root 143.14.121.41 port 45176 [preauth]
Nov 29 07:26:45 compute-0 ceph-mon[75050]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:46 compute-0 sshd-session[171076]: Invalid user public from 143.14.121.41 port 45178
Nov 29 07:26:46 compute-0 sshd-session[171076]: Connection closed by invalid user public 143.14.121.41 port 45178 [preauth]
Nov 29 07:26:47 compute-0 ceph-mon[75050]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:49 compute-0 ceph-mon[75050]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:49 compute-0 sshd-session[171084]: Invalid user debian from 143.14.121.41 port 45180
Nov 29 07:26:49 compute-0 sshd-session[171084]: Connection closed by invalid user debian 143.14.121.41 port 45180 [preauth]
Nov 29 07:26:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:51 compute-0 ceph-mon[75050]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:53 compute-0 ceph-mon[75050]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:53 compute-0 sshd-session[171086]: Connection closed by authenticating user root 143.14.121.41 port 45194 [preauth]
Nov 29 07:26:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:55 compute-0 ceph-mon[75050]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:55 compute-0 sshd-session[171088]: Invalid user steam from 143.14.121.41 port 45740
Nov 29 07:26:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:56 compute-0 sshd-session[171088]: Connection closed by invalid user steam 143.14.121.41 port 45740 [preauth]
Nov 29 07:26:57 compute-0 ceph-mon[75050]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:58 compute-0 sshd-session[171090]: Connection closed by authenticating user root 143.14.121.41 port 45742 [preauth]
Nov 29 07:26:58 compute-0 podman[171093]: 2025-11-29 07:26:58.740292183 +0000 UTC m=+0.096817232 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:26:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:26:58 compute-0 podman[171092]: 2025-11-29 07:26:58.853438578 +0000 UTC m=+0.218346544 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 07:26:59 compute-0 ceph-mon[75050]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:26:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:59.738 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:26:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:59.739 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:26:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:26:59.740 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:26:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:00 compute-0 sshd-session[171130]: Connection closed by authenticating user root 143.14.121.41 port 45752 [preauth]
Nov 29 07:27:01 compute-0 ceph-mon[75050]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:02 compute-0 sshd-session[171139]: Connection closed by authenticating user root 143.14.121.41 port 45754 [preauth]
Nov 29 07:27:03 compute-0 ceph-mon[75050]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:04 compute-0 sshd-session[171141]: Connection closed by authenticating user root 143.14.121.41 port 58870 [preauth]
Nov 29 07:27:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:27:05
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.meta']
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:27:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: client.0 ms_handle_reset on v2:192.168.122.100:6800/878361048
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:27:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:27:07 compute-0 ceph-mon[75050]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:07 compute-0 sshd-session[171143]: Connection closed by authenticating user root 143.14.121.41 port 58874 [preauth]
Nov 29 07:27:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:08 compute-0 sudo[171147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:08 compute-0 sudo[171147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:08 compute-0 sudo[171147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:08 compute-0 sudo[171172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:08 compute-0 sudo[171172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:08 compute-0 sudo[171172]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:08 compute-0 sudo[171197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:08 compute-0 sudo[171197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:08 compute-0 sudo[171197]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:08 compute-0 sudo[171222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:27:08 compute-0 sudo[171222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:08 compute-0 ceph-mon[75050]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:08 compute-0 sudo[171222]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:09 compute-0 sshd-session[171145]: Connection closed by authenticating user root 143.14.121.41 port 58886 [preauth]
Nov 29 07:27:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:27:09 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 84883ec5-f2d0-465a-8ba6-9b0d6266b250 does not exist
Nov 29 07:27:09 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4f1701a0-47fc-44ba-a147-6b6459c0d1c0 does not exist
Nov 29 07:27:09 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev d82f2061-01e3-461f-8ade-4d90f1a6ac52 does not exist
Nov 29 07:27:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:27:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:27:09 compute-0 sudo[171278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:09 compute-0 sudo[171278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:09 compute-0 sudo[171278]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:09 compute-0 sudo[171303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:09 compute-0 sudo[171303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:09 compute-0 sudo[171303]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:09 compute-0 sudo[171328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:09 compute-0 sudo[171328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:09 compute-0 sudo[171328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:09 compute-0 ceph-mon[75050]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:27:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:27:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:27:09 compute-0 sudo[171353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:27:09 compute-0 sudo[171353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
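The cephadm call at 07:27:09 runs ceph-volume lvm batch over three pre-created logical volumes, with CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group tying the resulting OSDs back to an orchestrator OSD spec of that name. A spec consistent with this invocation would look roughly like the following (the spec itself is not shown in the log; the placement is inferred from the host running the command):

    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0              # inferred, not from the log
    data_devices:
      paths:
        - /dev/ceph_vg0/ceph_lv0
        - /dev/ceph_vg1/ceph_lv1
        - /dev/ceph_vg2/ceph_lv2

cephadm carries out such work inside short-lived containers pulled from the pinned quay.io/ceph/ceph image, which is what the silly_pike create/init/start/died/remove lifecycle podman logs just below appears to be.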
Nov 29 07:27:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:10 compute-0 podman[171418]: 2025-11-29 07:27:10.278049209 +0000 UTC m=+0.077106029 container create 35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pike, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:27:10 compute-0 podman[171418]: 2025-11-29 07:27:10.22671614 +0000 UTC m=+0.025773010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:27:10 compute-0 systemd[1]: Started libpod-conmon-35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df.scope.
Nov 29 07:27:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:10 compute-0 podman[171418]: 2025-11-29 07:27:10.982330861 +0000 UTC m=+0.781387721 container init 35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:10 compute-0 podman[171418]: 2025-11-29 07:27:10.99779325 +0000 UTC m=+0.796850100 container start 35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:11 compute-0 silly_pike[171434]: 167 167
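
The bare "167 167" emitted by this short-lived container is cephadm probing the image for the uid/gid of the ceph account (roughly a stat of /var/lib/ceph run inside the image); 167 is the fixed id that Red Hat-family ceph packages reserve for that user. A minimal check, assuming a host with the ceph packages installed (not shown in this log):

    # Verify the uid/gid probed above; assumes ceph packages on the host.
    import grp
    import pwd

    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)  # expected: 167 167
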
Nov 29 07:27:11 compute-0 systemd[1]: libpod-35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df.scope: Deactivated successfully.
Nov 29 07:27:11 compute-0 podman[171418]: 2025-11-29 07:27:11.415422088 +0000 UTC m=+1.214478938 container attach 35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pike, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:27:11 compute-0 podman[171418]: 2025-11-29 07:27:11.416090766 +0000 UTC m=+1.215147616 container died 35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:27:11 compute-0 ceph-mon[75050]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-de3249984cfd025b2d41bcd2e00461e989b859b9f8940d283e3d836015d26d84-merged.mount: Deactivated successfully.
Nov 29 07:27:12 compute-0 sshd-session[171437]: Connection closed by authenticating user root 143.14.121.41 port 58888 [preauth]
Nov 29 07:27:12 compute-0 podman[171418]: 2025-11-29 07:27:12.724841664 +0000 UTC m=+2.523898484 container remove 35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pike, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:27:12 compute-0 ceph-mon[75050]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:12 compute-0 systemd[1]: libpod-conmon-35a1a0fd61af387e0f21b5c5957a0003c155d11145ce21c9c0742c3cd5c306df.scope: Deactivated successfully.
Nov 29 07:27:12 compute-0 podman[171461]: 2025-11-29 07:27:12.936576058 +0000 UTC m=+0.068797514 container create 92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:27:12 compute-0 systemd[1]: Started libpod-conmon-92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066.scope.
Nov 29 07:27:12 compute-0 podman[171461]: 2025-11-29 07:27:12.900285135 +0000 UTC m=+0.032506641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:27:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb78dd1ebc2d59525b7dade64509a4a3afe1b8abdd869b955bc7a026d82d15f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb78dd1ebc2d59525b7dade64509a4a3afe1b8abdd869b955bc7a026d82d15f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb78dd1ebc2d59525b7dade64509a4a3afe1b8abdd869b955bc7a026d82d15f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb78dd1ebc2d59525b7dade64509a4a3afe1b8abdd869b955bc7a026d82d15f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb78dd1ebc2d59525b7dade64509a4a3afe1b8abdd869b955bc7a026d82d15f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
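
The kernel notices above are informational: the XFS filesystems backing these overlay bind mounts were created without the bigtime feature, so their inode timestamps are 32-bit and roll over at 0x7fffffff seconds past the epoch. Converting that constant confirms the 2038 date in the message:

    # Convert the 0x7fffffff limit from the kernel messages to a calendar date.
    from datetime import datetime, timezone

    limit = 0x7fffffff  # largest signed 32-bit Unix timestamp, as logged above
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the classic "year 2038" boundary
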
Nov 29 07:27:13 compute-0 podman[171461]: 2025-11-29 07:27:13.10246949 +0000 UTC m=+0.234690966 container init 92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:27:13 compute-0 podman[171461]: 2025-11-29 07:27:13.115536544 +0000 UTC m=+0.247758000 container start 92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:27:13 compute-0 podman[171461]: 2025-11-29 07:27:13.123324996 +0000 UTC m=+0.255546472 container attach 92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:27:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:14 compute-0 festive_kilby[171478]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:27:14 compute-0 festive_kilby[171478]: --> relative data size: 1.0
Nov 29 07:27:14 compute-0 festive_kilby[171478]: --> All data devices are unavailable
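
The three festive_kilby lines are a ceph-volume batch report for the default_drive_group spec: it finds 0 physical and 3 LVM candidate data devices and declares all of them unavailable, consistent with the inventory further down, where each LV already carries ceph.* ownership tags from an existing OSD, leaving nothing new to deploy. A sketch of the availability test this implies (hypothetical helper, not ceph-volume's actual code):

    # Hypothetical sketch of the test implied by the report above: an LV that
    # already carries ceph ownership tags backs an existing OSD and is not
    # available for a new one.
    def lv_available(lv_tags: dict) -> bool:
        return "ceph.osd_id" not in lv_tags

    print(lv_available({"ceph.osd_id": "0", "ceph.type": "block"}))  # False
    print(lv_available({}))                                          # True
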
Nov 29 07:27:14 compute-0 systemd[1]: libpod-92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066.scope: Deactivated successfully.
Nov 29 07:27:14 compute-0 podman[171461]: 2025-11-29 07:27:14.296224385 +0000 UTC m=+1.428445841 container died 92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:27:14 compute-0 systemd[1]: libpod-92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066.scope: Consumed 1.095s CPU time.
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:27:14 compute-0 sshd-session[171452]: Connection closed by authenticating user root 143.14.121.41 port 48376 [preauth]
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:27:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
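
The autoscaler lines carry enough data to reproduce their own arithmetic: each pool's raw pg target is its space ratio times its bias times a per-root PG budget, and the budget consistent with every line above is 300, which fits 3 OSDs at the default mon_target_pg_per_osd of 100 (the 300 is inferred from the numbers, not stated in the log). The raw target is then quantized to a power of two, with pools left at their current pg_num when the target sits far below it:

    # Reproduce the pg_autoscaler targets logged above. The PG budget of 300 is
    # an inference (3 OSDs x default mon_target_pg_per_osd=100), not in the log.
    PG_BUDGET = 300

    samples = [  # (pool, space ratio, bias, pg target as logged)
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    ]
    for pool, ratio, bias, logged in samples:
        target = ratio * bias * PG_BUDGET
        assert abs(target - logged) / logged < 1e-9, pool
        print(f"{pool}: computed pg target {target:.16g} matches the log")
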
Nov 29 07:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb78dd1ebc2d59525b7dade64509a4a3afe1b8abdd869b955bc7a026d82d15f-merged.mount: Deactivated successfully.
Nov 29 07:27:14 compute-0 podman[171461]: 2025-11-29 07:27:14.770892889 +0000 UTC m=+1.903114345 container remove 92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:27:14 compute-0 systemd[1]: libpod-conmon-92130a42ad02e0a48ea13ba640c2027b69a3d2c675f97cbbf30988a972628066.scope: Deactivated successfully.
Nov 29 07:27:14 compute-0 sudo[171353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:14 compute-0 sudo[171558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:14 compute-0 sudo[171558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:14 compute-0 sudo[171558]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:14 compute-0 sudo[171586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:14 compute-0 sudo[171586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:14 compute-0 sudo[171586]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:14 compute-0 sudo[171613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:14 compute-0 ceph-mon[75050]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:14 compute-0 sudo[171613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:14 compute-0 sudo[171613]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:15 compute-0 sudo[171643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:27:15 compute-0 sudo[171643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
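
The sudo trail above is cephadm's per-host pattern: probe the connection with /bin/true, locate python3, then run the copied cephadm binary against the pinned image digest so ceph-volume executes inside a throwaway container (the recursing_pascal create/start/attach/died/remove cycle that follows). The same inventory query can be reproduced by hand; a sketch assuming the packaged cephadm entry point behaves like the copied script invoked here, and that it prints only the JSON document, as it does below:

    # Re-run the same inventory query by hand (sketch; values copied from the log).
    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"

    out = subprocess.run(
        ["sudo", "cephadm", "--image", IMAGE,
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(sorted(json.loads(out)))  # expect the OSD ids: ['0', '1', '2']
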
Nov 29 07:27:15 compute-0 podman[171720]: 2025-11-29 07:27:15.376704053 +0000 UTC m=+0.055797482 container create 68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:27:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:15 compute-0 systemd[1]: Started libpod-conmon-68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992.scope.
Nov 29 07:27:15 compute-0 podman[171720]: 2025-11-29 07:27:15.340951965 +0000 UTC m=+0.020045414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:27:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:15 compute-0 podman[171720]: 2025-11-29 07:27:15.497656528 +0000 UTC m=+0.176750047 container init 68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:15 compute-0 podman[171720]: 2025-11-29 07:27:15.508137062 +0000 UTC m=+0.187230491 container start 68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:27:15 compute-0 podman[171720]: 2025-11-29 07:27:15.511352909 +0000 UTC m=+0.190446388 container attach 68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:27:15 compute-0 focused_roentgen[171742]: 167 167
Nov 29 07:27:15 compute-0 systemd[1]: libpod-68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992.scope: Deactivated successfully.
Nov 29 07:27:15 compute-0 podman[171720]: 2025-11-29 07:27:15.513656472 +0000 UTC m=+0.192749901 container died 68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-76196309f0604e4841b7f7209c5565dd0b64a18fdb3fcbb58875878c135432b4-merged.mount: Deactivated successfully.
Nov 29 07:27:15 compute-0 podman[171720]: 2025-11-29 07:27:15.641927526 +0000 UTC m=+0.321020955 container remove 68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:27:15 compute-0 systemd[1]: libpod-conmon-68cd6ddda8d896561cd817b1ea13e9caf70ad18441a13a5acb72543d1cbf2992.scope: Deactivated successfully.
Nov 29 07:27:15 compute-0 podman[171778]: 2025-11-29 07:27:15.834315894 +0000 UTC m=+0.066396428 container create 2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:27:15 compute-0 podman[171778]: 2025-11-29 07:27:15.793916751 +0000 UTC m=+0.025997325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:27:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:15 compute-0 systemd[1]: Started libpod-conmon-2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298.scope.
Nov 29 07:27:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ea3f529251e1bf48a9e3495cd12d438052cf0ee628de0f8fa48be9ae79284/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ea3f529251e1bf48a9e3495cd12d438052cf0ee628de0f8fa48be9ae79284/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ea3f529251e1bf48a9e3495cd12d438052cf0ee628de0f8fa48be9ae79284/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ea3f529251e1bf48a9e3495cd12d438052cf0ee628de0f8fa48be9ae79284/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:15 compute-0 podman[171778]: 2025-11-29 07:27:15.972185819 +0000 UTC m=+0.204266363 container init 2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pascal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:15 compute-0 podman[171778]: 2025-11-29 07:27:15.978051337 +0000 UTC m=+0.210131861 container start 2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 07:27:15 compute-0 podman[171778]: 2025-11-29 07:27:15.991149651 +0000 UTC m=+0.223230175 container attach 2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:16 compute-0 sshd-session[171617]: Invalid user nvidia from 143.14.121.41 port 48378
Nov 29 07:27:16 compute-0 recursing_pascal[171801]: {
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:     "0": [
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:         {
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "devices": [
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "/dev/loop3"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             ],
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_name": "ceph_lv0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_size": "21470642176",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "name": "ceph_lv0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "tags": {
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cluster_name": "ceph",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.crush_device_class": "",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.encrypted": "0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osd_id": "0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.type": "block",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.vdo": "0"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             },
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "type": "block",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "vg_name": "ceph_vg0"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:         }
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:     ],
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:     "1": [
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:         {
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "devices": [
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "/dev/loop4"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             ],
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_name": "ceph_lv1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_size": "21470642176",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "name": "ceph_lv1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "tags": {
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cluster_name": "ceph",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.crush_device_class": "",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.encrypted": "0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osd_id": "1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.type": "block",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.vdo": "0"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             },
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "type": "block",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "vg_name": "ceph_vg1"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:         }
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:     ],
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:     "2": [
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:         {
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "devices": [
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "/dev/loop5"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             ],
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_name": "ceph_lv2",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_size": "21470642176",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "name": "ceph_lv2",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "tags": {
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.cluster_name": "ceph",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.crush_device_class": "",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.encrypted": "0",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osd_id": "2",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.type": "block",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:                 "ceph.vdo": "0"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             },
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "type": "block",
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:             "vg_name": "ceph_vg2"
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:         }
Nov 29 07:27:16 compute-0 recursing_pascal[171801]:     ]
Nov 29 07:27:16 compute-0 recursing_pascal[171801]: }
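
That JSON document is the complete OSD-to-LV map for this host: three BlueStore block LVs, one per OSD, each on its own VG over a loop device. A minimal sketch that folds it into an osd_id -> device table, assuming the document above was saved to the hypothetical file lvm_list.json:

    # Fold the ceph-volume lvm list JSON above into an osd_id -> device table.
    import json

    with open("lvm_list.json") as fh:  # hypothetical capture of the JSON above
        doc = json.load(fh)

    layout = {
        int(osd_id): {
            "lv_path": lv["lv_path"],
            "backing": lv["devices"],                # e.g. ["/dev/loop3"]
            "osd_fsid": lv["tags"]["ceph.osd_fsid"],
        }
        for osd_id, lvs in doc.items()
        for lv in lvs
        if lv["type"] == "block"
    }
    assert sorted(layout) == [0, 1, 2]
    print(layout[0]["lv_path"])  # /dev/ceph_vg0/ceph_lv0
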
Nov 29 07:27:16 compute-0 systemd[1]: libpod-2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298.scope: Deactivated successfully.
Nov 29 07:27:16 compute-0 podman[171778]: 2025-11-29 07:27:16.787496726 +0000 UTC m=+1.019577260 container died 2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:27:16 compute-0 sshd-session[171617]: Connection closed by invalid user nvidia 143.14.121.41 port 48378 [preauth]
Nov 29 07:27:17 compute-0 ceph-mon[75050]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e6ea3f529251e1bf48a9e3495cd12d438052cf0ee628de0f8fa48be9ae79284-merged.mount: Deactivated successfully.
Nov 29 07:27:17 compute-0 podman[171778]: 2025-11-29 07:27:17.710342525 +0000 UTC m=+1.942423089 container remove 2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_pascal, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:27:17 compute-0 sudo[171643]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:17 compute-0 systemd[1]: libpod-conmon-2fdfb4e1083a499273ebe0307fba40f7d4175d575c8ce9290f5e6ecbe08a2298.scope: Deactivated successfully.
Nov 29 07:27:17 compute-0 sudo[171878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:17 compute-0 sudo[171878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:17 compute-0 sudo[171878]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:17 compute-0 sudo[171908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:27:17 compute-0 sudo[171908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:17 compute-0 sudo[171908]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:18 compute-0 sudo[171935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:18 compute-0 sudo[171935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:18 compute-0 sudo[171935]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:18 compute-0 sudo[171962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:27:18 compute-0 sudo[171962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:18 compute-0 podman[172039]: 2025-11-29 07:27:18.385348803 +0000 UTC m=+0.024273619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:27:18 compute-0 podman[172039]: 2025-11-29 07:27:18.755723842 +0000 UTC m=+0.394648658 container create 95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:27:18 compute-0 ceph-mon[75050]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:18 compute-0 systemd[1]: Started libpod-conmon-95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb.scope.
Nov 29 07:27:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:19 compute-0 podman[172039]: 2025-11-29 07:27:19.06179902 +0000 UTC m=+0.700723806 container init 95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:27:19 compute-0 podman[172039]: 2025-11-29 07:27:19.069903369 +0000 UTC m=+0.708828135 container start 95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:27:19 compute-0 condescending_wilbur[172072]: 167 167
Nov 29 07:27:19 compute-0 systemd[1]: libpod-95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb.scope: Deactivated successfully.
Nov 29 07:27:19 compute-0 conmon[172072]: conmon 95f3bbb4acd9933d9ae2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb.scope/container/memory.events
Nov 29 07:27:19 compute-0 podman[172039]: 2025-11-29 07:27:19.124251252 +0000 UTC m=+0.763176068 container attach 95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:27:19 compute-0 podman[172039]: 2025-11-29 07:27:19.124616631 +0000 UTC m=+0.763541417 container died 95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d48036a0f5515c2e3d7404958c724b809fdf50074971fd209ef6af7776729b22-merged.mount: Deactivated successfully.
Nov 29 07:27:19 compute-0 sshd-session[171861]: Connection closed by authenticating user root 143.14.121.41 port 48394 [preauth]
Nov 29 07:27:19 compute-0 podman[172039]: 2025-11-29 07:27:19.3675559 +0000 UTC m=+1.006480656 container remove 95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilbur, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:27:19 compute-0 systemd[1]: libpod-conmon-95f3bbb4acd9933d9ae25586f74eb4741336be9308704bd9cddfbb4b39fff7cb.scope: Deactivated successfully.
Nov 29 07:27:19 compute-0 podman[172096]: 2025-11-29 07:27:19.533134924 +0000 UTC m=+0.052230376 container create c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:19 compute-0 systemd[1]: Started libpod-conmon-c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476.scope.
Nov 29 07:27:19 compute-0 podman[172096]: 2025-11-29 07:27:19.501954419 +0000 UTC m=+0.021049901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59983c56b7bad4a5cafed1270199f127d234e5733c049cbfa04c7be3e31655f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59983c56b7bad4a5cafed1270199f127d234e5733c049cbfa04c7be3e31655f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59983c56b7bad4a5cafed1270199f127d234e5733c049cbfa04c7be3e31655f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59983c56b7bad4a5cafed1270199f127d234e5733c049cbfa04c7be3e31655f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:27:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:20 compute-0 podman[172096]: 2025-11-29 07:27:20.07068015 +0000 UTC m=+0.589775642 container init c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:27:20 compute-0 podman[172096]: 2025-11-29 07:27:20.078543812 +0000 UTC m=+0.597639304 container start c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaum, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:27:20 compute-0 podman[172096]: 2025-11-29 07:27:20.102731547 +0000 UTC m=+0.621827059 container attach c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaum, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 07:27:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]: {
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "osd_id": 2,
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "type": "bluestore"
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:     },
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "osd_id": 1,
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "type": "bluestore"
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:     },
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "osd_id": 0,
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:         "type": "bluestore"
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]:     }
Nov 29 07:27:21 compute-0 vigorous_chaum[172112]: }
Nov 29 07:27:21 compute-0 systemd[1]: libpod-c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476.scope: Deactivated successfully.
Nov 29 07:27:21 compute-0 podman[172096]: 2025-11-29 07:27:21.124890696 +0000 UTC m=+1.643986198 container died c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:27:21 compute-0 systemd[1]: libpod-c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476.scope: Consumed 1.050s CPU time.
Nov 29 07:27:21 compute-0 ceph-mon[75050]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-59983c56b7bad4a5cafed1270199f127d234e5733c049cbfa04c7be3e31655f2-merged.mount: Deactivated successfully.
Nov 29 07:27:21 compute-0 podman[172096]: 2025-11-29 07:27:21.577616955 +0000 UTC m=+2.096712417 container remove c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:27:21 compute-0 systemd[1]: libpod-conmon-c8dff5660bdc6b7ba545125f56b5cbe4362c420c38100768811fea8c8f546476.scope: Deactivated successfully.
Nov 29 07:27:21 compute-0 sudo[171962]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:27:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:27:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:27:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:27:21 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 03ff5ab4-9b66-4327-8e95-a9a095c6b7c5 does not exist
Nov 29 07:27:21 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 6b3c87b9-03f8-43be-a07b-5dbb05e1ed51 does not exist
Nov 29 07:27:21 compute-0 sudo[172160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:27:21 compute-0 sudo[172160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:21 compute-0 sudo[172160]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:21 compute-0 sudo[172185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:27:21 compute-0 sudo[172185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:27:21 compute-0 sudo[172185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:27:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:21 compute-0 sshd-session[172115]: Invalid user admin from 143.14.121.41 port 48398
Nov 29 07:27:22 compute-0 sshd-session[172115]: Connection closed by invalid user admin 143.14.121.41 port 48398 [preauth]
Nov 29 07:27:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:27:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:27:22 compute-0 ceph-mon[75050]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:24 compute-0 sshd-session[172210]: Connection closed by authenticating user root 143.14.121.41 port 41250 [preauth]
Nov 29 07:27:25 compute-0 ceph-mon[75050]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:26 compute-0 sshd-session[172212]: Connection closed by authenticating user root 143.14.121.41 port 41252 [preauth]
Nov 29 07:27:27 compute-0 ceph-mon[75050]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:28 compute-0 sshd-session[172216]: Invalid user test from 143.14.121.41 port 41262
Nov 29 07:27:28 compute-0 sshd-session[172216]: Connection closed by invalid user test 143.14.121.41 port 41262 [preauth]
Nov 29 07:27:29 compute-0 ceph-mon[75050]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:29 compute-0 podman[172225]: 2025-11-29 07:27:29.696198577 +0000 UTC m=+0.066187449 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:27:29 compute-0 podman[172224]: 2025-11-29 07:27:29.753432863 +0000 UTC m=+0.116243089 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:27:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:30 compute-0 sshd-session[172222]: Connection closed by authenticating user root 143.14.121.41 port 41270 [preauth]
Nov 29 07:27:31 compute-0 ceph-mon[75050]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:32 compute-0 ceph-mon[75050]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:33 compute-0 sshd-session[172265]: Connection closed by authenticating user root 143.14.121.41 port 41274 [preauth]
Nov 29 07:27:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:34 compute-0 ceph-mon[75050]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:27:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:27:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:27:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:27:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:27:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:27:35 compute-0 sshd-session[172267]: Invalid user guest from 143.14.121.41 port 37042
Nov 29 07:27:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:35 compute-0 sshd-session[172267]: Connection closed by invalid user guest 143.14.121.41 port 37042 [preauth]
Nov 29 07:27:36 compute-0 ceph-mon[75050]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:37 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 07:27:37 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:27:37 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:27:37 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:27:37 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:27:37 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:27:37 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:27:37 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:27:37 compute-0 sshd-session[172273]: Connection closed by authenticating user root 143.14.121.41 port 37044 [preauth]
Nov 29 07:27:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:39 compute-0 ceph-mon[75050]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:40 compute-0 sshd-session[172279]: Invalid user admin from 143.14.121.41 port 37060
Nov 29 07:27:41 compute-0 sshd-session[172279]: Connection closed by invalid user admin 143.14.121.41 port 37060 [preauth]
Nov 29 07:27:41 compute-0 auditd[704]: Audit daemon rotating log files
Nov 29 07:27:41 compute-0 ceph-mon[75050]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:43 compute-0 sshd-session[172281]: Connection closed by authenticating user root 143.14.121.41 port 37062 [preauth]
Nov 29 07:27:43 compute-0 ceph-mon[75050]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:44 compute-0 ceph-mon[75050]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:45 compute-0 sshd-session[172283]: Connection closed by authenticating user root 143.14.121.41 port 53190 [preauth]
Nov 29 07:27:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:46 compute-0 sshd-session[172285]: Invalid user demo from 143.14.121.41 port 53200
Nov 29 07:27:47 compute-0 sshd-session[172285]: Connection closed by invalid user demo 143.14.121.41 port 53200 [preauth]
Nov 29 07:27:47 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 07:27:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:27:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:27:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:27:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:27:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:27:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:27:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:27:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:47 compute-0 ceph-mon[75050]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:48 compute-0 ceph-mon[75050]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:49 compute-0 sshd-session[172293]: Connection closed by authenticating user root 143.14.121.41 port 53212 [preauth]
Nov 29 07:27:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:51 compute-0 ceph-mon[75050]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:51 compute-0 sshd-session[172296]: Connection closed by authenticating user root 143.14.121.41 port 53214 [preauth]
Nov 29 07:27:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:54 compute-0 sshd-session[172298]: Connection closed by authenticating user root 143.14.121.41 port 53218 [preauth]
Nov 29 07:27:54 compute-0 ceph-mon[75050]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:27:55 compute-0 sshd-session[172301]: Connection closed by authenticating user root 143.14.121.41 port 43552 [preauth]
Nov 29 07:27:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:56 compute-0 ceph-mon[75050]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:57 compute-0 ceph-mon[75050]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:57 compute-0 sshd-session[172303]: Connection closed by authenticating user root 143.14.121.41 port 43554 [preauth]
Nov 29 07:27:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:59 compute-0 ceph-mon[75050]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:27:59 compute-0 sshd-session[172305]: Connection closed by authenticating user root 143.14.121.41 port 43562 [preauth]
Nov 29 07:27:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:27:59.739 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:27:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:27:59.741 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:27:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:27:59.741 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:27:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:00 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 07:28:00 compute-0 podman[172790]: 2025-11-29 07:28:00.747202629 +0000 UTC m=+0.093368760 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 07:28:00 compute-0 podman[172780]: 2025-11-29 07:28:00.806224852 +0000 UTC m=+0.152979918 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:01 compute-0 ceph-mon[75050]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:01 compute-0 sshd-session[172510]: Connection closed by authenticating user root 143.14.121.41 port 43568 [preauth]
Nov 29 07:28:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:03 compute-0 ceph-mon[75050]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:03 compute-0 sshd-session[173434]: Invalid user vpn from 143.14.121.41 port 39662
Nov 29 07:28:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:04 compute-0 sshd-session[173434]: Connection closed by invalid user vpn 143.14.121.41 port 39662 [preauth]
Nov 29 07:28:05 compute-0 ceph-mon[75050]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:28:05
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.rgw.root']
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:05 compute-0 sshd-session[174473]: Connection closed by authenticating user root 143.14.121.41 port 39672 [preauth]
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:28:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:28:07 compute-0 ceph-mon[75050]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:08 compute-0 sshd-session[175194]: Invalid user deploy from 143.14.121.41 port 39682
Nov 29 07:28:08 compute-0 sshd-session[175194]: Connection closed by invalid user deploy 143.14.121.41 port 39682 [preauth]
Nov 29 07:28:09 compute-0 ceph-mon[75050]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:11 compute-0 sshd-session[176525]: Connection closed by authenticating user root 143.14.121.41 port 39694 [preauth]
Nov 29 07:28:11 compute-0 ceph-mon[75050]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:13 compute-0 ceph-mon[75050]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:14 compute-0 sshd-session[177628]: Connection closed by authenticating user root 143.14.121.41 port 39698 [preauth]
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:28:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:28:15 compute-0 ceph-mon[75050]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:15 compute-0 sshd-session[179237]: Connection closed by authenticating user root 143.14.121.41 port 50684 [preauth]
Nov 29 07:28:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:17 compute-0 ceph-mon[75050]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:18 compute-0 sshd-session[180113]: Connection closed by authenticating user root 143.14.121.41 port 50700 [preauth]
Nov 29 07:28:19 compute-0 ceph-mon[75050]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:21 compute-0 ceph-mon[75050]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:21 compute-0 sshd-session[181694]: Connection closed by authenticating user root 143.14.121.41 port 50710 [preauth]
Nov 29 07:28:21 compute-0 sudo[182832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:21 compute-0 sudo[182832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:21 compute-0 sudo[182832]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:21 compute-0 sudo[182899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:21 compute-0 sudo[182899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:21 compute-0 sudo[182899]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:22 compute-0 sudo[182980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:22 compute-0 sudo[182980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:22 compute-0 sudo[182980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:22 compute-0 sudo[183050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:28:22 compute-0 sudo[183050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:22 compute-0 sudo[183050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:28:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:28:23 compute-0 sshd-session[183022]: Invalid user admin from 143.14.121.41 port 56188
Nov 29 07:28:23 compute-0 sshd-session[183022]: Connection closed by invalid user admin 143.14.121.41 port 56188 [preauth]
Nov 29 07:28:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:24 compute-0 ceph-mon[75050]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:28:24 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:28:24 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 504de303-cdaf-4459-909e-092fc1e7c1d6 does not exist
Nov 29 07:28:24 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 30b5e2aa-8eb3-4e30-b999-155d7673f7e7 does not exist
Nov 29 07:28:24 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 8e3c660e-c2bf-4910-8c19-3e821c82f00c does not exist
Nov 29 07:28:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:28:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:28:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:28:24 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:28:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:28:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:24 compute-0 sudo[184470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:24 compute-0 sudo[184470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:24 compute-0 sudo[184470]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:24 compute-0 sudo[184536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:24 compute-0 sudo[184536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:24 compute-0 sudo[184536]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:24 compute-0 sudo[184596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:24 compute-0 sudo[184596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:24 compute-0 sudo[184596]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:24 compute-0 sudo[184658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:28:24 compute-0 sudo[184658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:25 compute-0 podman[184945]: 2025-11-29 07:28:25.095760815 +0000 UTC m=+0.063729485 container create c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:25 compute-0 podman[184945]: 2025-11-29 07:28:25.062686292 +0000 UTC m=+0.030655022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:25 compute-0 systemd[1]: Started libpod-conmon-c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe.scope.
Nov 29 07:28:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:25 compute-0 podman[184945]: 2025-11-29 07:28:25.221824207 +0000 UTC m=+0.189792877 container init c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 07:28:25 compute-0 podman[184945]: 2025-11-29 07:28:25.231874687 +0000 UTC m=+0.199843367 container start c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:25 compute-0 podman[184945]: 2025-11-29 07:28:25.23626218 +0000 UTC m=+0.204230870 container attach c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:28:25 compute-0 systemd[1]: libpod-c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe.scope: Deactivated successfully.
Nov 29 07:28:25 compute-0 funny_haibt[185028]: 167 167
Nov 29 07:28:25 compute-0 conmon[185028]: conmon c5fbf56514ff5a4ebc81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe.scope/container/memory.events
Nov 29 07:28:25 compute-0 podman[184945]: 2025-11-29 07:28:25.245511829 +0000 UTC m=+0.213480529 container died c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6c8be2e793b1fcd95883e3218698ead29edf377ba05754b6c13dc19ba735ca3-merged.mount: Deactivated successfully.
Nov 29 07:28:25 compute-0 podman[184945]: 2025-11-29 07:28:25.296563876 +0000 UTC m=+0.264532516 container remove c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:25 compute-0 systemd[1]: libpod-conmon-c5fbf56514ff5a4ebc810919a8c2f0b8908a45e842c7701227b0abd12dc078fe.scope: Deactivated successfully.
Nov 29 07:28:25 compute-0 ceph-mon[75050]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:28:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:28:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:28:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:28:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:25 compute-0 podman[185195]: 2025-11-29 07:28:25.516398508 +0000 UTC m=+0.055174355 container create 6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:28:25 compute-0 systemd[1]: Started libpod-conmon-6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f.scope.
Nov 29 07:28:25 compute-0 podman[185195]: 2025-11-29 07:28:25.485786128 +0000 UTC m=+0.024562065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f86c2d61d55465262cea1f3842353ca558e1165b1a5f8b5a4aeafe1584b66c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f86c2d61d55465262cea1f3842353ca558e1165b1a5f8b5a4aeafe1584b66c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f86c2d61d55465262cea1f3842353ca558e1165b1a5f8b5a4aeafe1584b66c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f86c2d61d55465262cea1f3842353ca558e1165b1a5f8b5a4aeafe1584b66c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f86c2d61d55465262cea1f3842353ca558e1165b1a5f8b5a4aeafe1584b66c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:25 compute-0 podman[185195]: 2025-11-29 07:28:25.603173146 +0000 UTC m=+0.141949023 container init 6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:28:25 compute-0 podman[185195]: 2025-11-29 07:28:25.610658049 +0000 UTC m=+0.149433896 container start 6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:25 compute-0 podman[185195]: 2025-11-29 07:28:25.614538129 +0000 UTC m=+0.153314066 container attach 6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:25 compute-0 sshd-session[184259]: Invalid user alan from 143.14.121.41 port 56196
Nov 29 07:28:25 compute-0 sshd-session[184259]: Connection closed by invalid user alan 143.14.121.41 port 56196 [preauth]
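
The pair of sshd-session lines above is one of many probes from 143.14.121.41 scattered through this capture (alan, admin, username, rema, root) — routine brute-force noise rather than anything cluster-related. A minimal sketch for tallying such attempts per source address from a saved journal excerpt; the filename journal.log and the regex are assumptions, not part of the log:

    import re
    from collections import Counter

    # Matches both "Invalid user alan from 143.14.121.41 port 56196" and
    # "Connection closed by authenticating user root 143.14.121.41 port 56218 [preauth]".
    PAT = re.compile(
        r"sshd-session\[\d+\]: (?:Invalid user (?P<user>\S+) from|"
        r"Connection closed by authenticating user (?P<auth>\S+)) (?P<ip>\d+\.\d+\.\d+\.\d+)"
    )

    hits = Counter()
    with open("journal.log") as fh:  # assumed export of this journal
        for line in fh:
            m = PAT.search(line)
            if m:
                hits[(m["ip"], m["user"] or m["auth"])] += 1

    for (ip, user), n in hits.most_common():
        print(f"{ip} {user}: {n} attempt(s)")
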
Nov 29 07:28:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:26 compute-0 stoic_proskuriakova[185270]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:28:26 compute-0 stoic_proskuriakova[185270]: --> relative data size: 1.0
Nov 29 07:28:26 compute-0 stoic_proskuriakova[185270]: --> All data devices are unavailable
Nov 29 07:28:26 compute-0 systemd[1]: libpod-6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f.scope: Deactivated successfully.
Nov 29 07:28:26 compute-0 systemd[1]: libpod-6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f.scope: Consumed 1.127s CPU time.
Nov 29 07:28:26 compute-0 podman[185195]: 2025-11-29 07:28:26.806074969 +0000 UTC m=+1.344850926 container died 6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f86c2d61d55465262cea1f3842353ca558e1165b1a5f8b5a4aeafe1584b66c-merged.mount: Deactivated successfully.
Nov 29 07:28:26 compute-0 podman[185195]: 2025-11-29 07:28:26.893560176 +0000 UTC m=+1.432336023 container remove 6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:28:26 compute-0 systemd[1]: libpod-conmon-6e895c53d000b7c07673d7722f828300ac2635b7dad049b6155d511cbc92820f.scope: Deactivated successfully.
Nov 29 07:28:26 compute-0 sudo[184658]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:27 compute-0 sudo[186066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:27 compute-0 sudo[186066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:27 compute-0 sudo[186066]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:27 compute-0 sudo[186146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:27 compute-0 sudo[186146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:27 compute-0 sudo[186146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:27 compute-0 sudo[186208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:27 compute-0 sudo[186208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:27 compute-0 sudo[186208]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:27 compute-0 sudo[186275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:28:27 compute-0 sudo[186275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:27 compute-0 sshd-session[185645]: Invalid user admin from 143.14.121.41 port 56210
Nov 29 07:28:27 compute-0 ceph-mon[75050]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:27 compute-0 sshd-session[185645]: Connection closed by invalid user admin 143.14.121.41 port 56210 [preauth]
Nov 29 07:28:27 compute-0 podman[186523]: 2025-11-29 07:28:27.659974209 +0000 UTC m=+0.052088785 container create 319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:28:27 compute-0 systemd[1]: Started libpod-conmon-319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1.scope.
Nov 29 07:28:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:27 compute-0 podman[186523]: 2025-11-29 07:28:27.639194423 +0000 UTC m=+0.031309019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:27 compute-0 podman[186523]: 2025-11-29 07:28:27.737696054 +0000 UTC m=+0.129810660 container init 319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:28:27 compute-0 podman[186523]: 2025-11-29 07:28:27.751172742 +0000 UTC m=+0.143287318 container start 319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:28:27 compute-0 podman[186523]: 2025-11-29 07:28:27.755419182 +0000 UTC m=+0.147533758 container attach 319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:28:27 compute-0 happy_kare[186601]: 167 167
Nov 29 07:28:27 compute-0 systemd[1]: libpod-319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1.scope: Deactivated successfully.
Nov 29 07:28:27 compute-0 podman[186523]: 2025-11-29 07:28:27.757428464 +0000 UTC m=+0.149543050 container died 319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aad6547edc3ec78a1b4de51c03d31751b487acdafa6d2f480b927c899ef4417-merged.mount: Deactivated successfully.
Nov 29 07:28:27 compute-0 podman[186523]: 2025-11-29 07:28:27.792659332 +0000 UTC m=+0.184773908 container remove 319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:28:27 compute-0 systemd[1]: libpod-conmon-319d5ea150da789de5ccf28c112e9a4832e82af7bb73f9f288cf848b25c1f6d1.scope: Deactivated successfully.
Nov 29 07:28:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:27 compute-0 podman[186765]: 2025-11-29 07:28:27.962127695 +0000 UTC m=+0.051002698 container create 703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:28 compute-0 systemd[1]: Started libpod-conmon-703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354.scope.
Nov 29 07:28:28 compute-0 podman[186765]: 2025-11-29 07:28:27.935524818 +0000 UTC m=+0.024399811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c5544fef9a06711ffe6e28734db6fd5434ed4a150d109e6843115c0a6a2102/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c5544fef9a06711ffe6e28734db6fd5434ed4a150d109e6843115c0a6a2102/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c5544fef9a06711ffe6e28734db6fd5434ed4a150d109e6843115c0a6a2102/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c5544fef9a06711ffe6e28734db6fd5434ed4a150d109e6843115c0a6a2102/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:28 compute-0 podman[186765]: 2025-11-29 07:28:28.066018585 +0000 UTC m=+0.154893588 container init 703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:28:28 compute-0 podman[186765]: 2025-11-29 07:28:28.074495904 +0000 UTC m=+0.163370887 container start 703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:28:28 compute-0 podman[186765]: 2025-11-29 07:28:28.078101326 +0000 UTC m=+0.166976329 container attach 703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lovelace, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]: {
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:     "0": [
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:         {
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "devices": [
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "/dev/loop3"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             ],
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_name": "ceph_lv0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_size": "21470642176",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "name": "ceph_lv0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "tags": {
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cluster_name": "ceph",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.crush_device_class": "",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.encrypted": "0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osd_id": "0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.type": "block",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.vdo": "0"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             },
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "type": "block",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "vg_name": "ceph_vg0"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:         }
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:     ],
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:     "1": [
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:         {
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "devices": [
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "/dev/loop4"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             ],
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_name": "ceph_lv1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_size": "21470642176",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "name": "ceph_lv1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "tags": {
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cluster_name": "ceph",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.crush_device_class": "",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.encrypted": "0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osd_id": "1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.type": "block",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.vdo": "0"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             },
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "type": "block",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "vg_name": "ceph_vg1"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:         }
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:     ],
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:     "2": [
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:         {
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "devices": [
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "/dev/loop5"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             ],
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_name": "ceph_lv2",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_size": "21470642176",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "name": "ceph_lv2",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "tags": {
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.cluster_name": "ceph",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.crush_device_class": "",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.encrypted": "0",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osd_id": "2",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.type": "block",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:                 "ceph.vdo": "0"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             },
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "type": "block",
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:             "vg_name": "ceph_vg2"
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:         }
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]:     ]
Nov 29 07:28:28 compute-0 nostalgic_lovelace[186840]: }
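
The JSON block above is what ceph-volume lvm list --format json returns: a map from OSD id to a list of LV records, with the authoritative metadata duplicated in each record's tags dict. A minimal sketch that flattens it into an osd -> device summary, assuming the payload was saved to a file (lvm_list.json is an assumed name):

    import json

    # lvm_list.json is assumed to hold the object printed above:
    # {"0": [{...}], "1": [{...}], "2": [{...}]}
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} pv={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")
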
Nov 29 07:28:28 compute-0 systemd[1]: libpod-703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354.scope: Deactivated successfully.
Nov 29 07:28:28 compute-0 podman[187378]: 2025-11-29 07:28:28.985987639 +0000 UTC m=+0.028644380 container died 703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c5544fef9a06711ffe6e28734db6fd5434ed4a150d109e6843115c0a6a2102-merged.mount: Deactivated successfully.
Nov 29 07:28:29 compute-0 podman[187378]: 2025-11-29 07:28:29.040071564 +0000 UTC m=+0.082728285 container remove 703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:28:29 compute-0 systemd[1]: libpod-conmon-703e2febcc3bcccb13649ae6cd0657c06f37d75466857f5c7a1ef193678d2354.scope: Deactivated successfully.
Nov 29 07:28:29 compute-0 sudo[186275]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:29 compute-0 sudo[187476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:29 compute-0 sudo[187476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:29 compute-0 sudo[187476]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:29 compute-0 sudo[187544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:28:29 compute-0 sudo[187544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:29 compute-0 sudo[187544]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:29 compute-0 sudo[187608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:29 compute-0 sudo[187608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:29 compute-0 sudo[187608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:29 compute-0 sshd-session[186706]: Connection closed by authenticating user root 143.14.121.41 port 56218 [preauth]
Nov 29 07:28:29 compute-0 sudo[187665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:28:29 compute-0 sudo[187665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:29 compute-0 podman[187917]: 2025-11-29 07:28:29.689458707 +0000 UTC m=+0.026931055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:31 compute-0 sshd-session[187858]: Connection closed by authenticating user root 143.14.121.41 port 56232 [preauth]
Nov 29 07:28:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:33 compute-0 sshd-session[189090]: Connection closed by authenticating user root 143.14.121.41 port 46882 [preauth]
Nov 29 07:28:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:33 compute-0 ceph-mon[75050]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:33 compute-0 podman[187917]: 2025-11-29 07:28:33.942537916 +0000 UTC m=+4.280010224 container create ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:28:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:34 compute-0 systemd[1]: Started libpod-conmon-ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803.scope.
Nov 29 07:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:34 compute-0 sshd-session[189837]: Invalid user username from 143.14.121.41 port 46894
Nov 29 07:28:35 compute-0 sshd-session[189837]: Connection closed by invalid user username 143.14.121.41 port 46894 [preauth]
Nov 29 07:28:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:28:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:28:35 compute-0 podman[187917]: 2025-11-29 07:28:35.628417902 +0000 UTC m=+5.965890300 container init ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:28:35 compute-0 podman[187917]: 2025-11-29 07:28:35.641053524 +0000 UTC m=+5.978525862 container start ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:35 compute-0 hungry_hypatia[189949]: 167 167
Nov 29 07:28:35 compute-0 systemd[1]: libpod-ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803.scope: Deactivated successfully.
Nov 29 07:28:35 compute-0 ceph-mon[75050]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:35 compute-0 ceph-mon[75050]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:35 compute-0 ceph-mon[75050]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:36 compute-0 podman[187917]: 2025-11-29 07:28:36.575027753 +0000 UTC m=+6.912500141 container attach ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:28:36 compute-0 podman[187917]: 2025-11-29 07:28:36.576362098 +0000 UTC m=+6.913834476 container died ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:28:37 compute-0 sshd-session[189952]: Connection closed by authenticating user root 143.14.121.41 port 46910 [preauth]
Nov 29 07:28:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:39 compute-0 sshd-session[189970]: Connection closed by authenticating user root 143.14.121.41 port 46914 [preauth]
Nov 29 07:28:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c89b71a649f5123eb43d691d8c6b9d1164e8127ae567d79ff1b6979bc9fe0ab-merged.mount: Deactivated successfully.
Nov 29 07:28:40 compute-0 ceph-mon[75050]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:44 compute-0 sshd-session[189973]: Connection closed by authenticating user root 143.14.121.41 port 46926 [preauth]
Nov 29 07:28:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:45 compute-0 podman[187917]: 2025-11-29 07:28:45.556534041 +0000 UTC m=+15.894006339 container remove ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:28:45 compute-0 ceph-mon[75050]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:45 compute-0 ceph-mon[75050]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:45 compute-0 ceph-mon[75050]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:45 compute-0 systemd[1]: libpod-conmon-ff052cd90f17a5a1090a53a0012e24a27046f34f6385c2af8e33ddd28678e803.scope: Deactivated successfully.
Nov 29 07:28:45 compute-0 podman[189010]: 2025-11-29 07:28:45.675622074 +0000 UTC m=+14.043460218 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:28:45 compute-0 podman[189018]: 2025-11-29 07:28:45.693848462 +0000 UTC m=+14.056464981 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
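
Both health_status lines above embed the container's config_data as a Python-style dict literal (single quotes, bare True), so it can be recovered with ast.literal_eval once the payload is cut out of the line. A sketch under that assumption; the brace matching is deliberately naive and only meant for lines shaped like these two:

    import ast

    def config_data(line: str) -> dict:
        """Extract the config_data={...} payload from a podman health_status line."""
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # Python literal, not JSON: single quotes and True/False.
                    return ast.literal_eval(line[start : i + 1])
        raise ValueError("unbalanced config_data payload")

    # config_data(line)["volumes"] then lists the bind mounts seen above.
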
Nov 29 07:28:45 compute-0 podman[190004]: 2025-11-29 07:28:45.734513058 +0000 UTC m=+0.038099350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:28:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:45 compute-0 sshd-session[189975]: Connection closed by authenticating user root 143.14.121.41 port 42562 [preauth]
Nov 29 07:28:46 compute-0 podman[190004]: 2025-11-29 07:28:46.448711736 +0000 UTC m=+0.752297978 container create 8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:28:46 compute-0 systemd[1]: Started libpod-conmon-8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f.scope.
Nov 29 07:28:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3ce57367aade5f835af885d0f0b561e477e18c7b0a3629bc5255953c0e4811/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3ce57367aade5f835af885d0f0b561e477e18c7b0a3629bc5255953c0e4811/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3ce57367aade5f835af885d0f0b561e477e18c7b0a3629bc5255953c0e4811/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3ce57367aade5f835af885d0f0b561e477e18c7b0a3629bc5255953c0e4811/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:28:46 compute-0 podman[190004]: 2025-11-29 07:28:46.703084786 +0000 UTC m=+1.006670998 container init 8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:28:46 compute-0 podman[190004]: 2025-11-29 07:28:46.712483432 +0000 UTC m=+1.016069644 container start 8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:28:46 compute-0 ceph-mon[75050]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:46 compute-0 podman[190004]: 2025-11-29 07:28:46.781934053 +0000 UTC m=+1.085520255 container attach 8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:28:46 compute-0 ceph-mon[75050]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:47 compute-0 infallible_keller[190024]: {
Nov 29 07:28:47 compute-0 infallible_keller[190024]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "osd_id": 2,
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "type": "bluestore"
Nov 29 07:28:47 compute-0 infallible_keller[190024]:     },
Nov 29 07:28:47 compute-0 infallible_keller[190024]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "osd_id": 1,
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "type": "bluestore"
Nov 29 07:28:47 compute-0 infallible_keller[190024]:     },
Nov 29 07:28:47 compute-0 infallible_keller[190024]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "osd_id": 0,
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:28:47 compute-0 infallible_keller[190024]:         "type": "bluestore"
Nov 29 07:28:47 compute-0 infallible_keller[190024]:     }
Nov 29 07:28:47 compute-0 infallible_keller[190024]: }
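
ceph-volume ... raw list --format json keys its output by OSD UUID rather than OSD id, so it pairs naturally with the lvm list payload captured a few seconds earlier. A cross-check sketch, with both filenames assumed as before:

    import json

    with open("lvm_list.json") as fh:   # the "lvm list --format json" object above
        lvm = json.load(fh)
    with open("raw_list.json") as fh:   # the "raw list --format json" object above
        raw = json.load(fh)

    # osd_fsid -> osd_id as recorded in the LV tags.
    by_fsid = {lv["tags"]["ceph.osd_fsid"]: int(osd_id)
               for osd_id, lvs in lvm.items() for lv in lvs}

    # The raw listing should agree on the id and report bluestore throughout.
    for osd_uuid, rec in raw.items():
        assert rec["osd_id"] == by_fsid[osd_uuid], osd_uuid
        assert rec["type"] == "bluestore"
        print(f"osd.{rec['osd_id']} ok: {rec['device']}")
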
Nov 29 07:28:47 compute-0 systemd[1]: libpod-8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f.scope: Deactivated successfully.
Nov 29 07:28:47 compute-0 systemd[1]: libpod-8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f.scope: Consumed 1.059s CPU time.
Nov 29 07:28:47 compute-0 conmon[190024]: conmon 8c906194e7213a9b05a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f.scope/container/memory.events
Nov 29 07:28:47 compute-0 podman[190004]: 2025-11-29 07:28:47.767791404 +0000 UTC m=+2.071377606 container died 8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f3ce57367aade5f835af885d0f0b561e477e18c7b0a3629bc5255953c0e4811-merged.mount: Deactivated successfully.
Nov 29 07:28:47 compute-0 podman[190004]: 2025-11-29 07:28:47.890626754 +0000 UTC m=+2.194212966 container remove 8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:28:47 compute-0 systemd[1]: libpod-conmon-8c906194e7213a9b05a78f4b0cc50a634f48505b46c85350e475afdd98f3a86f.scope: Deactivated successfully.
Nov 29 07:28:47 compute-0 sudo[187665]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:28:47 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:28:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:28:47 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:28:47 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev dfe5fc64-bc82-40f1-be16-93a1a6cf5861 does not exist
Nov 29 07:28:47 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev d942370b-8f4d-48b0-ad08-ce1f17a4aeca does not exist
Nov 29 07:28:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:48 compute-0 sudo[190070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:28:48 compute-0 sudo[190070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:48 compute-0 sudo[190070]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:48 compute-0 sudo[190095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:28:48 compute-0 sudo[190095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:28:48 compute-0 sudo[190095]: pam_unix(sudo:session): session closed for user root
Nov 29 07:28:48 compute-0 sshd-session[190020]: Invalid user rema from 143.14.121.41 port 42564
Nov 29 07:28:48 compute-0 sshd-session[190020]: Connection closed by invalid user rema 143.14.121.41 port 42564 [preauth]
Nov 29 07:28:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:28:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:28:50 compute-0 ceph-mon[75050]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:50 compute-0 sshd-session[190120]: Connection closed by authenticating user root 143.14.121.41 port 42572 [preauth]
Nov 29 07:28:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:52 compute-0 ceph-mon[75050]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:53 compute-0 sshd-session[190125]: Connection closed by authenticating user root 143.14.121.41 port 42576 [preauth]
Nov 29 07:28:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:55 compute-0 sshd-session[190127]: Invalid user test from 143.14.121.41 port 50698
Nov 29 07:28:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:28:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:56 compute-0 sshd-session[190127]: Connection closed by invalid user test 143.14.121.41 port 50698 [preauth]
Nov 29 07:28:56 compute-0 ceph-mon[75050]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:57 compute-0 ceph-mon[75050]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:57 compute-0 ceph-mon[75050]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:58 compute-0 sshd-session[190129]: Invalid user odroid from 143.14.121.41 port 50708
Nov 29 07:28:58 compute-0 sshd-session[190129]: Connection closed by invalid user odroid 143.14.121.41 port 50708 [preauth]
Nov 29 07:28:59 compute-0 ceph-mon[75050]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:28:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:28:59.741 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:28:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:28:59.743 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:28:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:28:59.743 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:28:59 compute-0 sshd-session[190139]: Invalid user ftpuser from 143.14.121.41 port 50722
Nov 29 07:28:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:00 compute-0 sshd-session[190139]: Connection closed by invalid user ftpuser 143.14.121.41 port 50722 [preauth]
Nov 29 07:29:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:01 compute-0 ceph-mon[75050]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:03 compute-0 ceph-mon[75050]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:03 compute-0 sshd-session[190141]: Connection closed by authenticating user root 143.14.121.41 port 50730 [preauth]
Nov 29 07:29:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:05 compute-0 ceph-mon[75050]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:29:05
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'vms', 'default.rgw.control']
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:06 compute-0 sshd-session[190143]: Invalid user ftpuser from 143.14.121.41 port 46956
Nov 29 07:29:06 compute-0 sshd-session[190143]: Connection closed by invalid user ftpuser 143.14.121.41 port 46956 [preauth]
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:29:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:29:07 compute-0 ceph-mon[75050]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:08 compute-0 sshd-session[190145]: Connection closed by authenticating user root 143.14.121.41 port 46958 [preauth]
Nov 29 07:29:09 compute-0 ceph-mon[75050]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:09 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 29 07:29:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 07:29:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 07:29:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 07:29:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 07:29:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 07:29:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 07:29:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 07:29:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:10 compute-0 sshd-session[190152]: Connection closed by authenticating user root 143.14.121.41 port 46964 [preauth]
Nov 29 07:29:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:11 compute-0 ceph-mon[75050]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:12 compute-0 sshd-session[190156]: Invalid user oracle from 143.14.121.41 port 46972
Nov 29 07:29:12 compute-0 sshd-session[190156]: Connection closed by invalid user oracle 143.14.121.41 port 46972 [preauth]
Nov 29 07:29:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:29:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:29:14 compute-0 ceph-mon[75050]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:16 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 07:29:16 compute-0 podman[190161]: 2025-11-29 07:29:16.761416406 +0000 UTC m=+0.105007974 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 07:29:16 compute-0 podman[190160]: 2025-11-29 07:29:16.823659349 +0000 UTC m=+0.168530621 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 07:29:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:17 compute-0 sshd-session[190158]: Connection closed by authenticating user root 143.14.121.41 port 57984 [preauth]
Nov 29 07:29:17 compute-0 ceph-mon[75050]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:19 compute-0 sshd-session[190203]: Connection closed by authenticating user root 143.14.121.41 port 57998 [preauth]
Nov 29 07:29:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:21 compute-0 sshd-session[190205]: Invalid user aaa from 143.14.121.41 port 58004
Nov 29 07:29:21 compute-0 ceph-mon[75050]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:21 compute-0 ceph-mon[75050]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:21 compute-0 sshd-session[190205]: Connection closed by invalid user aaa 143.14.121.41 port 58004 [preauth]
Nov 29 07:29:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:23 compute-0 sshd-session[190211]: Invalid user debian from 143.14.121.41 port 44450
Nov 29 07:29:23 compute-0 sshd-session[190211]: Connection closed by invalid user debian 143.14.121.41 port 44450 [preauth]
Nov 29 07:29:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:24 compute-0 ceph-mon[75050]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:26 compute-0 sshd-session[190213]: Invalid user user from 143.14.121.41 port 44462
Nov 29 07:29:26 compute-0 sshd-session[190213]: Connection closed by invalid user user 143.14.121.41 port 44462 [preauth]
Nov 29 07:29:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:28 compute-0 sshd-session[190215]: Connection closed by authenticating user root 143.14.121.41 port 44478 [preauth]
Nov 29 07:29:28 compute-0 ceph-mon[75050]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:28 compute-0 ceph-mon[75050]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:31 compute-0 sshd-session[190217]: Invalid user vagrant from 143.14.121.41 port 44486
Nov 29 07:29:31 compute-0 ceph-mon[75050]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:31 compute-0 ceph-mon[75050]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:31 compute-0 groupadd[190222]: group added to /etc/group: name=dnsmasq, GID=991
Nov 29 07:29:31 compute-0 groupadd[190222]: group added to /etc/gshadow: name=dnsmasq
Nov 29 07:29:31 compute-0 groupadd[190222]: new group: name=dnsmasq, GID=991
Nov 29 07:29:31 compute-0 sshd-session[190217]: Connection closed by invalid user vagrant 143.14.121.41 port 44486 [preauth]
Nov 29 07:29:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:32 compute-0 useradd[190230]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 29 07:29:32 compute-0 ceph-mon[75050]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:32 compute-0 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Nov 29 07:29:32 compute-0 dbus-broker-launch[774]: Noticed file-system modification, trigger reload.
Nov 29 07:29:32 compute-0 sshd-session[190219]: Connection closed by authenticating user root 143.14.121.41 port 44492 [preauth]
Nov 29 07:29:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:34 compute-0 sshd-session[190240]: Connection closed by authenticating user root 143.14.121.41 port 38134 [preauth]
Nov 29 07:29:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:34 compute-0 ceph-mon[75050]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:29:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:29:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:36 compute-0 ceph-mon[75050]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:36 compute-0 groupadd[190247]: group added to /etc/group: name=clevis, GID=990
Nov 29 07:29:37 compute-0 groupadd[190247]: group added to /etc/gshadow: name=clevis
Nov 29 07:29:37 compute-0 groupadd[190247]: new group: name=clevis, GID=990
Nov 29 07:29:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:38 compute-0 sshd-session[190243]: Connection closed by authenticating user root 143.14.121.41 port 38142 [preauth]
Nov 29 07:29:39 compute-0 ceph-mon[75050]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:39 compute-0 useradd[190254]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 29 07:29:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:40 compute-0 sshd-session[190255]: Connection closed by authenticating user root 143.14.121.41 port 38158 [preauth]
Nov 29 07:29:41 compute-0 ceph-mon[75050]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:42 compute-0 sshd-session[190257]: Invalid user test from 143.14.121.41 port 38178
Nov 29 07:29:43 compute-0 sshd-session[190257]: Connection closed by invalid user test 143.14.121.41 port 38178 [preauth]
Nov 29 07:29:43 compute-0 usermod[190268]: add 'clevis' to group 'tss'
Nov 29 07:29:43 compute-0 usermod[190268]: add 'clevis' to shadow group 'tss'
Nov 29 07:29:43 compute-0 ceph-mon[75050]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:45 compute-0 sshd-session[190269]: Connection closed by authenticating user root 143.14.121.41 port 34024 [preauth]
Nov 29 07:29:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:45 compute-0 ceph-mon[75050]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:45 compute-0 ceph-mon[75050]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:46 compute-0 sshd-session[190277]: Connection closed by authenticating user root 143.14.121.41 port 34028 [preauth]
Nov 29 07:29:47 compute-0 ceph-mon[75050]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:47 compute-0 podman[190284]: 2025-11-29 07:29:47.301297433 +0000 UTC m=+0.068257930 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 07:29:47 compute-0 podman[190283]: 2025-11-29 07:29:47.345841991 +0000 UTC m=+0.111582197 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 07:29:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.403846) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401388404009, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2063, "num_deletes": 251, "total_data_size": 3541392, "memory_usage": 3590000, "flush_reason": "Manual Compaction"}
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 07:29:48 compute-0 sudo[190340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:48 compute-0 sudo[190340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:48 compute-0 sudo[190340]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:48 compute-0 sudo[190365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:48 compute-0 sudo[190365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:48 compute-0 sudo[190365]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401388630613, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3476601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9822, "largest_seqno": 11884, "table_properties": {"data_size": 3467179, "index_size": 5980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18439, "raw_average_key_size": 19, "raw_value_size": 3448441, "raw_average_value_size": 3680, "num_data_blocks": 271, "num_entries": 937, "num_filter_entries": 937, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401102, "oldest_key_time": 1764401102, "file_creation_time": 1764401388, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 226849 microseconds, and 27156 cpu microseconds.
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.630703) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3476601 bytes OK
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.630728) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.633065) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.633083) EVENT_LOG_v1 {"time_micros": 1764401388633077, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.633101) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3532724, prev total WAL file size 3532724, number of live WAL files 2.
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.634340) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3395KB)], [26(6954KB)]
Nov 29 07:29:48 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401388634472, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10597641, "oldest_snapshot_seqno": -1}
Nov 29 07:29:48 compute-0 sudo[190390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:48 compute-0 sudo[190390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:48 compute-0 sudo[190390]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:48 compute-0 sshd-session[190279]: Invalid user debian from 143.14.121.41 port 34036
Nov 29 07:29:48 compute-0 sudo[190415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:29:48 compute-0 sudo[190415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:48 compute-0 sshd-session[190279]: Connection closed by invalid user debian 143.14.121.41 port 34036 [preauth]
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3815 keys, 8629923 bytes, temperature: kUnknown
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401389098571, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8629923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8600003, "index_size": 19250, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92195, "raw_average_key_size": 24, "raw_value_size": 8526717, "raw_average_value_size": 2235, "num_data_blocks": 834, "num_entries": 3815, "num_filter_entries": 3815, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764401388, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.100247) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8629923 bytes
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.103032) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.8 rd, 18.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.8 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4329, records dropped: 514 output_compression: NoCompression
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.103085) EVENT_LOG_v1 {"time_micros": 1764401389103063, "job": 10, "event": "compaction_finished", "compaction_time_micros": 465437, "compaction_time_cpu_micros": 46462, "output_level": 6, "num_output_files": 1, "total_output_size": 8629923, "num_input_records": 4329, "num_output_records": 3815, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401389103710, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401389104792, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:48.634145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.104887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.104897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.104901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.104905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:49 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:29:49.104910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:29:49 compute-0 sudo[190415]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:29:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:29:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:29:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:29:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:29:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev bc97705b-b190-401b-81c6-63e7654a19ce does not exist
Nov 29 07:29:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5e58b980-a7f5-47d6-a941-5622f7a9adbd does not exist
Nov 29 07:29:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev deae2cb1-3b40-4384-ad28-1c829a1e5350 does not exist
Nov 29 07:29:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:29:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:29:49 compute-0 ceph-mon[75050]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:29:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:29:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:29:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:29:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:49 compute-0 sudo[190473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:49 compute-0 sudo[190473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:49 compute-0 sudo[190473]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:49 compute-0 sudo[190498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:49 compute-0 sudo[190498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:49 compute-0 sudo[190498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:49 compute-0 sudo[190523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:49 compute-0 sudo[190523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:49 compute-0 sudo[190523]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:49 compute-0 sudo[190548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:29:49 compute-0 sudo[190548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:50 compute-0 podman[190613]: 2025-11-29 07:29:50.317133622 +0000 UTC m=+0.118104838 container create 210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:29:50 compute-0 podman[190613]: 2025-11-29 07:29:50.238226533 +0000 UTC m=+0.039197809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:50 compute-0 systemd[1]: Started libpod-conmon-210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573.scope.
Nov 29 07:29:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:50 compute-0 podman[190613]: 2025-11-29 07:29:50.45739337 +0000 UTC m=+0.258364586 container init 210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:29:50 compute-0 podman[190613]: 2025-11-29 07:29:50.467655239 +0000 UTC m=+0.268626425 container start 210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:29:50 compute-0 podman[190613]: 2025-11-29 07:29:50.47185812 +0000 UTC m=+0.272829306 container attach 210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:29:50 compute-0 sweet_keldysh[190630]: 167 167
Nov 29 07:29:50 compute-0 systemd[1]: libpod-210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573.scope: Deactivated successfully.
Nov 29 07:29:50 compute-0 conmon[190630]: conmon 210a623468ca2096075c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573.scope/container/memory.events
Nov 29 07:29:50 compute-0 podman[190613]: 2025-11-29 07:29:50.479018157 +0000 UTC m=+0.279989343 container died 210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c2d711fc2269d680718459352c05d8be061b2d7e7702ca3339e9c0b125e748b-merged.mount: Deactivated successfully.
Nov 29 07:29:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:29:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:29:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:29:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:29:50 compute-0 ceph-mon[75050]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:50 compute-0 podman[190613]: 2025-11-29 07:29:50.64843662 +0000 UTC m=+0.449407846 container remove 210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:50 compute-0 systemd[1]: libpod-conmon-210a623468ca2096075c5bfd08c6bc4a54982ae6d24fb81510a9d5e99067d573.scope: Deactivated successfully.
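The create, init, start, attach, died, remove sequence above (with the scope deactivations around it) is one complete cephadm helper run: podman starts a throwaway container from the pinned ceph image, the container runs a single command and exits, and podman removes it at once. A minimal sketch of an equivalent one-shot invocation, assuming podman is on PATH; the inner stat command is an illustrative guess at how a uid/gid probe like the "167 167" line above could be produced, not the exact command cephadm ran:

    # Sketch: a one-shot container run matching the lifecycle events logged above.
    # --rm removes the container as soon as it dies, so journald records the same
    # create/start/attach/died/remove chain seen here.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    result = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # e.g. "167 167"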
Nov 29 07:29:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
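The monitor's _set_new_cache_sizes lines (repeated every few seconds below) come from its memory autotuner, which carves the overall cache target into incremental-osdmap, full-osdmap and key/value (RocksDB) allocations. Converting the logged byte counts makes the split visible, and the three slices roughly add back up to the total:

    # The _set_new_cache_sizes values above, converted to MiB.
    for name, nbytes in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                         ("full_alloc", 348127232), ("kv_alloc", 322961408)]:
        print(f"{name}: {nbytes / 2**20:.0f} MiB")
    # cache_size ~973 MiB; 332 + 332 + 308 = 972 MiB of allocations.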
Nov 29 07:29:50 compute-0 podman[190653]: 2025-11-29 07:29:50.850477927 +0000 UTC m=+0.029211766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:51 compute-0 sshd-session[190469]: Connection closed by authenticating user root 143.14.121.41 port 34042 [preauth]
Nov 29 07:29:51 compute-0 podman[190653]: 2025-11-29 07:29:51.996349193 +0000 UTC m=+1.175083012 container create 9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:29:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:52 compute-0 systemd[1]: Started libpod-conmon-9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4.scope.
Nov 29 07:29:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805aeda1ee4d9f921209e3c942bc27187b868e8a6b7f6cd7460fd93b360c1e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805aeda1ee4d9f921209e3c942bc27187b868e8a6b7f6cd7460fd93b360c1e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805aeda1ee4d9f921209e3c942bc27187b868e8a6b7f6cd7460fd93b360c1e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805aeda1ee4d9f921209e3c942bc27187b868e8a6b7f6cd7460fd93b360c1e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805aeda1ee4d9f921209e3c942bc27187b868e8a6b7f6cd7460fd93b360c1e1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
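The xfs messages above are informational, not errors: these overlay mounts carry 32-bit inode timestamps, so the kernel notes their 0x7fffffff ceiling when they are remounted. That constant is the largest signed 32-bit time_t, and converting it gives the familiar year-2038 cutoff:

    # 0x7fffffff seconds after the Unix epoch is the classic year-2038 limit.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00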
Nov 29 07:29:52 compute-0 polkitd[43449]: Reloading rules
Nov 29 07:29:52 compute-0 polkitd[43449]: Collecting garbage unconditionally...
Nov 29 07:29:52 compute-0 polkitd[43449]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:29:52 compute-0 polkitd[43449]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:29:52 compute-0 polkitd[43449]: Finished loading, compiling and executing 3 rules
Nov 29 07:29:52 compute-0 polkitd[43449]: Reloading rules
Nov 29 07:29:52 compute-0 polkitd[43449]: Collecting garbage unconditionally...
Nov 29 07:29:52 compute-0 polkitd[43449]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 07:29:52 compute-0 polkitd[43449]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 07:29:52 compute-0 polkitd[43449]: Finished loading, compiling and executing 3 rules
Nov 29 07:29:53 compute-0 sshd-session[190668]: Invalid user admin from 143.14.121.41 port 34046
Nov 29 07:29:53 compute-0 sshd-session[190668]: Connection closed by invalid user admin 143.14.121.41 port 34046 [preauth]
Nov 29 07:29:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:55 compute-0 podman[190653]: 2025-11-29 07:29:55.043393691 +0000 UTC m=+4.222127480 container init 9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:29:55 compute-0 podman[190653]: 2025-11-29 07:29:55.061018983 +0000 UTC m=+4.239752762 container start 9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:29:55 compute-0 ceph-mon[75050]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:55 compute-0 podman[190653]: 2025-11-29 07:29:55.450820124 +0000 UTC m=+4.629554003 container attach 9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:29:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:56 compute-0 suspicious_cohen[190676]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:29:56 compute-0 suspicious_cohen[190676]: --> relative data size: 1.0
Nov 29 07:29:56 compute-0 suspicious_cohen[190676]: --> All data devices are unavailable
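The three suspicious_cohen lines are a ceph-volume batch report: the drive group resolved to 0 physical disks and 3 LVM devices, and because all three LVs already back OSDs, every data device is judged unavailable and nothing new is created. This is the expected steady state on a host whose OSDs are already deployed. A minimal sketch of that availability filter, assuming inventory entries expose "path" and "available" fields (the field names follow ceph-volume's JSON inventory but are an assumption as far as this log shows):

    # Sketch: keep only devices a batch run could still deploy to.
    # The "path"/"available" field names are assumptions for illustration.
    import json

    inventory = json.loads(
        '[{"path": "/dev/ceph_vg0/ceph_lv0", "available": false},'
        ' {"path": "/dev/ceph_vg1/ceph_lv1", "available": false},'
        ' {"path": "/dev/ceph_vg2/ceph_lv2", "available": false}]'
    )
    usable = [dev["path"] for dev in inventory if dev["available"]]
    print(usable if usable else "All data devices are unavailable")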
Nov 29 07:29:56 compute-0 systemd[1]: libpod-9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4.scope: Deactivated successfully.
Nov 29 07:29:56 compute-0 podman[190653]: 2025-11-29 07:29:56.178591798 +0000 UTC m=+5.357325617 container died 9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:29:56 compute-0 systemd[1]: libpod-9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4.scope: Consumed 1.063s CPU time.
Nov 29 07:29:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:29:57 compute-0 sshd-session[190687]: Connection closed by authenticating user root 143.14.121.41 port 48566 [preauth]
Nov 29 07:29:57 compute-0 ceph-mon[75050]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-805aeda1ee4d9f921209e3c942bc27187b868e8a6b7f6cd7460fd93b360c1e1c-merged.mount: Deactivated successfully.
Nov 29 07:29:57 compute-0 podman[190653]: 2025-11-29 07:29:57.953336054 +0000 UTC m=+7.132069843 container remove 9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cohen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:29:57 compute-0 sudo[190548]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:58 compute-0 systemd[1]: libpod-conmon-9cd07cdb9e7914568f33c6c8b293c834a7859179f6402fc4cd370b663a6f08a4.scope: Deactivated successfully.
Nov 29 07:29:58 compute-0 sudo[190792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:58 compute-0 sudo[190792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:58 compute-0 sudo[190792]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:58 compute-0 sudo[190822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:29:58 compute-0 sudo[190822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:58 compute-0 sudo[190822]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:58 compute-0 sudo[190847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:29:58 compute-0 sudo[190847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:58 compute-0 sudo[190847]: pam_unix(sudo:session): session closed for user root
Nov 29 07:29:58 compute-0 sudo[190872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:29:58 compute-0 sudo[190872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:29:58 compute-0 podman[190985]: 2025-11-29 07:29:58.810644754 +0000 UTC m=+0.038812178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:29:58 compute-0 ceph-mon[75050]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:29:59 compute-0 podman[190985]: 2025-11-29 07:29:59.061748408 +0000 UTC m=+0.289915822 container create 59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:29:59 compute-0 sshd-session[190771]: Connection closed by authenticating user root 143.14.121.41 port 48568 [preauth]
Nov 29 07:29:59 compute-0 systemd[1]: Started libpod-conmon-59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d.scope.
Nov 29 07:29:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:29:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:29:59.742 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:29:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:29:59.744 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:29:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:29:59.745 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:30:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:00 compute-0 podman[190985]: 2025-11-29 07:30:00.126615861 +0000 UTC m=+1.354783355 container init 59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:00 compute-0 ceph-mon[75050]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:00 compute-0 podman[190985]: 2025-11-29 07:30:00.13991785 +0000 UTC m=+1.368085284 container start 59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:30:00 compute-0 systemd[1]: libpod-59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d.scope: Deactivated successfully.
Nov 29 07:30:00 compute-0 gracious_yalow[191001]: 167 167
Nov 29 07:30:00 compute-0 conmon[191001]: conmon 59f6e9d0f56e569a0885 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d.scope/container/memory.events
Nov 29 07:30:00 compute-0 podman[190985]: 2025-11-29 07:30:00.160322364 +0000 UTC m=+1.388489858 container attach 59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:30:00 compute-0 podman[190985]: 2025-11-29 07:30:00.162879451 +0000 UTC m=+1.391046965 container died 59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-601099c01990b14f1bf81fdb8d2719f9c4174923b4b8030241676c7687d5e24b-merged.mount: Deactivated successfully.
Nov 29 07:30:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:01 compute-0 ceph-mon[75050]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:01 compute-0 podman[190985]: 2025-11-29 07:30:01.550578169 +0000 UTC m=+2.778745593 container remove 59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:30:01 compute-0 systemd[1]: libpod-conmon-59f6e9d0f56e569a08851794fe8c5a89386e142171aa59a00f807e1f2f272a1d.scope: Deactivated successfully.
Nov 29 07:30:01 compute-0 podman[191065]: 2025-11-29 07:30:01.724778627 +0000 UTC m=+0.032138794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:01 compute-0 podman[191065]: 2025-11-29 07:30:01.827165481 +0000 UTC m=+0.134525628 container create e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:30:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:02 compute-0 sshd-session[191004]: Connection closed by authenticating user root 143.14.121.41 port 48584 [preauth]
Nov 29 07:30:03 compute-0 systemd[1]: Started libpod-conmon-e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806.scope.
Nov 29 07:30:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb687f05364975ec082dddd8d8214743c6e4a0dad21627052083aba7e7268d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb687f05364975ec082dddd8d8214743c6e4a0dad21627052083aba7e7268d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb687f05364975ec082dddd8d8214743c6e4a0dad21627052083aba7e7268d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb687f05364975ec082dddd8d8214743c6e4a0dad21627052083aba7e7268d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:03 compute-0 ceph-mon[75050]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:03 compute-0 podman[191065]: 2025-11-29 07:30:03.52076657 +0000 UTC m=+1.828126817 container init e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:30:03 compute-0 podman[191065]: 2025-11-29 07:30:03.537751925 +0000 UTC m=+1.845112112 container start e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:03 compute-0 podman[191065]: 2025-11-29 07:30:03.556319523 +0000 UTC m=+1.863679690 container attach e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:30:03 compute-0 groupadd[191090]: group added to /etc/group: name=ceph, GID=167
Nov 29 07:30:03 compute-0 groupadd[191090]: group added to /etc/gshadow: name=ceph
Nov 29 07:30:03 compute-0 groupadd[191090]: new group: name=ceph, GID=167
Nov 29 07:30:03 compute-0 useradd[191096]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
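The groupadd/useradd pair provisions the ceph account with the fixed UID/GID 167, the same pair of numbers the short-lived helper containers keep echoing ("167 167" above and below), which keeps ownership of /var/lib/ceph and /var/log/ceph consistent between host and container mounts. A minimal check that the account resolves to those packaged IDs:

    # Sketch: confirm the ceph account resolves to the fixed 167/167 pair.
    import grp
    import pwd

    print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)  # 167 167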
Nov 29 07:30:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]: {
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:     "0": [
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:         {
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "devices": [
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "/dev/loop3"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             ],
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_name": "ceph_lv0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_size": "21470642176",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "name": "ceph_lv0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "tags": {
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.crush_device_class": "",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.encrypted": "0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osd_id": "0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.type": "block",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.vdo": "0"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             },
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "type": "block",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "vg_name": "ceph_vg0"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:         }
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:     ],
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:     "1": [
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:         {
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "devices": [
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "/dev/loop4"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             ],
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_name": "ceph_lv1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_size": "21470642176",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "name": "ceph_lv1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "tags": {
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.crush_device_class": "",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.encrypted": "0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osd_id": "1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.type": "block",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.vdo": "0"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             },
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "type": "block",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "vg_name": "ceph_vg1"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:         }
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:     ],
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:     "2": [
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:         {
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "devices": [
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "/dev/loop5"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             ],
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_name": "ceph_lv2",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_size": "21470642176",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "name": "ceph_lv2",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "tags": {
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.cluster_name": "ceph",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.crush_device_class": "",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.encrypted": "0",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osd_id": "2",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.type": "block",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:                 "ceph.vdo": "0"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             },
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "type": "block",
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:             "vg_name": "ceph_vg2"
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:         }
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]:     ]
Nov 29 07:30:04 compute-0 compassionate_banzai[191084]: }
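The JSON printed by compassionate_banzai is the result of the `ceph-volume ... lvm list --format json` call dispatched via sudo at 07:29:58: a map from OSD id ("0", "1", "2") to the logical volumes backing it, with cluster and OSD identity duplicated into the LV tags. A minimal sketch that reduces it to an osd -> device table, assuming the output has been captured to a file named lvm_list.json (a hypothetical capture; the structure matches the block above):

    # Sketch: reduce `ceph-volume lvm list --format json` output to osd -> device.
    import json

    with open("lvm_list.json") as fh:  # hypothetical capture of the JSON above
        lvm = json.load(fh)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")

Against the data above this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4 and osd.2 on /dev/loop5.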
Nov 29 07:30:04 compute-0 systemd[1]: libpod-e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806.scope: Deactivated successfully.
Nov 29 07:30:04 compute-0 podman[191065]: 2025-11-29 07:30:04.335778001 +0000 UTC m=+2.643138188 container died e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eb687f05364975ec082dddd8d8214743c6e4a0dad21627052083aba7e7268d8-merged.mount: Deactivated successfully.
Nov 29 07:30:04 compute-0 sshd-session[191079]: Connection closed by authenticating user root 143.14.121.41 port 36944 [preauth]
Nov 29 07:30:04 compute-0 podman[191065]: 2025-11-29 07:30:04.530293211 +0000 UTC m=+2.837653358 container remove e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:04 compute-0 systemd[1]: libpod-conmon-e4043ba830acd969a706486938b76a3ff7596eea439f2470807f4e5140909806.scope: Deactivated successfully.
Nov 29 07:30:04 compute-0 sudo[190872]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:04 compute-0 sudo[191121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:04 compute-0 sudo[191121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:04 compute-0 sudo[191121]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:04 compute-0 sudo[191147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:30:04 compute-0 sudo[191147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:04 compute-0 sudo[191147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:04 compute-0 sudo[191172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:04 compute-0 sudo[191172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:04 compute-0 sudo[191172]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:04 compute-0 sudo[191197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:30:04 compute-0 sudo[191197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
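This second cephadm invocation swaps `lvm list` for `raw list`: where the LVM listing is keyed by OSD id, the raw listing is keyed by osd_uuid and reports the BlueStore label found on each block device (its output begins at the end of this excerpt). A minimal sketch re-keying it by osd_id, using the fields visible in that partial output:

    # Sketch: index `ceph-volume raw list --format json` output by osd_id.
    import json

    raw = json.loads('''{
        "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
            "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48"
        }
    }''')
    by_osd = {entry["osd_id"]: entry["device"] for entry in raw.values()}
    print(by_osd)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2'}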
Nov 29 07:30:05 compute-0 podman[191264]: 2025-11-29 07:30:05.17930094 +0000 UTC m=+0.073058877 container create 1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:30:05 compute-0 podman[191264]: 2025-11-29 07:30:05.128230521 +0000 UTC m=+0.021988468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:05 compute-0 systemd[1]: Started libpod-conmon-1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0.scope.
Nov 29 07:30:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:05 compute-0 podman[191264]: 2025-11-29 07:30:05.300273891 +0000 UTC m=+0.194031848 container init 1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:05 compute-0 podman[191264]: 2025-11-29 07:30:05.309407271 +0000 UTC m=+0.203165218 container start 1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:30:05 compute-0 reverent_gould[191281]: 167 167
Nov 29 07:30:05 compute-0 systemd[1]: libpod-1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0.scope: Deactivated successfully.
Nov 29 07:30:05 compute-0 podman[191264]: 2025-11-29 07:30:05.330615297 +0000 UTC m=+0.224373254 container attach 1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:30:05 compute-0 podman[191264]: 2025-11-29 07:30:05.331357546 +0000 UTC m=+0.225115493 container died 1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:30:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0620dc720ea225ccceb31296c6abb9fdf0778eef15b28dffba4d60f10483e37e-merged.mount: Deactivated successfully.
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:30:05
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images', 'backups']
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
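The balancer pass is a no-op by design: in upmap mode it only proposes placement changes while the fraction of misplaced PGs stays under the max misplaced threshold (0.050000, i.e. 5%), and with all 305 PGs already active+clean it prepares 0 of the 10 changes it was allowed this round. The misplaced budget works out to roughly 15 PGs:

    # 5% of 305 PGs is the misplaced-PG headroom the balancer may consume.
    pgs, max_misplaced = 305, 0.05
    print(int(pgs * max_misplaced))  # 15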
Nov 29 07:30:05 compute-0 podman[191264]: 2025-11-29 07:30:05.502013591 +0000 UTC m=+0.395771528 container remove 1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:05 compute-0 systemd[1]: libpod-conmon-1ee31a2cbc3f504a7ac5551a02c12a3f0db77b201d9845780ebf9205cd56aee0.scope: Deactivated successfully.
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:05 compute-0 ceph-mon[75050]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:05 compute-0 podman[191304]: 2025-11-29 07:30:05.686761846 +0000 UTC m=+0.059476191 container create a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:05 compute-0 podman[191304]: 2025-11-29 07:30:05.651282255 +0000 UTC m=+0.023996620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:30:05 compute-0 systemd[1]: Started libpod-conmon-a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448.scope.
Nov 29 07:30:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11dc00c122cc50d9da784cf5a9e7f18b26f5fa319ad6417cbafd3684074396cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11dc00c122cc50d9da784cf5a9e7f18b26f5fa319ad6417cbafd3684074396cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11dc00c122cc50d9da784cf5a9e7f18b26f5fa319ad6417cbafd3684074396cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11dc00c122cc50d9da784cf5a9e7f18b26f5fa319ad6417cbafd3684074396cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:30:05 compute-0 podman[191304]: 2025-11-29 07:30:05.808761485 +0000 UTC m=+0.181475860 container init a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:30:05 compute-0 podman[191304]: 2025-11-29 07:30:05.816481037 +0000 UTC m=+0.189195392 container start a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:30:05 compute-0 podman[191304]: 2025-11-29 07:30:05.885435645 +0000 UTC m=+0.258149990 container attach a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:30:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:30:06 compute-0 ceph-mon[75050]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:06 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 07:30:06 compute-0 sshd[1008]: Received signal 15; terminating.
Nov 29 07:30:06 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 07:30:06 compute-0 systemd[1]: sshd.service: Unit process 191145 (sshd-session) remains running after unit stopped.
Nov 29 07:30:06 compute-0 systemd[1]: sshd.service: Unit process 191220 (sshd-session) remains running after unit stopped.
Nov 29 07:30:06 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 07:30:06 compute-0 systemd[1]: sshd.service: Consumed 8.691s CPU time, 37.7M memory peak, read 32.0K from disk, written 144.0K to disk.
Nov 29 07:30:06 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 07:30:06 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 29 07:30:06 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:30:06 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:30:06 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 07:30:06 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 29 07:30:06 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 29 07:30:06 compute-0 sshd[191978]: Server listening on 0.0.0.0 port 22.
Nov 29 07:30:06 compute-0 sshd[191978]: Server listening on :: port 22.
Nov 29 07:30:06 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 29 07:30:06 compute-0 boring_northcutt[191372]: {
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "osd_id": 2,
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "type": "bluestore"
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:     },
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "osd_id": 1,
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "type": "bluestore"
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:     },
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "osd_id": 0,
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:         "type": "bluestore"
Nov 29 07:30:06 compute-0 boring_northcutt[191372]:     }
Nov 29 07:30:06 compute-0 boring_northcutt[191372]: }
Nov 29 07:30:07 compute-0 systemd[1]: libpod-a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448.scope: Deactivated successfully.
Nov 29 07:30:07 compute-0 systemd[1]: libpod-a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448.scope: Consumed 1.186s CPU time.
Nov 29 07:30:07 compute-0 podman[191304]: 2025-11-29 07:30:07.006584883 +0000 UTC m=+1.379299258 container died a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:30:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-11dc00c122cc50d9da784cf5a9e7f18b26f5fa319ad6417cbafd3684074396cb-merged.mount: Deactivated successfully.
Nov 29 07:30:07 compute-0 podman[191304]: 2025-11-29 07:30:07.057764685 +0000 UTC m=+1.430479030 container remove a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_northcutt, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:30:07 compute-0 systemd[1]: libpod-conmon-a41fe632c119933ca8d1f6379086e7d244bb62fa3badf0dbfea3c6e2972ac448.scope: Deactivated successfully.
Nov 29 07:30:07 compute-0 sudo[191197]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:30:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:30:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:30:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:30:07 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev dab39c22-e591-423b-aa35-1f6c5a92e1b0 does not exist
Nov 29 07:30:07 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f9d9d664-41f5-4a46-a833-4be66a70f879 does not exist
Nov 29 07:30:07 compute-0 sshd-session[191145]: Connection closed by authenticating user root 143.14.121.41 port 36960 [preauth]
Nov 29 07:30:07 compute-0 sudo[192010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:30:07 compute-0 sudo[192010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:07 compute-0 sudo[192010]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:07 compute-0 sudo[192043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:30:07 compute-0 sudo[192043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:30:07 compute-0 sudo[192043]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:30:09 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:30:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:30:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:30:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:10 compute-0 systemd[1]: Reloading.
Nov 29 07:30:10 compute-0 systemd-rc-local-generator[192301]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:10 compute-0 systemd-sysv-generator[192305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:10 compute-0 ceph-mon[75050]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:10 compute-0 sshd-session[192206]: Invalid user ansible from 143.14.121.41 port 36962
Nov 29 07:30:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:30:11 compute-0 sshd-session[192206]: Connection closed by invalid user ansible 143.14.121.41 port 36962 [preauth]
Nov 29 07:30:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:12 compute-0 ceph-mon[75050]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:13 compute-0 ceph-mon[75050]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:13 compute-0 sshd-session[192312]: Connection closed by authenticating user root 143.14.121.41 port 36966 [preauth]
Nov 29 07:30:13 compute-0 sudo[171069]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:14 compute-0 sudo[194554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwuflqbkdkyehmnqurewpcykxipzpfaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401414.0493855-336-119573615144963/AnsiballZ_systemd.py'
Nov 29 07:30:14 compute-0 sudo[194554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:30:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:30:15 compute-0 python3.9[194588]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:30:15 compute-0 systemd[1]: Reloading.
Nov 29 07:30:15 compute-0 systemd-rc-local-generator[194971]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:15 compute-0 systemd-sysv-generator[194978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:15 compute-0 ceph-mon[75050]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:15 compute-0 sudo[194554]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:16 compute-0 sudo[195994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcljohsafyjcmmntztkylnqvusdikfrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401415.74011-336-69228769670024/AnsiballZ_systemd.py'
Nov 29 07:30:16 compute-0 sudo[195994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:16 compute-0 python3.9[196019]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:30:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:16 compute-0 systemd[1]: Reloading.
Nov 29 07:30:16 compute-0 systemd-rc-local-generator[196552]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:16 compute-0 systemd-sysv-generator[196557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:16 compute-0 ceph-mon[75050]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:17 compute-0 sudo[195994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:17 compute-0 sshd-session[193555]: Connection closed by authenticating user root 143.14.121.41 port 39114 [preauth]
Nov 29 07:30:17 compute-0 sudo[197758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnoyqxuwuulancqqbebtyyyamjoionqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401417.151795-336-208214702801730/AnsiballZ_systemd.py'
Nov 29 07:30:17 compute-0 sudo[197758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:17 compute-0 podman[197658]: 2025-11-29 07:30:17.505267853 +0000 UTC m=+0.089963320 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 07:30:17 compute-0 podman[197633]: 2025-11-29 07:30:17.520376419 +0000 UTC m=+0.106664678 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller)
Nov 29 07:30:17 compute-0 python3.9[197792]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:30:17 compute-0 systemd[1]: Reloading.
Nov 29 07:30:17 compute-0 systemd-rc-local-generator[198315]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:17 compute-0 systemd-sysv-generator[198319]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:18 compute-0 sudo[197758]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:18 compute-0 sudo[199098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnnyuvbwogjnyhzuoayarbhiqhakcjil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401418.2767339-336-15957051143390/AnsiballZ_systemd.py'
Nov 29 07:30:18 compute-0 sudo[199098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:19 compute-0 ceph-mon[75050]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:19 compute-0 python3.9[199100]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:30:19 compute-0 systemd[1]: Reloading.
Nov 29 07:30:19 compute-0 systemd-rc-local-generator[199365]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:19 compute-0 systemd-sysv-generator[199370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:19 compute-0 sudo[199098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:19 compute-0 sshd-session[197692]: Connection closed by authenticating user root 143.14.121.41 port 39118 [preauth]
Nov 29 07:30:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:20 compute-0 sudo[200834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chidukxcxjqpgfzastumkmwuuhukitfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401420.1726573-365-241076164269986/AnsiballZ_systemd.py'
Nov 29 07:30:20 compute-0 sudo[200834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:20 compute-0 python3.9[200836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:20 compute-0 systemd[1]: Reloading.
Nov 29 07:30:20 compute-0 systemd-sysv-generator[201130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:20 compute-0 systemd-rc-local-generator[201127]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:21 compute-0 ceph-mon[75050]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:21 compute-0 sudo[200834]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:21 compute-0 sshd-session[200154]: Connection closed by authenticating user root 143.14.121.41 port 39130 [preauth]
Nov 29 07:30:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:21 compute-0 sudo[201730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmmxeyfqwmvjyxqlojupdfajvtybsehj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401421.4413834-365-42355177435786/AnsiballZ_systemd.py'
Nov 29 07:30:21 compute-0 sudo[201730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:22 compute-0 python3.9[201732]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:22 compute-0 systemd[1]: Reloading.
Nov 29 07:30:22 compute-0 systemd-rc-local-generator[201764]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:22 compute-0 systemd-sysv-generator[201767]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:22 compute-0 sudo[201730]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:23 compute-0 sudo[202037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwlpxmdwzzovipbiuihikbnqtpwfgfsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401422.7388508-365-154553679873087/AnsiballZ_systemd.py'
Nov 29 07:30:23 compute-0 sudo[202037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:30:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:30:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.002s CPU time.
Nov 29 07:30:23 compute-0 systemd[1]: run-r74e910a075b44873bda7252eb6ed970f.service: Deactivated successfully.
Nov 29 07:30:23 compute-0 ceph-mon[75050]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:23 compute-0 python3.9[202039]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:23 compute-0 systemd[1]: Reloading.
Nov 29 07:30:23 compute-0 systemd-rc-local-generator[202068]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:23 compute-0 systemd-sysv-generator[202074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:23 compute-0 sudo[202037]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:23 compute-0 sshd-session[201713]: Connection closed by authenticating user root 143.14.121.41 port 56000 [preauth]
Nov 29 07:30:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:24 compute-0 sudo[202229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjkcmagmeuqspjhbgepnqcgumvnvbup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401424.0315287-365-269109384794609/AnsiballZ_systemd.py'
Nov 29 07:30:24 compute-0 sudo[202229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:24 compute-0 python3.9[202231]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:24 compute-0 sudo[202229]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:25 compute-0 sudo[202385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvkwlbaslhxwpygrwgdavcxoouxiqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401424.902091-365-163925713336347/AnsiballZ_systemd.py'
Nov 29 07:30:25 compute-0 sudo[202385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:25 compute-0 ceph-mon[75050]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:25 compute-0 python3.9[202387]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:25 compute-0 systemd[1]: Reloading.
Nov 29 07:30:25 compute-0 systemd-rc-local-generator[202417]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:25 compute-0 systemd-sysv-generator[202421]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:25 compute-0 sudo[202385]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:26 compute-0 sshd-session[202183]: Invalid user testuser from 143.14.121.41 port 56006
Nov 29 07:30:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:26 compute-0 sudo[202575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thcuwsfmlxxjpbisejtzifretwcwfuhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401426.2775798-401-58743586480125/AnsiballZ_systemd.py'
Nov 29 07:30:26 compute-0 sudo[202575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:26 compute-0 sshd-session[202183]: Connection closed by invalid user testuser 143.14.121.41 port 56006 [preauth]
Nov 29 07:30:26 compute-0 python3.9[202577]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 07:30:26 compute-0 systemd[1]: Reloading.
Nov 29 07:30:27 compute-0 systemd-rc-local-generator[202612]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:30:27 compute-0 systemd-sysv-generator[202615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:30:27 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 07:30:27 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 07:30:27 compute-0 sudo[202575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:27 compute-0 ceph-mon[75050]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:27 compute-0 sudo[202770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngplqrrnqaagrfjffobypjlssxcmjyrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401427.5366628-409-75328797203506/AnsiballZ_systemd.py'
Nov 29 07:30:27 compute-0 sudo[202770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:28 compute-0 python3.9[202772]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:28 compute-0 sudo[202770]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:28 compute-0 sshd-session[202579]: Connection closed by authenticating user root 143.14.121.41 port 56010 [preauth]
Nov 29 07:30:28 compute-0 sudo[202925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvrexdugivmfjaspocnopejuocakpzfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401428.4228008-409-187298977660172/AnsiballZ_systemd.py'
Nov 29 07:30:28 compute-0 sudo[202925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:28 compute-0 ceph-mon[75050]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:29 compute-0 python3.9[202927]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:29 compute-0 sudo[202925]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:29 compute-0 sudo[203082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxigqzezotwhilejsielqovfwtqisurg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401429.404329-409-178043983211240/AnsiballZ_systemd.py'
Nov 29 07:30:29 compute-0 sudo[203082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:30 compute-0 python3.9[203084]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:30 compute-0 sudo[203082]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:30 compute-0 sudo[203237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxwxjwryfgyduodjrccmlahrezcqtscv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401430.2527761-409-38341169937853/AnsiballZ_systemd.py'
Nov 29 07:30:30 compute-0 sudo[203237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:30 compute-0 python3.9[203239]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:30 compute-0 sudo[203237]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:31 compute-0 ceph-mon[75050]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:31 compute-0 sudo[203392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzpqlrprhvmwdmcvpkfdojxdyklbuohw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401431.107469-409-116102914113624/AnsiballZ_systemd.py'
Nov 29 07:30:31 compute-0 sudo[203392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:31 compute-0 python3.9[203394]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:31 compute-0 sudo[203392]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:32 compute-0 sshd-session[202928]: Connection closed by authenticating user root 143.14.121.41 port 56014 [preauth]
Nov 29 07:30:32 compute-0 sudo[203547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaglmifnzoxgfvjdzvrwrxtdiqykcaan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401431.9247022-409-205929031260788/AnsiballZ_systemd.py'
Nov 29 07:30:32 compute-0 sudo[203547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:32 compute-0 python3.9[203549]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:33 compute-0 ceph-mon[75050]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:33 compute-0 sudo[203547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:33 compute-0 sshd-session[203550]: Invalid user test1 from 143.14.121.41 port 42060
Nov 29 07:30:33 compute-0 sshd-session[203550]: Connection closed by invalid user test1 143.14.121.41 port 42060 [preauth]
Nov 29 07:30:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:34 compute-0 sudo[203704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bonsrqxpihtcwxnbebnhguyizyujkwjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401433.7678702-409-139859115236478/AnsiballZ_systemd.py'
Nov 29 07:30:34 compute-0 sudo[203704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:34 compute-0 python3.9[203706]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:34 compute-0 sudo[203704]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:34 compute-0 sudo[203861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvrseibfjytidnoojkhyrmxnijwtvdfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401434.5663974-409-223779939429204/AnsiballZ_systemd.py'
Nov 29 07:30:34 compute-0 sudo[203861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:35 compute-0 python3.9[203863]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:35 compute-0 sudo[203861]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:35 compute-0 ceph-mon[75050]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:30:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:30:35 compute-0 sudo[204016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqbcucuinvtnbsclpsibtjkejjuiisaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401435.359804-409-83332474757303/AnsiballZ_systemd.py'
Nov 29 07:30:35 compute-0 sudo[204016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:35 compute-0 sshd-session[203707]: Connection closed by authenticating user root 143.14.121.41 port 42062 [preauth]
Nov 29 07:30:36 compute-0 python3.9[204018]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:36 compute-0 sudo[204016]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:36 compute-0 sudo[204172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqjimuqkhqyqgwqapuaanmdorehslphh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401436.2351515-409-235332976908860/AnsiballZ_systemd.py'
Nov 29 07:30:36 compute-0 sudo[204172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:36 compute-0 ceph-mon[75050]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:36 compute-0 python3.9[204174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:37 compute-0 sudo[204172]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:37 compute-0 sudo[204328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkaxwvkamolygzlhooholuheiiykwodu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401437.2201788-409-205432331084089/AnsiballZ_systemd.py'
Nov 29 07:30:37 compute-0 sudo[204328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:37 compute-0 python3.9[204330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:37 compute-0 sudo[204328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:38 compute-0 sudo[204483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wogpubtlbecfbfjurqckadwltxojpuxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401438.1324809-409-29087692928574/AnsiballZ_systemd.py'
Nov 29 07:30:38 compute-0 sudo[204483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:39 compute-0 sshd-session[204019]: Connection closed by authenticating user root 143.14.121.41 port 42072 [preauth]
Nov 29 07:30:39 compute-0 python3.9[204485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:39 compute-0 sudo[204483]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:39 compute-0 ceph-mon[75050]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:39 compute-0 sudo[204640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azpxxqugvtfimtnsqwuzivtfjzkbcswr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401439.4919446-409-237832844056635/AnsiballZ_systemd.py'
Nov 29 07:30:39 compute-0 sudo[204640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:41 compute-0 python3.9[204642]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:41 compute-0 sshd-session[204489]: Invalid user ali from 143.14.121.41 port 42080
Nov 29 07:30:41 compute-0 sudo[204640]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:41 compute-0 sshd-session[204489]: Connection closed by invalid user ali 143.14.121.41 port 42080 [preauth]
Nov 29 07:30:41 compute-0 sudo[204795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmkpghioybroyletlkggtipvhqaerhjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401441.4340858-409-235754296327642/AnsiballZ_systemd.py'
Nov 29 07:30:41 compute-0 sudo[204795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:42 compute-0 python3.9[204797]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 07:30:42 compute-0 sudo[204795]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:42 compute-0 ceph-mon[75050]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:42 compute-0 sudo[204952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtnikvbukndehcjmnysldaywvdzltjwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401442.5592601-511-120354496139650/AnsiballZ_file.py'
Nov 29 07:30:42 compute-0 sudo[204952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:43 compute-0 python3.9[204954]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:30:43 compute-0 sudo[204952]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:43 compute-0 sudo[205104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxifnevqsoygbdbzmcdvazogsbqjauv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401443.218849-511-122153433100080/AnsiballZ_file.py'
Nov 29 07:30:43 compute-0 sudo[205104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:43 compute-0 python3.9[205106]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:30:43 compute-0 sudo[205104]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:43 compute-0 sshd-session[204798]: Connection closed by authenticating user root 143.14.121.41 port 32972 [preauth]
Nov 29 07:30:43 compute-0 ceph-mon[75050]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:44 compute-0 sudo[205258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfhooxrbjcwtwxkdojdtwcxpabentrph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401443.9593868-511-86437218938029/AnsiballZ_file.py'
Nov 29 07:30:44 compute-0 sudo[205258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:44 compute-0 python3.9[205260]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:30:44 compute-0 sudo[205258]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:44 compute-0 sudo[205410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txryqfkvtvnlwcgesohucmrfssmewpqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401444.6360505-511-204362085077335/AnsiballZ_file.py'
Nov 29 07:30:44 compute-0 sudo[205410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:45 compute-0 python3.9[205412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:30:45 compute-0 sudo[205410]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:45 compute-0 ceph-mon[75050]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:45 compute-0 sudo[205562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnvxgybvulaqsagddnmmjfhsnxpmfhxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401445.3248017-511-86752151344314/AnsiballZ_file.py'
Nov 29 07:30:45 compute-0 sudo[205562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:45 compute-0 python3.9[205564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:30:45 compute-0 sudo[205562]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:45 compute-0 sshd-session[205183]: Invalid user teste from 143.14.121.41 port 32978
Nov 29 07:30:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:46 compute-0 sshd-session[205183]: Connection closed by invalid user teste 143.14.121.41 port 32978 [preauth]
Nov 29 07:30:46 compute-0 sudo[205714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwhdxwmbdzymznjkjdiwrsdkqkmypgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401446.036879-511-20650344102429/AnsiballZ_file.py'
Nov 29 07:30:46 compute-0 sudo[205714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:46 compute-0 python3.9[205716]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:30:46 compute-0 sudo[205714]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:47 compute-0 ceph-mon[75050]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:47 compute-0 sudo[205868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwidjejukqabbmeibyiilvkkenkzkxcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401446.7589426-554-96238547231839/AnsiballZ_stat.py'
Nov 29 07:30:47 compute-0 sudo[205868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:47 compute-0 python3.9[205870]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:47 compute-0 sudo[205868]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:47 compute-0 podman[205921]: 2025-11-29 07:30:47.720110588 +0000 UTC m=+0.083453762 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:30:47 compute-0 podman[205920]: 2025-11-29 07:30:47.768278371 +0000 UTC m=+0.131349618 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 07:30:48 compute-0 sudo[206034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srblmvcrvkehcbwexujslyreyswptaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401446.7589426-554-96238547231839/AnsiballZ_copy.py'
Nov 29 07:30:48 compute-0 sudo[206034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:48 compute-0 python3.9[206036]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401446.7589426-554-96238547231839/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:48 compute-0 sudo[206034]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:48 compute-0 sshd-session[205717]: Invalid user admin from 143.14.121.41 port 32992
Nov 29 07:30:48 compute-0 sshd-session[205717]: Connection closed by invalid user admin 143.14.121.41 port 32992 [preauth]
Nov 29 07:30:48 compute-0 sudo[206186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koquvgwhqbegqafeytgbdmzyfvpdyoep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401448.4176648-554-259790461371447/AnsiballZ_stat.py'
Nov 29 07:30:48 compute-0 sudo[206186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:49 compute-0 python3.9[206188]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:49 compute-0 sudo[206186]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:49 compute-0 ceph-mon[75050]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:49 compute-0 sudo[206313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifnexuefavnmgdqgkuztfkftycuroyfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401448.4176648-554-259790461371447/AnsiballZ_copy.py'
Nov 29 07:30:49 compute-0 sudo[206313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:49 compute-0 python3.9[206315]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401448.4176648-554-259790461371447/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:49 compute-0 sudo[206313]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:50 compute-0 sudo[206465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svctpongdntovcwkxtlvqwlnbbtvumtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401450.0428324-554-226979155188067/AnsiballZ_stat.py'
Nov 29 07:30:50 compute-0 sudo[206465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:50 compute-0 python3.9[206467]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:50 compute-0 sudo[206465]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:50 compute-0 ceph-mon[75050]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:50 compute-0 sshd-session[206189]: Connection closed by authenticating user root 143.14.121.41 port 32998 [preauth]
Nov 29 07:30:50 compute-0 sudo[206590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjcjsbntckqdgxdtxsgvnbykxhxzmzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401450.0428324-554-226979155188067/AnsiballZ_copy.py'
Nov 29 07:30:50 compute-0 sudo[206590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:51 compute-0 python3.9[206592]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401450.0428324-554-226979155188067/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:51 compute-0 sudo[206590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:51 compute-0 sudo[206744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ishmlrfqmdmfolbwkdtwlffvyjttnxiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401451.5126662-554-270226989001562/AnsiballZ_stat.py'
Nov 29 07:30:51 compute-0 sudo[206744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:52 compute-0 python3.9[206746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:52 compute-0 sudo[206744]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:52 compute-0 sudo[206869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpnqlexfgegyvguutdcmxklhctdtsir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401451.5126662-554-270226989001562/AnsiballZ_copy.py'
Nov 29 07:30:52 compute-0 sudo[206869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:52 compute-0 python3.9[206871]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401451.5126662-554-270226989001562/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:52 compute-0 sudo[206869]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:53 compute-0 ceph-mon[75050]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:53 compute-0 sudo[207021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynfyoquihkapzdcnlakpuultlngqguhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401452.799715-554-7672415066530/AnsiballZ_stat.py'
Nov 29 07:30:53 compute-0 sudo[207021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:53 compute-0 sshd-session[206593]: Invalid user user from 143.14.121.41 port 33008
Nov 29 07:30:53 compute-0 python3.9[207023]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:53 compute-0 sudo[207021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:53 compute-0 sshd-session[206593]: Connection closed by invalid user user 143.14.121.41 port 33008 [preauth]
Nov 29 07:30:53 compute-0 sudo[207146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjbbydrvuxcaghikkvhkxpgztzhaixkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401452.799715-554-7672415066530/AnsiballZ_copy.py'
Nov 29 07:30:53 compute-0 sudo[207146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:53 compute-0 python3.9[207148]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401452.799715-554-7672415066530/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:53 compute-0 sudo[207146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:54 compute-0 sudo[207299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdnmxjbgklhdmkfytudryazyenbjwany ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401454.1048238-554-203184930402398/AnsiballZ_stat.py'
Nov 29 07:30:54 compute-0 sudo[207299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:54 compute-0 python3.9[207302]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:54 compute-0 sudo[207299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:55 compute-0 sudo[207425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjstgxrngoeogkexfovuaeeqxjncvppt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401454.1048238-554-203184930402398/AnsiballZ_copy.py'
Nov 29 07:30:55 compute-0 sudo[207425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:55 compute-0 ceph-mon[75050]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:55 compute-0 python3.9[207427]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401454.1048238-554-203184930402398/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:55 compute-0 sudo[207425]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:55 compute-0 sudo[207577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzjhdjmigssgnxxmwfdodgifajypsdjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401455.687857-554-20173271491052/AnsiballZ_stat.py'
Nov 29 07:30:55 compute-0 sudo[207577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:56 compute-0 python3.9[207579]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:56 compute-0 sudo[207577]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:56 compute-0 sshd-session[207149]: Connection closed by authenticating user root 143.14.121.41 port 41184 [preauth]
Nov 29 07:30:56 compute-0 sudo[207700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdvbejopwnacpyswoepxgryxncqnhosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401455.687857-554-20173271491052/AnsiballZ_copy.py'
Nov 29 07:30:56 compute-0 sudo[207700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:30:57 compute-0 ceph-mon[75050]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:57 compute-0 python3.9[207702]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401455.687857-554-20173271491052/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:57 compute-0 sudo[207700]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:57 compute-0 sudo[207852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pocxfrleitkeczrdokjnszrjeyceifug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401457.6404045-554-92000783337662/AnsiballZ_stat.py'
Nov 29 07:30:57 compute-0 sudo[207852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:58 compute-0 python3.9[207855]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:30:58 compute-0 sudo[207852]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:58 compute-0 sudo[207979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxyknbheqyzklvwaofvuusyjtcqftptk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401457.6404045-554-92000783337662/AnsiballZ_copy.py'
Nov 29 07:30:58 compute-0 sudo[207979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:58 compute-0 python3.9[207981]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401457.6404045-554-92000783337662/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:30:58 compute-0 sudo[207979]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:59 compute-0 ceph-mon[75050]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:30:59 compute-0 sudo[208131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvpnmhfqsdvlxozyahdneccatqcfkwqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401458.9469783-667-87199695397856/AnsiballZ_command.py'
Nov 29 07:30:59 compute-0 sudo[208131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:30:59 compute-0 python3.9[208133]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 07:30:59 compute-0 sudo[208131]: pam_unix(sudo:session): session closed for user root
Nov 29 07:30:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:30:59.743 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:30:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:30:59.744 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:30:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:30:59.744 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:31:00 compute-0 sudo[208284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ielziwaphzarluaoqnkehrcpshgvgpca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401459.7328377-676-178235614572305/AnsiballZ_file.py'
Nov 29 07:31:00 compute-0 sshd-session[207853]: Invalid user pi from 143.14.121.41 port 41200
Nov 29 07:31:00 compute-0 sudo[208284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:00 compute-0 python3.9[208286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:00 compute-0 sudo[208284]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:00 compute-0 sshd-session[207853]: Connection closed by invalid user pi 143.14.121.41 port 41200 [preauth]
Nov 29 07:31:00 compute-0 sudo[208436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahgbyevqfanzzagaymgzzktfijzyvrtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401460.3962562-676-109670941045040/AnsiballZ_file.py'
Nov 29 07:31:00 compute-0 sudo[208436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:00 compute-0 python3.9[208439]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:00 compute-0 sudo[208436]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:01 compute-0 ceph-mon[75050]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:01 compute-0 sudo[208590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-domtsdmqgbzprpoqhhjrbqelokmhrmzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401461.0865593-676-37424760473391/AnsiballZ_file.py'
Nov 29 07:31:01 compute-0 sudo[208590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:01 compute-0 python3.9[208592]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:01 compute-0 sudo[208590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:02 compute-0 sudo[208742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prckwubsmtckmhffmjjfbtouqjobrflr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401461.7509863-676-55307054437745/AnsiballZ_file.py'
Nov 29 07:31:02 compute-0 sudo[208742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:02 compute-0 python3.9[208744]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:02 compute-0 sudo[208742]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:02 compute-0 sshd-session[208437]: Invalid user httpadmin from 143.14.121.41 port 41210
Nov 29 07:31:02 compute-0 sshd-session[208437]: Connection closed by invalid user httpadmin 143.14.121.41 port 41210 [preauth]
Nov 29 07:31:02 compute-0 sudo[208894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkzydosjlwiblzqksvdqcgtkywdubav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401462.543885-676-206991873036057/AnsiballZ_file.py'
Nov 29 07:31:02 compute-0 sudo[208894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:03 compute-0 python3.9[208896]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:03 compute-0 sudo[208894]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:03 compute-0 ceph-mon[75050]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:03 compute-0 sudo[209048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msizvozvwdhqbutqbaruzojviqidjkrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401463.1815672-676-188681932787512/AnsiballZ_file.py'
Nov 29 07:31:03 compute-0 sudo[209048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:03 compute-0 python3.9[209050]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:03 compute-0 sudo[209048]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:04 compute-0 sudo[209200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugwfnbxcljuvswcbehdlnmgezystkxwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401463.8062668-676-104849617642516/AnsiballZ_file.py'
Nov 29 07:31:04 compute-0 sudo[209200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:04 compute-0 sshd-session[208897]: Invalid user admin from 143.14.121.41 port 41170
Nov 29 07:31:04 compute-0 python3.9[209202]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:04 compute-0 sudo[209200]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:04 compute-0 sshd-session[208897]: Connection closed by invalid user admin 143.14.121.41 port 41170 [preauth]
Nov 29 07:31:05 compute-0 sudo[209355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbhgdibvpiwnctundyorqvicobwrxsgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401464.8565755-676-13175020597553/AnsiballZ_file.py'
Nov 29 07:31:05 compute-0 sudo[209355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:05 compute-0 python3.9[209357]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:05 compute-0 sudo[209355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:31:05
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.mgr', 'images']
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:05 compute-0 sudo[209507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cunvleregabzeqpgxzmutvocqpaxaxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401465.5029206-676-119390597067116/AnsiballZ_file.py'
Nov 29 07:31:05 compute-0 sudo[209507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:05 compute-0 ceph-mon[75050]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:06 compute-0 python3.9[209509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:06 compute-0 sudo[209507]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:06 compute-0 sudo[209659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjrwcmkmobpcezurgjdvfopoueahbvcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401466.2745466-676-138858109254930/AnsiballZ_file.py'
Nov 29 07:31:06 compute-0 sudo[209659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:31:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:31:06 compute-0 python3.9[209661]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:06 compute-0 sudo[209659]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:06 compute-0 ceph-mon[75050]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:07 compute-0 sudo[209738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:07 compute-0 sudo[209738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:07 compute-0 sudo[209738]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:07 compute-0 sudo[209786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:31:07 compute-0 sudo[209786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:07 compute-0 sudo[209786]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:07 compute-0 sudo[209834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:07 compute-0 sudo[209834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:07 compute-0 sudo[209834]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:07 compute-0 sudo[209888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxpdxerhcvbmndwtudkxvymzmjcmiyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401467.190687-676-219043747319964/AnsiballZ_file.py'
Nov 29 07:31:07 compute-0 sudo[209888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:07 compute-0 sudo[209886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:31:07 compute-0 sudo[209886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:07 compute-0 python3.9[209906]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:07 compute-0 sudo[209888]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:07 compute-0 sshd-session[209327]: Invalid user teamspeak from 143.14.121.41 port 41186
Nov 29 07:31:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:08 compute-0 sudo[209886]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:31:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:31:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:31:08 compute-0 sshd-session[209327]: Connection closed by invalid user teamspeak 143.14.121.41 port 41186 [preauth]
Nov 29 07:31:08 compute-0 sudo[210095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqogxjoiimzgcwiutqexnggojctntrkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401467.8588514-676-5681115487617/AnsiballZ_file.py'
Nov 29 07:31:08 compute-0 sudo[210095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:08 compute-0 python3.9[210097]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:08 compute-0 sudo[210095]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:31:08 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 83912b0d-f656-4207-8eed-ecc251c51eb5 does not exist
Nov 29 07:31:08 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9c7803de-cd59-497a-b24e-00090e7d7f1d does not exist
Nov 29 07:31:08 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f950c577-048e-4ffe-9179-8c57138ef24c does not exist
Nov 29 07:31:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:31:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:31:08 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:31:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:31:08 compute-0 sudo[210123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:08 compute-0 sudo[210123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:08 compute-0 sudo[210123]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:08 compute-0 sudo[210171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:31:08 compute-0 sudo[210171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:08 compute-0 sudo[210171]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:08 compute-0 sudo[210227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:08 compute-0 sudo[210227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:08 compute-0 sudo[210227]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:08 compute-0 sudo[210274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:31:08 compute-0 sudo[210274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:08 compute-0 sudo[210349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sshzbniglakqejolkbifbbczbtdfison ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401468.5284226-676-36429507379314/AnsiballZ_file.py'
Nov 29 07:31:08 compute-0 sudo[210349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:31:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:31:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:31:09 compute-0 python3.9[210351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:09 compute-0 sudo[210349]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:09 compute-0 podman[210400]: 2025-11-29 07:31:09.181427101 +0000 UTC m=+0.028943052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:09 compute-0 podman[210400]: 2025-11-29 07:31:09.496096694 +0000 UTC m=+0.343612615 container create 4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:31:09 compute-0 systemd[1]: Started libpod-conmon-4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7.scope.
Nov 29 07:31:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:09 compute-0 sudo[210559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeyjlshpnviubzstitpbjigrjiynjvhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401469.2796662-676-227023435710912/AnsiballZ_file.py'
Nov 29 07:31:09 compute-0 sudo[210559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:09 compute-0 python3.9[210562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:09 compute-0 sudo[210559]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:10 compute-0 podman[210400]: 2025-11-29 07:31:10.03628083 +0000 UTC m=+0.883796751 container init 4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:31:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:10 compute-0 podman[210400]: 2025-11-29 07:31:10.045884496 +0000 UTC m=+0.893400357 container start 4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:31:10 compute-0 zen_liskov[210557]: 167 167
Nov 29 07:31:10 compute-0 systemd[1]: libpod-4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7.scope: Deactivated successfully.
Nov 29 07:31:10 compute-0 ceph-mon[75050]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:10 compute-0 sshd-session[210098]: Connection closed by authenticating user root 143.14.121.41 port 41198 [preauth]
Nov 29 07:31:10 compute-0 sudo[210726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqtnmjtujewxtrymwhvgzvahvvnorfsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401470.12879-775-101716680171177/AnsiballZ_stat.py'
Nov 29 07:31:10 compute-0 sudo[210726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:10 compute-0 podman[210400]: 2025-11-29 07:31:10.499018622 +0000 UTC m=+1.346534523 container attach 4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:31:10 compute-0 podman[210400]: 2025-11-29 07:31:10.500208155 +0000 UTC m=+1.347724036 container died 4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:31:10 compute-0 python3.9[210728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:10 compute-0 sudo[210726]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:11 compute-0 sudo[210851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfeldaqmbguenmzrtckksiyimlbqmway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401470.12879-775-101716680171177/AnsiballZ_copy.py'
Nov 29 07:31:11 compute-0 sudo[210851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:11 compute-0 python3.9[210853]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401470.12879-775-101716680171177/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:11 compute-0 sudo[210851]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:12 compute-0 sudo[211003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cupidyuelvkloohiwsdgoojjsmqvpste ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401471.6495647-775-247300944668071/AnsiballZ_stat.py'
Nov 29 07:31:12 compute-0 sudo[211003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:12 compute-0 python3.9[211005]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:12 compute-0 sudo[211003]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:12 compute-0 sshd-session[210729]: Connection closed by authenticating user root 143.14.121.41 port 41208 [preauth]
Nov 29 07:31:13 compute-0 ceph-mon[75050]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-372838d352a04a6bd6a540726793c192ad903fab6f572c8815718ec8395245a9-merged.mount: Deactivated successfully.
Nov 29 07:31:13 compute-0 sudo[211128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahdnuggoexpwudmdzfsecjbgelyauhys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401471.6495647-775-247300944668071/AnsiballZ_copy.py'
Nov 29 07:31:13 compute-0 sudo[211128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:13 compute-0 python3.9[211130]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401471.6495647-775-247300944668071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:13 compute-0 sudo[211128]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:13 compute-0 podman[210400]: 2025-11-29 07:31:13.817143743 +0000 UTC m=+4.664659604 container remove 4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 07:31:13 compute-0 systemd[1]: libpod-conmon-4cba3d99147e589ebd8f9d9df448aa884406ef34b336f31d014cba35fa8f79c7.scope: Deactivated successfully.
Nov 29 07:31:14 compute-0 sudo[211302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjmskgiuqczonrpvgisphcvesyjwyswo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401473.6624706-775-246069182544497/AnsiballZ_stat.py'
Nov 29 07:31:14 compute-0 sudo[211302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:14 compute-0 podman[211262]: 2025-11-29 07:31:13.96837431 +0000 UTC m=+0.025792095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:14 compute-0 python3.9[211304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:14 compute-0 podman[211262]: 2025-11-29 07:31:14.211597964 +0000 UTC m=+0.269015729 container create 1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:31:14 compute-0 sudo[211302]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:14 compute-0 ceph-mon[75050]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:14 compute-0 sudo[211425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwsyzyqiqrkqbnyzxiaqqyueyjqlykjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401473.6624706-775-246069182544497/AnsiballZ_copy.py'
Nov 29 07:31:14 compute-0 sudo[211425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:31:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:31:14 compute-0 python3.9[211427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401473.6624706-775-246069182544497/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:14 compute-0 sudo[211425]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:15 compute-0 systemd[1]: Started libpod-conmon-1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364.scope.
Nov 29 07:31:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581203a3fafb6d16eefc12f03795db8c3ad2cbb0797e3805899eb8efea25730b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581203a3fafb6d16eefc12f03795db8c3ad2cbb0797e3805899eb8efea25730b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581203a3fafb6d16eefc12f03795db8c3ad2cbb0797e3805899eb8efea25730b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581203a3fafb6d16eefc12f03795db8c3ad2cbb0797e3805899eb8efea25730b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581203a3fafb6d16eefc12f03795db8c3ad2cbb0797e3805899eb8efea25730b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:15 compute-0 sshd-session[211093]: Connection closed by authenticating user root 143.14.121.41 port 57524 [preauth]
Nov 29 07:31:15 compute-0 sudo[211582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpmdsjvchvqkyuuzwipdlagxlhhkjcki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401475.0849054-775-243835752672668/AnsiballZ_stat.py'
Nov 29 07:31:15 compute-0 sudo[211582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:15 compute-0 python3.9[211584]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:15 compute-0 sudo[211582]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:16 compute-0 podman[211262]: 2025-11-29 07:31:16.02630529 +0000 UTC m=+2.083723085 container init 1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:31:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:16 compute-0 podman[211262]: 2025-11-29 07:31:16.041386257 +0000 UTC m=+2.098804052 container start 1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:31:16 compute-0 sudo[211709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewwpiusaykfrvyetvzexydtjxijusje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401475.0849054-775-243835752672668/AnsiballZ_copy.py'
Nov 29 07:31:16 compute-0 sudo[211709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:16 compute-0 python3.9[211711]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401475.0849054-775-243835752672668/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:16 compute-0 sudo[211709]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:16 compute-0 sudo[211869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqltyzeyfptldvvocxjlxzuuihticvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401476.5401392-775-141144076057533/AnsiballZ_stat.py'
Nov 29 07:31:16 compute-0 sudo[211869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:31:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 2786 writes, 12K keys, 2786 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 2785 writes, 2785 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1106 writes, 4737 keys, 1106 commit groups, 1.0 writes per commit group, ingest: 7.39 MB, 0.01 MB/s
                                           Interval WAL: 1106 writes, 1106 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.8      0.62              0.07         5    0.123       0      0       0.0       0.0
                                             L6      1/0    8.23 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     32.7     27.3      1.07              0.11         4    0.269     15K   1790       0.0       0.0
                                            Sum      1/0    8.23 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     20.8     25.7      1.69              0.18         9    0.188     15K   1790       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     23.8     24.1      1.15              0.12         6    0.191     11K   1500       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     32.7     27.3      1.07              0.11         4    0.269     15K   1790       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.8      0.61              0.07         4    0.153       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.04 GB write, 0.04 MB/s write, 0.03 GB read, 0.03 MB/s read, 1.7 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdb5ecb1f0#2 capacity: 308.00 MB usage: 1021.31 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(65,866.66 KB,0.274787%) FilterBlock(10,50.17 KB,0.0159078%) IndexBlock(10,104.48 KB,0.0331284%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:31:17 compute-0 python3.9[211872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:17 compute-0 sudo[211869]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:17 compute-0 podman[211262]: 2025-11-29 07:31:17.133900136 +0000 UTC m=+3.191317941 container attach 1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hodgkin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:31:17 compute-0 distracted_hodgkin[211536]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:31:17 compute-0 distracted_hodgkin[211536]: --> relative data size: 1.0
Nov 29 07:31:17 compute-0 distracted_hodgkin[211536]: --> All data devices are unavailable
Nov 29 07:31:17 compute-0 systemd[1]: libpod-1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364.scope: Deactivated successfully.
Nov 29 07:31:17 compute-0 podman[211262]: 2025-11-29 07:31:17.258111116 +0000 UTC m=+3.315528921 container died 1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:31:17 compute-0 systemd[1]: libpod-1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364.scope: Consumed 1.166s CPU time.
Nov 29 07:31:17 compute-0 ceph-mon[75050]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:17 compute-0 sudo[212017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywmzavnhfztmeytosqofnzujzzhybogu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401476.5401392-775-141144076057533/AnsiballZ_copy.py'
Nov 29 07:31:17 compute-0 sudo[212017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:17 compute-0 sshd-session[211585]: Connection closed by authenticating user root 143.14.121.41 port 57534 [preauth]
Nov 29 07:31:17 compute-0 python3.9[212019]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401476.5401392-775-141144076057533/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:17 compute-0 sudo[212017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:18 compute-0 sudo[212193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruwywtfghphxenqdbdqxogwoavnnkvrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401477.9193714-775-46737138421975/AnsiballZ_stat.py'
Nov 29 07:31:18 compute-0 sudo[212193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:18 compute-0 python3.9[212195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:18 compute-0 sudo[212193]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:18 compute-0 sudo[212316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zieemhzpzasngrxxkffldmbtxkgcbvuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401477.9193714-775-46737138421975/AnsiballZ_copy.py'
Nov 29 07:31:18 compute-0 sudo[212316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:19 compute-0 python3.9[212318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401477.9193714-775-46737138421975/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:19 compute-0 sudo[212316]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:19 compute-0 sudo[212468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oobtrvwxcoitjgyjyzdiqlbfgcrllrrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401479.5145366-775-175568390067828/AnsiballZ_stat.py'
Nov 29 07:31:19 compute-0 sudo[212468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:20 compute-0 python3.9[212470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:20 compute-0 sudo[212468]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:20 compute-0 sudo[212591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqwipuomvkwlgxhivoasfkebjixvzoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401479.5145366-775-175568390067828/AnsiballZ_copy.py'
Nov 29 07:31:20 compute-0 sudo[212591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:20 compute-0 python3.9[212593]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401479.5145366-775-175568390067828/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:20 compute-0 sudo[212591]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:21 compute-0 sudo[212744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdgmcdkozpozxifxguzsjibpfaroocah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401480.9381208-775-45082860418147/AnsiballZ_stat.py'
Nov 29 07:31:21 compute-0 sudo[212744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:21 compute-0 python3.9[212746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:21 compute-0 sudo[212744]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:21 compute-0 sshd-session[212071]: Connection closed by authenticating user root 143.14.121.41 port 57536 [preauth]
Nov 29 07:31:21 compute-0 sudo[212867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlgokcwrprgszvwymboumvurqsejumep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401480.9381208-775-45082860418147/AnsiballZ_copy.py'
Nov 29 07:31:21 compute-0 sudo[212867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:22 compute-0 python3.9[212870]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401480.9381208-775-45082860418147/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:22 compute-0 sudo[212867]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-581203a3fafb6d16eefc12f03795db8c3ad2cbb0797e3805899eb8efea25730b-merged.mount: Deactivated successfully.
Nov 29 07:31:22 compute-0 sudo[213021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miucdlfszyqlficudxfxqfgbksebvxts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401482.4609137-775-36433322700209/AnsiballZ_stat.py'
Nov 29 07:31:22 compute-0 sudo[213021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:23 compute-0 ceph-mon[75050]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:23 compute-0 python3.9[213023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:23 compute-0 sudo[213021]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 sudo[213144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruwgxdoovgxiiahjizmgcshxwzdaycmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401482.4609137-775-36433322700209/AnsiballZ_copy.py'
Nov 29 07:31:23 compute-0 sudo[213144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:23 compute-0 python3.9[213146]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401482.4609137-775-36433322700209/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:23 compute-0 sudo[213144]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 podman[211262]: 2025-11-29 07:31:23.738551515 +0000 UTC m=+9.795969290 container remove 1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:31:23 compute-0 sudo[210274]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 systemd[1]: libpod-conmon-1f5a79dcb52275616c3d46850da037f1466b81751ac2e5093fd7c6ea68860364.scope: Deactivated successfully.
Nov 29 07:31:23 compute-0 podman[212146]: 2025-11-29 07:31:23.832087844 +0000 UTC m=+5.629022496 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:31:23 compute-0 sudo[213170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:23 compute-0 sudo[213170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-0 sudo[213170]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 podman[212145]: 2025-11-29 07:31:23.884355892 +0000 UTC m=+5.692842624 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:31:23 compute-0 sudo[213234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:31:23 compute-0 sudo[213234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-0 sudo[213234]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:23 compute-0 sudo[213296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:23 compute-0 sudo[213296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:23 compute-0 sudo[213296]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:24 compute-0 sudo[213344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:31:24 compute-0 sudo[213344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:24 compute-0 sudo[213419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmkvlvlkmpdnvjbfcrcweorksnsbrssv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401483.8729334-775-67506099117098/AnsiballZ_stat.py'
Nov 29 07:31:24 compute-0 sudo[213419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:24 compute-0 sshd-session[212868]: Connection closed by authenticating user root 143.14.121.41 port 53172 [preauth]
Nov 29 07:31:24 compute-0 python3.9[213421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:24 compute-0 sudo[213419]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:24 compute-0 podman[213462]: 2025-11-29 07:31:24.531862589 +0000 UTC m=+0.047072224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:24 compute-0 sudo[213598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqifnbdmzdnuegebjupprnjthillslsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401483.8729334-775-67506099117098/AnsiballZ_copy.py'
Nov 29 07:31:24 compute-0 sudo[213598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:24 compute-0 podman[213462]: 2025-11-29 07:31:24.909024552 +0000 UTC m=+0.424234087 container create 4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bohr, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:31:24 compute-0 ceph-mon[75050]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:24 compute-0 ceph-mon[75050]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:24 compute-0 ceph-mon[75050]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:25 compute-0 systemd[1]: Started libpod-conmon-4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6.scope.
Nov 29 07:31:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:25 compute-0 python3.9[213600]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401483.8729334-775-67506099117098/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:25 compute-0 sudo[213598]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:25 compute-0 podman[213462]: 2025-11-29 07:31:25.149521881 +0000 UTC m=+0.664731436 container init 4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:31:25 compute-0 podman[213462]: 2025-11-29 07:31:25.163262641 +0000 UTC m=+0.678472176 container start 4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:31:25 compute-0 laughing_bohr[213603]: 167 167
Nov 29 07:31:25 compute-0 systemd[1]: libpod-4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6.scope: Deactivated successfully.
Nov 29 07:31:25 compute-0 podman[213462]: 2025-11-29 07:31:25.228465347 +0000 UTC m=+0.743674882 container attach 4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:31:25 compute-0 podman[213462]: 2025-11-29 07:31:25.229447674 +0000 UTC m=+0.744657209 container died 4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:31:25 compute-0 sudo[213769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obupnlnpbgfcubheehwjhmjgdzsumdeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401485.2529433-775-155994545575677/AnsiballZ_stat.py'
Nov 29 07:31:25 compute-0 sudo[213769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-70fe751b44b3dc8ae4100b25bd6353e53746d918faa8d2cb83d8ad229b1de546-merged.mount: Deactivated successfully.
Nov 29 07:31:25 compute-0 python3.9[213771]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:25 compute-0 sudo[213769]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:25 compute-0 podman[213462]: 2025-11-29 07:31:25.873012783 +0000 UTC m=+1.388222358 container remove 4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:31:25 compute-0 systemd[1]: libpod-conmon-4d5f05002a566d2425623d9ed389cc7558d9101090d5f15fd152b483e82251f6.scope: Deactivated successfully.
Nov 29 07:31:25 compute-0 ceph-mon[75050]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:26 compute-0 sudo[213912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvgzrsyiqqpnepxjvufuewjjhsjkcmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401485.2529433-775-155994545575677/AnsiballZ_copy.py'
Nov 29 07:31:26 compute-0 sudo[213912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:26 compute-0 podman[213875]: 2025-11-29 07:31:26.129174955 +0000 UTC m=+0.103561368 container create 975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:31:26 compute-0 podman[213875]: 2025-11-29 07:31:26.049337505 +0000 UTC m=+0.023723938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:26 compute-0 systemd[1]: Started libpod-conmon-975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6.scope.
Nov 29 07:31:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a1f8041e8f2afa9b0ac2c079e098e441ccb21cf9b443d996e5a2377bbf55d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a1f8041e8f2afa9b0ac2c079e098e441ccb21cf9b443d996e5a2377bbf55d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a1f8041e8f2afa9b0ac2c079e098e441ccb21cf9b443d996e5a2377bbf55d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32a1f8041e8f2afa9b0ac2c079e098e441ccb21cf9b443d996e5a2377bbf55d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:26 compute-0 python3.9[213914]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401485.2529433-775-155994545575677/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:26 compute-0 sudo[213912]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:26 compute-0 podman[213875]: 2025-11-29 07:31:26.334363837 +0000 UTC m=+0.308750340 container init 975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:31:26 compute-0 podman[213875]: 2025-11-29 07:31:26.342519382 +0000 UTC m=+0.316905795 container start 975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:31:26 compute-0 podman[213875]: 2025-11-29 07:31:26.383000883 +0000 UTC m=+0.357387356 container attach 975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:31:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:26 compute-0 sudo[214074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzgdihvnvvjylxgenhhdnabimllmmedz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401486.4940608-775-54394304945718/AnsiballZ_stat.py'
Nov 29 07:31:26 compute-0 sudo[214074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:27 compute-0 python3.9[214076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:27 compute-0 sudo[214074]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]: {
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:     "0": [
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:         {
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "devices": [
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "/dev/loop3"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             ],
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_name": "ceph_lv0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_size": "21470642176",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "name": "ceph_lv0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "tags": {
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cluster_name": "ceph",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.crush_device_class": "",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.encrypted": "0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osd_id": "0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.type": "block",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.vdo": "0"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             },
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "type": "block",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "vg_name": "ceph_vg0"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:         }
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:     ],
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:     "1": [
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:         {
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "devices": [
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "/dev/loop4"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             ],
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_name": "ceph_lv1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_size": "21470642176",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "name": "ceph_lv1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "tags": {
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cluster_name": "ceph",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.crush_device_class": "",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.encrypted": "0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osd_id": "1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.type": "block",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.vdo": "0"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             },
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "type": "block",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "vg_name": "ceph_vg1"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:         }
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:     ],
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:     "2": [
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:         {
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "devices": [
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "/dev/loop5"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             ],
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_name": "ceph_lv2",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_size": "21470642176",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "name": "ceph_lv2",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "tags": {
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.cluster_name": "ceph",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.crush_device_class": "",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.encrypted": "0",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osd_id": "2",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.type": "block",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:                 "ceph.vdo": "0"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             },
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "type": "block",
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:             "vg_name": "ceph_vg2"
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:         }
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]:     ]
Nov 29 07:31:27 compute-0 flamboyant_swartz[213920]: }
Nov 29 07:31:27 compute-0 sshd-session[213497]: Connection closed by authenticating user root 143.14.121.41 port 53184 [preauth]
Nov 29 07:31:27 compute-0 ceph-mon[75050]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:27 compute-0 systemd[1]: libpod-975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6.scope: Deactivated successfully.
Nov 29 07:31:27 compute-0 podman[213875]: 2025-11-29 07:31:27.155843522 +0000 UTC m=+1.130229945 container died 975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 07:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-32a1f8041e8f2afa9b0ac2c079e098e441ccb21cf9b443d996e5a2377bbf55d2-merged.mount: Deactivated successfully.
Nov 29 07:31:27 compute-0 sudo[214214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjfissobbykmfxatklwyquvafhktjuzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401486.4940608-775-54394304945718/AnsiballZ_copy.py'
Nov 29 07:31:27 compute-0 sudo[214214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:27 compute-0 podman[213875]: 2025-11-29 07:31:27.669452502 +0000 UTC m=+1.643838915 container remove 975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:31:27 compute-0 sudo[213344]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:27 compute-0 python3.9[214216]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401486.4940608-775-54394304945718/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:27 compute-0 systemd[1]: libpod-conmon-975a0ef2550df61231d30377f7d03082e6fa5d2fa27dfd33278068201f5bf3a6.scope: Deactivated successfully.
Nov 29 07:31:27 compute-0 sudo[214214]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:27 compute-0 sudo[214217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:27 compute-0 sudo[214217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:27 compute-0 sudo[214217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:27 compute-0 sudo[214248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:31:27 compute-0 sudo[214248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:27 compute-0 sudo[214248]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:27 compute-0 sudo[214291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:27 compute-0 sudo[214291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:27 compute-0 sudo[214291]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:27 compute-0 sudo[214339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:31:27 compute-0 sudo[214339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:28 compute-0 sudo[214500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osonpfnfjvktxhdccdmdrgzxkojuvoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401487.9010775-775-215264737055603/AnsiballZ_stat.py'
Nov 29 07:31:28 compute-0 sudo[214500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:28 compute-0 podman[214509]: 2025-11-29 07:31:28.210452101 +0000 UTC m=+0.021421704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:28 compute-0 podman[214509]: 2025-11-29 07:31:28.306737657 +0000 UTC m=+0.117707260 container create 7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:31:28 compute-0 python3.9[214508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:28 compute-0 sudo[214500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:28 compute-0 systemd[1]: Started libpod-conmon-7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119.scope.
Nov 29 07:31:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:28 compute-0 podman[214509]: 2025-11-29 07:31:28.513580153 +0000 UTC m=+0.324549746 container init 7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:31:28 compute-0 podman[214509]: 2025-11-29 07:31:28.524903797 +0000 UTC m=+0.335873400 container start 7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:31:28 compute-0 wizardly_satoshi[214533]: 167 167
Nov 29 07:31:28 compute-0 systemd[1]: libpod-7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119.scope: Deactivated successfully.
Nov 29 07:31:28 compute-0 podman[214509]: 2025-11-29 07:31:28.574005527 +0000 UTC m=+0.384975110 container attach 7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:31:28 compute-0 podman[214509]: 2025-11-29 07:31:28.574401448 +0000 UTC m=+0.385371021 container died 7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ea178b84f21d6c6bc6f0c004f11ba13cd5e41ee243ce6e0241b30949bdbf75-merged.mount: Deactivated successfully.
Nov 29 07:31:28 compute-0 sudo[214662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxcwqkmigsqngrsqkiehkpithzbwsqot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401487.9010775-775-215264737055603/AnsiballZ_copy.py'
Nov 29 07:31:28 compute-0 sudo[214662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:28 compute-0 sshd-session[214186]: Invalid user admin from 143.14.121.41 port 53198
Nov 29 07:31:28 compute-0 podman[214509]: 2025-11-29 07:31:28.839668523 +0000 UTC m=+0.650638096 container remove 7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:31:28 compute-0 systemd[1]: libpod-conmon-7f10e4e45142b724e9bd932380c1409aa3f9ae6ce8b229743bb4b69dbf347119.scope: Deactivated successfully.
Nov 29 07:31:29 compute-0 python3.9[214664]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401487.9010775-775-215264737055603/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:29 compute-0 sudo[214662]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:29 compute-0 podman[214672]: 2025-11-29 07:31:29.114668387 +0000 UTC m=+0.107140518 container create ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:31:29 compute-0 podman[214672]: 2025-11-29 07:31:29.055713204 +0000 UTC m=+0.048185375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:31:29 compute-0 sshd-session[214186]: Connection closed by invalid user admin 143.14.121.41 port 53198 [preauth]
Nov 29 07:31:29 compute-0 systemd[1]: Started libpod-conmon-ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9.scope.
Nov 29 07:31:29 compute-0 ceph-mon[75050]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9352a7b66f1f9943f06b59b519f3d0a50000a5071dff5611d14ca11c4c3a8a6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9352a7b66f1f9943f06b59b519f3d0a50000a5071dff5611d14ca11c4c3a8a6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9352a7b66f1f9943f06b59b519f3d0a50000a5071dff5611d14ca11c4c3a8a6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9352a7b66f1f9943f06b59b519f3d0a50000a5071dff5611d14ca11c4c3a8a6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:31:29 compute-0 podman[214672]: 2025-11-29 07:31:29.278650217 +0000 UTC m=+0.271122388 container init ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:31:29 compute-0 podman[214672]: 2025-11-29 07:31:29.286804222 +0000 UTC m=+0.279276363 container start ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williamson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:31:29 compute-0 podman[214672]: 2025-11-29 07:31:29.332437626 +0000 UTC m=+0.324909767 container attach ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williamson, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:31:29 compute-0 sudo[214843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nobydqahdlveyjlbtkdmdrguyabzqvgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401489.2240274-775-43924501708047/AnsiballZ_stat.py'
Nov 29 07:31:29 compute-0 sudo[214843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:29 compute-0 python3.9[214845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:31:29 compute-0 sudo[214843]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:30 compute-0 sudo[214984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdwgkcrqtezcjnptvauottghsuyadvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401489.2240274-775-43924501708047/AnsiballZ_copy.py'
Nov 29 07:31:30 compute-0 sudo[214984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]: {
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "osd_id": 2,
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "type": "bluestore"
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:     },
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "osd_id": 1,
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "type": "bluestore"
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:     },
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "osd_id": 0,
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:         "type": "bluestore"
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]:     }
Nov 29 07:31:30 compute-0 suspicious_williamson[214730]: }
Nov 29 07:31:30 compute-0 systemd[1]: libpod-ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9.scope: Deactivated successfully.
Nov 29 07:31:30 compute-0 systemd[1]: libpod-ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9.scope: Consumed 1.063s CPU time.
Nov 29 07:31:30 compute-0 podman[214672]: 2025-11-29 07:31:30.342503662 +0000 UTC m=+1.334975813 container died ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:31:30 compute-0 python3.9[214988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401489.2240274-775-43924501708047/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:30 compute-0 sudo[214984]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9352a7b66f1f9943f06b59b519f3d0a50000a5071dff5611d14ca11c4c3a8a6b-merged.mount: Deactivated successfully.
Nov 29 07:31:30 compute-0 podman[214672]: 2025-11-29 07:31:30.98743669 +0000 UTC m=+1.979908861 container remove ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williamson, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:31:31 compute-0 systemd[1]: libpod-conmon-ec03804556643787cc5ea45820646f9504f734a96b50e32b08170111c225ddc9.scope: Deactivated successfully.
Nov 29 07:31:31 compute-0 sudo[214339]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:31:31 compute-0 sshd-session[214794]: Invalid user admin from 143.14.121.41 port 53202
Nov 29 07:31:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:31:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:31:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:31:31 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a6fa82c6-a9b5-44e8-b9fd-a71a3ad1aacb does not exist
Nov 29 07:31:31 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 10959508-6038-44cd-b49b-d8ae9e68d68b does not exist
Nov 29 07:31:31 compute-0 sudo[215162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:31:31 compute-0 sudo[215162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:31 compute-0 sudo[215162]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:31 compute-0 python3.9[215161]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:31:31 compute-0 sudo[215187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:31:31 compute-0 sudo[215187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:31:31 compute-0 sudo[215187]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:31 compute-0 sshd-session[214794]: Connection closed by invalid user admin 143.14.121.41 port 53202 [preauth]
Nov 29 07:31:31 compute-0 ceph-mon[75050]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:31 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:31:31 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:31:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:32 compute-0 sudo[215366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqeejelqijxwvenewnjxidslbbpeouhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401491.6390815-981-4232003915226/AnsiballZ_seboolean.py'
Nov 29 07:31:32 compute-0 sudo[215366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:32 compute-0 python3.9[215368]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 07:31:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:34 compute-0 ceph-mon[75050]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:34 compute-0 sshd-session[215268]: Connection closed by authenticating user root 143.14.121.41 port 53210 [preauth]
Nov 29 07:31:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:31:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:31:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:36 compute-0 sudo[215366]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:36 compute-0 ceph-mon[75050]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:36 compute-0 sudo[215524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voclvsyquvakjpnjaeaushfxzxkmefrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401496.4256194-989-15440717743203/AnsiballZ_copy.py'
Nov 29 07:31:36 compute-0 dbus-broker-launch[781]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 07:31:36 compute-0 sudo[215524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:36 compute-0 python3.9[215526]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:36 compute-0 sudo[215524]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:37 compute-0 sudo[215676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilzjrwpgpomyortjstxullhwzenmcpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401497.1416702-989-131638975947519/AnsiballZ_copy.py'
Nov 29 07:31:37 compute-0 sudo[215676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:37 compute-0 ceph-mon[75050]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:37 compute-0 python3.9[215678]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:37 compute-0 sudo[215676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:37 compute-0 sshd-session[215373]: Connection closed by authenticating user root 143.14.121.41 port 34196 [preauth]
Nov 29 07:31:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:38 compute-0 sudo[215829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzvzaxgvawtrftlkrdfnhniawiteltmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401497.8431535-989-196955611597183/AnsiballZ_copy.py'
Nov 29 07:31:38 compute-0 sudo[215829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:38 compute-0 python3.9[215831]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:38 compute-0 sudo[215829]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:38 compute-0 sudo[215982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzajajcnhzmffsaseiyhroixjnamylvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401498.522058-989-233153031029113/AnsiballZ_copy.py'
Nov 29 07:31:38 compute-0 sudo[215982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:39 compute-0 ceph-mon[75050]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:39 compute-0 python3.9[215984]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:39 compute-0 sudo[215982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:39 compute-0 sudo[216134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwniahwlsuxeierrqtlntkyowjicxivr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401499.295162-989-60744364731515/AnsiballZ_copy.py'
Nov 29 07:31:39 compute-0 sudo[216134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:39 compute-0 python3.9[216136]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:39 compute-0 sudo[216134]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:40 compute-0 sshd-session[215791]: Connection closed by authenticating user root 143.14.121.41 port 34202 [preauth]
Nov 29 07:31:40 compute-0 sudo[216286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpugiloiwqjttsordmaeuspsegqjyiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401499.98496-1025-182458236492179/AnsiballZ_copy.py'
Nov 29 07:31:40 compute-0 sudo[216286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:40 compute-0 python3.9[216288]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:40 compute-0 sudo[216286]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:40 compute-0 sudo[216438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvgkbmvseokpidjzcjopunbuzagaonrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401500.6151755-1025-78233922945774/AnsiballZ_copy.py'
Nov 29 07:31:40 compute-0 sudo[216438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:41 compute-0 python3.9[216440]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:41 compute-0 sudo[216438]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:41 compute-0 ceph-mon[75050]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:41 compute-0 sudo[216592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rurtdamcwmkgjncuuauzdmpvntnilfnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401501.500309-1025-114435133596177/AnsiballZ_copy.py'
Nov 29 07:31:41 compute-0 sudo[216592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:42 compute-0 python3.9[216594]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:42 compute-0 sudo[216592]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:42 compute-0 sudo[216744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khdzyqtrgpyorziiksfqljwmsafcugpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401502.1776898-1025-254346492126116/AnsiballZ_copy.py'
Nov 29 07:31:42 compute-0 sudo[216744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:42 compute-0 python3.9[216746]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:42 compute-0 sudo[216744]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:43 compute-0 sudo[216896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqgqudhcoeiyyqwewutubjdcjxxxwxsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401502.8553495-1025-264369216682091/AnsiballZ_copy.py'
Nov 29 07:31:43 compute-0 sudo[216896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:43 compute-0 python3.9[216898]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:31:43 compute-0 sudo[216896]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:43 compute-0 sudo[217048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqvhqwtckeuujmetsjrlumtzdhmjvtxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401503.5985067-1061-175580922196635/AnsiballZ_systemd.py'
Nov 29 07:31:43 compute-0 sudo[217048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:45 compute-0 sshd-session[216441]: Connection closed by authenticating user root 143.14.121.41 port 34206 [preauth]
Nov 29 07:31:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:48 compute-0 sshd-session[217051]: Connection closed by authenticating user root 143.14.121.41 port 33418 [preauth]
Nov 29 07:31:48 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:31:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:50 compute-0 sshd-session[217054]: Invalid user pi from 143.14.121.41 port 33422
Nov 29 07:31:51 compute-0 sshd-session[217054]: Connection closed by invalid user pi 143.14.121.41 port 33422 [preauth]
Nov 29 07:31:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:52 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:31:53 compute-0 sshd-session[217056]: Invalid user admin from 143.14.121.41 port 33428
Nov 29 07:31:54 compute-0 sshd-session[217056]: Connection closed by invalid user admin 143.14.121.41 port 33428 [preauth]
Nov 29 07:31:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:54 compute-0 podman[217061]: 2025-11-29 07:31:54.752043021 +0000 UTC m=+0.100712879 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 07:31:54 compute-0 podman[217060]: 2025-11-29 07:31:54.836557511 +0000 UTC m=+0.187106151 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 07:31:56 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf MDS connection to Monitors appears to be laggy; 15.2624s since last acked beacon
Nov 29 07:31:56 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:31:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:56 compute-0 sshd-session[217058]: Invalid user user from 143.14.121.41 port 39958
Nov 29 07:31:56 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:31:57 compute-0 sshd-session[217058]: Connection closed by invalid user user 143.14.121.41 port 39958 [preauth]
Nov 29 07:31:57 compute-0 python3.9[217050]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:31:57 compute-0 systemd[1]: Reloading.
Nov 29 07:31:57 compute-0 systemd-rc-local-generator[217131]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:31:57 compute-0 systemd-sysv-generator[217134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:31:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 16.2786 seconds
Nov 29 07:31:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:31:57 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf  MDS is no longer laggy
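The laggy-beacon episode above (15.2624 s since the last acked beacon, cleared a second later once the monitor reset its beacon timeouts after its own 16.28 s delay) is consistent with Ceph's default mds_beacon_grace of 15 s: the MDS marks the monitor connection laggy, and skips upkeep work, once the last ack is older than the grace window. A minimal sketch of that check, assuming the 15 s default (beacons themselves are sent every mds_beacon_interval, 4 s by default):

    def beacon_laggy(now: float, last_acked: float, grace: float = 15.0) -> bool:
        """Mirror of the MDS laggy test: no beacon ack within the grace window."""
        return (now - last_acked) > grace

    # Numbers from the log: 15.2624 s since the last acked beacon.
    assert beacon_laggy(now=15.2624, last_acked=0.0)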
Nov 29 07:31:57 compute-0 ceph-mon[75050]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:57 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 07:31:57 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 07:31:57 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 07:31:57 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 07:31:57 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 29 07:31:57 compute-0 systemd[1]: Started libvirt logging daemon.
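The sequence above is one iteration of an Ansible loop restarting the modular libvirt daemons: ansible.builtin.systemd is invoked with daemon_reload=True and state=restarted, systemd reloads (re-running the rc-local and sysv generators, hence the two generator messages), and the unit's sockets and service come up. The same invocation repeats below for virtnodedevd, virtproxyd, virtqemud and virtsecretd. A minimal out-of-band equivalent of one such task (unit name from the log; plain systemctl rather than the Ansible module):

    import subprocess

    def restart_with_reload(unit: str) -> None:
        """daemon-reload first, then restart, mirroring the logged task."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    restart_with_reload("virtlogd.service")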
Nov 29 07:31:57 compute-0 sudo[217048]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 sudo[217300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkrknfrpigyfpbccrsucuudxmitisbbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401518.1316504-1061-149632151931516/AnsiballZ_systemd.py'
Nov 29 07:31:58 compute-0 sudo[217300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:58 compute-0 sshd-session[217109]: Invalid user server from 143.14.121.41 port 39970
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 ceph-mon[75050]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:31:58 compute-0 python3.9[217302]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:31:58 compute-0 systemd[1]: Reloading.
Nov 29 07:31:58 compute-0 systemd-sysv-generator[217332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:31:58 compute-0 systemd-rc-local-generator[217328]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:31:59 compute-0 sshd-session[217109]: Connection closed by invalid user server 143.14.121.41 port 39970 [preauth]
Nov 29 07:31:59 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 07:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 07:31:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 07:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 07:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 07:31:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 07:31:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 07:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 07:31:59 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 07:31:59 compute-0 sudo[217300]: pam_unix(sudo:session): session closed for user root
Nov 29 07:31:59 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 07:31:59 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 07:31:59 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 07:31:59 compute-0 sudo[217525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dumqdlicvdeqxvmbeksqenzoxapfrxnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401519.4386666-1061-146180674176674/AnsiballZ_systemd.py'
Nov 29 07:31:59 compute-0 sudo[217525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:31:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:31:59.744 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:31:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:31:59.746 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:31:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:31:59.746 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:31:59 compute-0 python3.9[217527]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:32:00 compute-0 systemd[1]: Reloading.
Nov 29 07:32:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:00 compute-0 systemd-rc-local-generator[217557]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:00 compute-0 systemd-sysv-generator[217561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:00 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 07:32:00 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 07:32:00 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 07:32:00 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 07:32:00 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 07:32:00 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 29 07:32:00 compute-0 sudo[217525]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:00 compute-0 setroubleshoot[217338]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l debfa218-e2f8-4f2d-afe1-e6218c5018c7
Nov 29 07:32:00 compute-0 setroubleshoot[217338]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
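The alert body above is already an actionable recipe; the sketch below simply chains the two commands the catchall plugin suggests (ausearch piped into audit2allow, then semodule). Treat it as a transcription of the suggested remediation, not an endorsed fix: a local policy module papers over the denial, and the higher-confidence dac_override plugin's advice (enable full auditing, find the mis-owned file) is usually the better first step.

    import subprocess

    def build_local_module(comm: str = "virtlogd", name: str = "my-virtlogd") -> None:
        """Run the exact commands suggested in the alert above."""
        raw = subprocess.run(["ausearch", "-c", comm, "--raw"],
                             check=True, capture_output=True).stdout
        subprocess.run(["audit2allow", "-M", name], input=raw, check=True)
        subprocess.run(["semodule", "-X", "300", "-i", f"{name}.pp"], check=True)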
Nov 29 07:32:00 compute-0 sudo[217739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xutxvuyejrjcoapjaovygtmyunvwafcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401520.6693165-1061-189550851940513/AnsiballZ_systemd.py'
Nov 29 07:32:00 compute-0 sudo[217739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:01 compute-0 python3.9[217741]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:32:01 compute-0 systemd[1]: Reloading.
Nov 29 07:32:01 compute-0 ceph-mon[75050]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:01 compute-0 systemd-sysv-generator[217770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:01 compute-0 systemd-rc-local-generator[217767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:01 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 07:32:01 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 07:32:01 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 07:32:01 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 07:32:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 07:32:01 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 07:32:01 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 07:32:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 07:32:01 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 07:32:01 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 07:32:01 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 07:32:01 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 07:32:01 compute-0 sudo[217739]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:01 compute-0 sshd-session[217366]: Connection closed by authenticating user root 143.14.121.41 port 39984 [preauth]
Nov 29 07:32:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:02 compute-0 sudo[217955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksnuxfnggucnygeckbdqshqfmtkscuen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401521.8716035-1061-193301381421841/AnsiballZ_systemd.py'
Nov 29 07:32:02 compute-0 sudo[217955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:02 compute-0 python3.9[217957]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:32:02 compute-0 systemd[1]: Reloading.
Nov 29 07:32:02 compute-0 systemd-rc-local-generator[217985]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:02 compute-0 systemd-sysv-generator[217988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:02 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 07:32:02 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 07:32:02 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 07:32:02 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 07:32:02 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 07:32:02 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 07:32:02 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 29 07:32:02 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 29 07:32:02 compute-0 sudo[217955]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:03 compute-0 ceph-mon[75050]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:03 compute-0 sudo[218167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqkfqiwwvebvrjmswgwzisuexdgjmdsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401523.3920639-1098-155834348820395/AnsiballZ_file.py'
Nov 29 07:32:03 compute-0 sudo[218167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:03 compute-0 python3.9[218169]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:03 compute-0 sudo[218167]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:04 compute-0 sshd-session[217950]: Connection closed by authenticating user root 143.14.121.41 port 41836 [preauth]
Nov 29 07:32:04 compute-0 sudo[218321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkhxbfqhjyrgbjomdnataptwbfiaxukk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401524.2475135-1106-236147284176463/AnsiballZ_find.py'
Nov 29 07:32:04 compute-0 sudo[218321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:04 compute-0 ceph-mon[75050]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:04 compute-0 python3.9[218323]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:32:04 compute-0 sudo[218321]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:32:05
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', '.mgr', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control']
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:32:05 compute-0 sudo[218473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietqzqvwmvngrsflqscpyrcgaxbxkuvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401525.2059243-1114-279938749652190/AnsiballZ_command.py'
Nov 29 07:32:05 compute-0 sudo[218473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:05 compute-0 python3.9[218475]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
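The shell pipeline above pulls the fsid out of the deployed ceph.conf (awk splits on '=', xargs trims the surrounding whitespace). The same extraction without a shell, using configparser and assuming fsid sits under [global] as it conventionally does in ceph.conf (path from the log):

    import configparser

    def ceph_fsid(path: str = "/var/lib/openstack/config/ceph/ceph.conf") -> str:
        """Read fsid from ceph.conf's [global] section."""
        cfg = configparser.ConfigParser()
        cfg.read(path)
        return cfg["global"]["fsid"].strip()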
Nov 29 07:32:05 compute-0 sudo[218473]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:06 compute-0 python3.9[218629]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:32:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:32:06 compute-0 ceph-mon[75050]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:06 compute-0 sshd-session[218293]: Connection closed by authenticating user root 143.14.121.41 port 41848 [preauth]
Nov 29 07:32:07 compute-0 python3.9[218780]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.755403) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401527755476, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1184, "num_deletes": 251, "total_data_size": 1792762, "memory_usage": 1816096, "flush_reason": "Manual Compaction"}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401527769301, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1056095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11885, "largest_seqno": 13068, "table_properties": {"data_size": 1051758, "index_size": 1861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11191, "raw_average_key_size": 20, "raw_value_size": 1042311, "raw_average_value_size": 1878, "num_data_blocks": 85, "num_entries": 555, "num_filter_entries": 555, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401389, "oldest_key_time": 1764401389, "file_creation_time": 1764401527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 13969 microseconds, and 3389 cpu microseconds.
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.769381) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1056095 bytes OK
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.769404) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.771715) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.771739) EVENT_LOG_v1 {"time_micros": 1764401527771732, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.771760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1787343, prev total WAL file size 1787343, number of live WAL files 2.
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.772615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1031KB)], [29(8427KB)]
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401527772681, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9686018, "oldest_snapshot_seqno": -1}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3910 keys, 7083413 bytes, temperature: kUnknown
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401527886691, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7083413, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7055963, "index_size": 16588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 94481, "raw_average_key_size": 24, "raw_value_size": 6983967, "raw_average_value_size": 1786, "num_data_blocks": 720, "num_entries": 3910, "num_filter_entries": 3910, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764401527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.886943) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7083413 bytes
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.890145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.9 rd, 62.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.2 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(15.9) write-amplify(6.7) OK, records in: 4370, records dropped: 460 output_compression: NoCompression
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.890171) EVENT_LOG_v1 {"time_micros": 1764401527890159, "job": 12, "event": "compaction_finished", "compaction_time_micros": 114083, "compaction_time_cpu_micros": 22727, "output_level": 6, "num_output_files": 1, "total_output_size": 7083413, "num_input_records": 4370, "num_output_records": 3910, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401527890463, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401527892192, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.772493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.892259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.892263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.892264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.892266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:32:07 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:32:07.892268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
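The compaction summary above can be checked against the event numbers: job 12 read the freshly flushed 1,056,095-byte L0 table plus one 8.2 MiB L6 file (input_data_size 9,686,018) and wrote a single 7,083,413-byte L6 table, so write-amplify = 7083413 / 1056095 ≈ 6.7 and read-write-amplify = (9686018 + 7083413) / 1056095 ≈ 15.9, matching the logged figures.

    # Amplification factors recomputed from the EVENT_LOG_v1 entries above.
    l0_in = 1_056_095      # table #31, the newly flushed L0 file
    total_in = 9_686_018   # input_data_size (L0 file #31 + L6 file #29)
    out = 7_083_413        # table #32 written to L6

    write_amplify = out / l0_in                    # ~6.7
    read_write_amplify = (total_in + out) / l0_in  # ~15.9
    print(f"{write_amplify:.1f} {read_write_amplify:.1f}")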
Nov 29 07:32:07 compute-0 python3.9[218902]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401526.9630146-1133-39751111335654/.source.xml follow=False _original_basename=secret.xml.j2 checksum=1917dbf7985e4b26eaf27a2ca0867947820263a0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:08 compute-0 sudo[219052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcqzbtvloypcqtcasjpxfcxrqvyoimbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401528.1161969-1148-61441873739832/AnsiballZ_command.py'
Nov 29 07:32:08 compute-0 sudo[219052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:08 compute-0 python3.9[219054]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 14ff1f30-5059-58f1-9a23-69871bb275a1
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
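The task above replaces the libvirt secret used for Cephx authentication: the old secret is undefined by UUID, a new one is defined from /tmp/secret.xml, and its value is set in a later task (the sudo line at 07:32:10 below exports FSID and KEY for that step). A sketch of the same three steps; only the UUID comes from the log, the XML body is illustrative (the real secret.xml is templated by Ansible), and the base64 key is passed in rather than hard-coded:

    import subprocess

    UUID = "14ff1f30-5059-58f1-9a23-69871bb275a1"  # from the log
    SECRET_XML = f"""<secret ephemeral='no' private='no'>
      <uuid>{UUID}</uuid>
      <usage type='ceph'><name>client.openstack secret</name></usage>
    </secret>"""  # illustrative layout, not the deployed template

    def redefine_secret(base64_key: str) -> None:
        subprocess.run(["virsh", "secret-undefine", UUID], check=False)
        with open("/tmp/secret.xml", "w") as fh:
            fh.write(SECRET_XML)
        subprocess.run(["virsh", "secret-define", "--file", "/tmp/secret.xml"],
                       check=True)
        subprocess.run(["virsh", "secret-set-value", "--secret", UUID,
                        "--base64", base64_key], check=True)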
Nov 29 07:32:08 compute-0 polkitd[43449]: Registered Authentication Agent for unix-process:219056:384393 (system bus name :1.2883 [pkttyagent --process 219056 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:32:08 compute-0 polkitd[43449]: Unregistered Authentication Agent for unix-process:219056:384393 (system bus name :1.2883, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:32:08 compute-0 polkitd[43449]: Registered Authentication Agent for unix-process:219055:384393 (system bus name :1.2884 [pkttyagent --process 219055 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:32:08 compute-0 polkitd[43449]: Unregistered Authentication Agent for unix-process:219055:384393 (system bus name :1.2884, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:32:08 compute-0 sudo[219052]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:08 compute-0 ceph-mon[75050]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:09 compute-0 sshd-session[218777]: Connection closed by authenticating user root 143.14.121.41 port 41860 [preauth]
Nov 29 07:32:09 compute-0 python3.9[219216]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:09 compute-0 sudo[219368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkrwckpqeqnbapechxlfrwvhobhlngus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401529.4690585-1164-155665929634201/AnsiballZ_command.py'
Nov 29 07:32:09 compute-0 sudo[219368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:09 compute-0 sudo[219368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:10 compute-0 sudo[219521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lavweluxllogtzlrowoosoevhdyjeaea ; FSID=14ff1f30-5059-58f1-9a23-69871bb275a1 KEY=AQBznCppAAAAABAATpsmuZlSZuS833gbXPyFSA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401530.1380847-1172-198193314206638/AnsiballZ_command.py'
Nov 29 07:32:10 compute-0 sudo[219521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:10 compute-0 polkitd[43449]: Registered Authentication Agent for unix-process:219524:384594 (system bus name :1.2887 [pkttyagent --process 219524 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 07:32:10 compute-0 polkitd[43449]: Unregistered Authentication Agent for unix-process:219524:384594 (system bus name :1.2887, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 07:32:10 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 07:32:10 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.001s CPU time.
Nov 29 07:32:10 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 07:32:11 compute-0 sshd-session[219241]: Connection closed by authenticating user root 143.14.121.41 port 41874 [preauth]
Nov 29 07:32:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:14 compute-0 sudo[219521]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:14 compute-0 ceph-mon[75050]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:32:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
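The pg_autoscaler lines above follow a simple relation: each pool's pg target is its capacity ratio times its bias times a cluster-wide PG budget, here evidently 300 (consistent with mon_target_pg_per_osd = 100 across 3 OSDs backing this 60 GiB cluster; that split is an inference from the numbers, not stated in the log). The tiny targets are then quantized and, with no threshold crossed, left at each pool's current pg_num. Checking the '.mgr' and 'cephfs.cephfs.meta' lines:

    # Reproduce two pg_autoscaler targets from the logged inputs.
    PG_BUDGET = 300  # inferred: mon_target_pg_per_osd (100) * 3 OSDs

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * PG_BUDGET

    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-15
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-15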
Nov 29 07:32:14 compute-0 sshd-session[219530]: Connection closed by authenticating user root 143.14.121.41 port 41876 [preauth]
Nov 29 07:32:15 compute-0 sudo[219681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jabyhqighsysrtnbyzvhsgcmfkpawbln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401534.553135-1180-136413289010862/AnsiballZ_copy.py'
Nov 29 07:32:15 compute-0 sudo[219681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:15 compute-0 python3.9[219683]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:15 compute-0 sudo[219681]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:15 compute-0 sudo[219834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdzvwqbsocjjclkgzndgjqsoiwoqtmvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401535.443861-1188-261505965130469/AnsiballZ_stat.py'
Nov 29 07:32:15 compute-0 sudo[219834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:15 compute-0 ceph-mon[75050]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:15 compute-0 ceph-mon[75050]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:16 compute-0 python3.9[219836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:16 compute-0 sudo[219834]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:16 compute-0 sudo[219958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijjptqpuerucgzysnzgtzkzqbbvyeusw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401535.443861-1188-261505965130469/AnsiballZ_copy.py'
Nov 29 07:32:16 compute-0 sudo[219958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:16 compute-0 python3.9[219960]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401535.443861-1188-261505965130469/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:16 compute-0 sudo[219958]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:17 compute-0 ceph-mon[75050]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:17 compute-0 sudo[220110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gndmhtposjvwldwmgznxbuvbxputoirx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401537.089632-1204-58836558571623/AnsiballZ_file.py'
Nov 29 07:32:17 compute-0 sudo[220110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:17 compute-0 python3.9[220112]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:17 compute-0 sudo[220110]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:17 compute-0 sshd-session[219684]: Connection closed by authenticating user ftp 143.14.121.41 port 37632 [preauth]
Nov 29 07:32:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:18 compute-0 sudo[220262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfrxjjzjrqpkktzpfwsouitgfdgddhjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401537.8001006-1212-57374487505801/AnsiballZ_stat.py'
Nov 29 07:32:18 compute-0 sudo[220262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:18 compute-0 python3.9[220264]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:18 compute-0 sudo[220262]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:18 compute-0 sudo[220340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjgxlfqshrnqdufhmssssjycxkkliqmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401537.8001006-1212-57374487505801/AnsiballZ_file.py'
Nov 29 07:32:18 compute-0 sudo[220340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:18 compute-0 python3.9[220342]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:18 compute-0 sudo[220340]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:18 compute-0 ceph-mon[75050]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:19 compute-0 sudo[220494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyupmdkabemjalnrrxlgyhccrpqbcttz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401538.9463983-1224-211014829477113/AnsiballZ_stat.py'
Nov 29 07:32:19 compute-0 sudo[220494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:19 compute-0 python3.9[220496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:19 compute-0 sudo[220494]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:19 compute-0 sudo[220572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdsblojywlhgbdhyazngvpuqxdgggsts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401538.9463983-1224-211014829477113/AnsiballZ_file.py'
Nov 29 07:32:19 compute-0 sudo[220572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:19 compute-0 python3.9[220574]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.uw7tkgi6 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:19 compute-0 sudo[220572]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:20 compute-0 sudo[220724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhofeigmgyjvxjlsxsbakbfoabxesth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401540.0494094-1236-6237404194437/AnsiballZ_stat.py'
Nov 29 07:32:20 compute-0 sudo[220724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:20 compute-0 python3.9[220726]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:20 compute-0 sudo[220724]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:20 compute-0 sudo[220802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aesxlnipdzephrdoexwzszlxgujrumck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401540.0494094-1236-6237404194437/AnsiballZ_file.py'
Nov 29 07:32:20 compute-0 sudo[220802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:20 compute-0 sshd-session[220426]: Connection closed by authenticating user root 143.14.121.41 port 37634 [preauth]
Nov 29 07:32:21 compute-0 python3.9[220804]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:21 compute-0 sudo[220802]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:21 compute-0 ceph-mon[75050]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:21 compute-0 sudo[220954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbungdnycubvgbtzwkaucwsomncrlwrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401541.2764285-1249-825427121270/AnsiballZ_command.py'
Nov 29 07:32:21 compute-0 sudo[220954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:21 compute-0 python3.9[220956]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:32:21 compute-0 sudo[220954]: pam_unix(sudo:session): session closed for user root
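`nft -j list ruleset` returns libnftables' JSON form of the ruleset: a single object whose "nftables" array mixes metainfo, table, chain and rule objects, presumably registered here for comparison against the desired EDPM rule set. A minimal reader:

    import json
    import subprocess

    def ruleset_summary() -> dict:
        """Count object kinds in `nft -j list ruleset` output."""
        out = subprocess.run(["nft", "-j", "list", "ruleset"],
                             check=True, capture_output=True, text=True).stdout
        counts: dict = {}
        for obj in json.loads(out)["nftables"]:
            for kind in obj:  # each entry is e.g. {"table": {...}}
                counts[kind] = counts.get(kind, 0) + 1
        return counts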
Nov 29 07:32:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:22 compute-0 sudo[221109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqzfjrtvdtmophlaljdjegqvnerqvtfu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401542.0353734-1257-130730508587978/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 07:32:22 compute-0 sudo[221109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:22 compute-0 python3[221111]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
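
Note: edpm_nftables_from_files appears to aggregate the YAML rule definitions staged under /var/lib/edpm-config/firewall (for example the edpm-nftables-user-rules.yaml handled at 07:32:19) into the rule set that later tasks render into /etc/nftables/edpm-rules.nft; that flow is inferred from the task order in this log. To eyeball both sides on the host:

    # Rule sources the module reads, and the generated snippets they end up in
    ls -l /var/lib/edpm-config/firewall/
    ls -l /etc/nftables/edpm-*.nft
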
Nov 29 07:32:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:22 compute-0 sudo[221109]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:23 compute-0 sudo[221261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zndphgixotvhhqrxafaotqoarpheiowj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401543.037109-1265-62850829951106/AnsiballZ_stat.py'
Nov 29 07:32:23 compute-0 sudo[221261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:23 compute-0 python3.9[221263]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:23 compute-0 sudo[221261]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:23 compute-0 ceph-mon[75050]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:23 compute-0 sudo[221339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yomadumuqpvvsktlaizwphpszabpggna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401543.037109-1265-62850829951106/AnsiballZ_file.py'
Nov 29 07:32:23 compute-0 sudo[221339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:23 compute-0 sshd-session[221034]: Connection closed by authenticating user root 143.14.121.41 port 37646 [preauth]
Nov 29 07:32:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:24 compute-0 python3.9[221341]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:24 compute-0 sudo[221339]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:24 compute-0 ceph-mon[75050]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:24 compute-0 podman[221467]: 2025-11-29 07:32:24.854797674 +0000 UTC m=+0.048726160 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:32:24 compute-0 sudo[221509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbvvzurgqqaqnabsalvclugqnkzcvgjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401544.5311503-1277-271320680848600/AnsiballZ_stat.py'
Nov 29 07:32:24 compute-0 sudo[221509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:24 compute-0 podman[221514]: 2025-11-29 07:32:24.978575842 +0000 UTC m=+0.088183733 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
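
Note: the two health_status=healthy events above come from podman's healthcheck timers, wired to the '/openstack/healthcheck' test mounted into each container. A sketch for poking the same checks manually (container names from the log):

    # Re-run a container's healthcheck (exit 0 = healthy) and list healthy containers
    podman healthcheck run ovn_metadata_agent && echo "ovn_metadata_agent healthy"
    podman ps --filter health=healthy --format '{{.Names}}'
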
Nov 29 07:32:25 compute-0 python3.9[221515]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:25 compute-0 sudo[221509]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:25 compute-0 sudo[221616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpwnlqymcuvgbblhggusxztvgkxyfgwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401544.5311503-1277-271320680848600/AnsiballZ_file.py'
Nov 29 07:32:25 compute-0 sudo[221616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:25 compute-0 python3.9[221618]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:25 compute-0 sudo[221616]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:26 compute-0 sudo[221768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oepuubvxhzsumlfwpkudtsyrsuqundgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401545.8046768-1289-2177356423082/AnsiballZ_stat.py'
Nov 29 07:32:26 compute-0 sudo[221768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:26 compute-0 python3.9[221770]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:26 compute-0 sudo[221768]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:26 compute-0 sudo[221846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjlvzdpkwpvlcilqdojxytdlnxthlhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401545.8046768-1289-2177356423082/AnsiballZ_file.py'
Nov 29 07:32:26 compute-0 sudo[221846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:26 compute-0 python3.9[221848]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:26 compute-0 sudo[221846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:27 compute-0 sshd-session[221342]: Connection closed by authenticating user root 143.14.121.41 port 60276 [preauth]
Nov 29 07:32:27 compute-0 sudo[221998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmvwxkbvjhhtgtsyekqtmjgxxrgxsjfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401546.9607465-1301-126917374108264/AnsiballZ_stat.py'
Nov 29 07:32:27 compute-0 sudo[221998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:27 compute-0 python3.9[222000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:27 compute-0 sudo[221998]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:27 compute-0 ceph-mon[75050]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:27 compute-0 sudo[222078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrahbxdnosegewvuezxfnpclaxhpaati ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401546.9607465-1301-126917374108264/AnsiballZ_file.py'
Nov 29 07:32:27 compute-0 sudo[222078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:28 compute-0 python3.9[222080]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:28 compute-0 sudo[222078]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:28 compute-0 sudo[222230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tswaohescxutzyyqgnigkneqjzbrzcsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401548.2698493-1313-83698428842404/AnsiballZ_stat.py'
Nov 29 07:32:28 compute-0 sudo[222230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:28 compute-0 python3.9[222232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:28 compute-0 sudo[222230]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:29 compute-0 sudo[222355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyygdukzmlylnjqxutovnchyznkxtwsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401548.2698493-1313-83698428842404/AnsiballZ_copy.py'
Nov 29 07:32:29 compute-0 sudo[222355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:29 compute-0 ceph-mon[75050]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:29 compute-0 python3.9[222357]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401548.2698493-1313-83698428842404/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
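
Note: the copy task records the SHA-1 of the rendered ruleset, which makes the deployment easy to verify after the fact:

    # Confirm the deployed file matches the checksum logged above
    sha1sum /etc/nftables/edpm-rules.nft
    # expected: ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5
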
Nov 29 07:32:29 compute-0 sudo[222355]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:29 compute-0 sshd-session[222001]: Invalid user minecraft from 143.14.121.41 port 60290
Nov 29 07:32:30 compute-0 sudo[222507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzrgaokzsamoyfmgimauanxeushlqbwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401549.7106535-1328-82548491909640/AnsiballZ_file.py'
Nov 29 07:32:30 compute-0 sudo[222507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:30 compute-0 sshd-session[222001]: Connection closed by invalid user minecraft 143.14.121.41 port 60290 [preauth]
Nov 29 07:32:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:31 compute-0 sudo[222512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:31 compute-0 sudo[222512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:31 compute-0 sudo[222512]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:31 compute-0 sudo[222537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:32:31 compute-0 sudo[222537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:31 compute-0 sudo[222537]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:31 compute-0 sudo[222562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:31 compute-0 sudo[222562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:31 compute-0 sudo[222562]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:31 compute-0 sudo[222587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:32:31 compute-0 sudo[222587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:31 compute-0 python3.9[222509]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:31 compute-0 sudo[222507]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:32 compute-0 sudo[222587]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:32 compute-0 sudo[222792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdnzxfadxouchctklkykrbgmqwokshhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401551.9764001-1336-48712586037388/AnsiballZ_command.py'
Nov 29 07:32:32 compute-0 sudo[222792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:32 compute-0 sshd-session[222510]: Invalid user huawei from 143.14.121.41 port 60294
Nov 29 07:32:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:32:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:32:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:32:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:32:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:32:32 compute-0 python3.9[222794]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
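
Note: the five generated snippets are concatenated in a fixed order (chains, flushes, rules, update-jumps, jumps) and dry-run through nft before anything is installed; -c parses and validates the whole ruleset without touching the kernel. The logged command, reflowed for readability:

    # Check-only pass over the assembled ruleset
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
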
Nov 29 07:32:32 compute-0 sudo[222792]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:32 compute-0 sshd-session[222510]: Connection closed by invalid user huawei 143.14.121.41 port 60294 [preauth]
Nov 29 07:32:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:33 compute-0 sudo[222949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gggynyvezlyvxnwmnuvrevjgirnietfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401552.8056793-1344-206233035239847/AnsiballZ_blockinfile.py'
Nov 29 07:32:33 compute-0 sudo[222949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:33 compute-0 python3.9[222951]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
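
Note: given the block= and marker= parameters above, the managed section blockinfile maintains in /etc/sysconfig/nftables.conf should read as follows (reconstructed from the logged arguments; validate='nft -c -f %s' re-checks the edited file before it replaces the original):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
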
Nov 29 07:32:33 compute-0 sudo[222949]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:33 compute-0 ceph-mon[75050]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:32:33 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4a98ee9b-638c-492e-8682-e11f83d59a38 does not exist
Nov 29 07:32:33 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1699d3a6-a631-4ce0-b3ff-067cae10bf8a does not exist
Nov 29 07:32:33 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 34b19e7b-91b0-4fa6-8e95-f75dc776caaf does not exist
Nov 29 07:32:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:32:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:32:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:32:33 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:32:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:32:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:32:34 compute-0 sudo[223055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:34 compute-0 sudo[223055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:34 compute-0 sudo[223055]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:34 compute-0 sshd-session[222823]: Invalid user student from 143.14.121.41 port 46576
Nov 29 07:32:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:34 compute-0 sudo[223146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbfejsdqkdvpfmohxztceopwmkrzmvpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401553.7565532-1353-149644990001435/AnsiballZ_command.py'
Nov 29 07:32:34 compute-0 sudo[223146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:34 compute-0 sudo[223111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:32:34 compute-0 sudo[223111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:34 compute-0 sudo[223111]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:34 compute-0 sudo[223154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:34 compute-0 sudo[223154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:34 compute-0 sudo[223154]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:34 compute-0 sudo[223179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
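
Note: cephadm executes ceph-volume inside a short-lived container (the bold_poincare container created just below). A read-only sketch of the same batch call for inspection, assuming a cephadm binary on PATH equivalent to the hash-suffixed copy under /var/lib/ceph:

    # --report prints the OSD layout ceph-volume would create, without changing anything
    cephadm ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- \
        lvm batch --report /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2
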
Nov 29 07:32:34 compute-0 sudo[223179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:34 compute-0 python3.9[223151]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:32:34 compute-0 sudo[223146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:34 compute-0 sshd-session[222823]: Connection closed by invalid user student 143.14.121.41 port 46576 [preauth]
Nov 29 07:32:34 compute-0 podman[223319]: 2025-11-29 07:32:34.667524455 +0000 UTC m=+0.021150296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:32:34 compute-0 sudo[223407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtmivztdwzumhqkdlceyhecwldoscrjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401554.5430193-1361-171838302885059/AnsiballZ_stat.py'
Nov 29 07:32:34 compute-0 sudo[223407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:35 compute-0 python3.9[223410]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:32:35 compute-0 sudo[223407]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:35 compute-0 ceph-mon[75050]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:32:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:32:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:32:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:32:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:32:35 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:32:35 compute-0 podman[223319]: 2025-11-29 07:32:35.36223072 +0000 UTC m=+0.715856571 container create bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:32:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:32:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:32:35 compute-0 sudo[223563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmrxqldybhtfvkuayanvanuztnksrki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401555.2976182-1369-257021245406053/AnsiballZ_command.py'
Nov 29 07:32:35 compute-0 sudo[223563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:35 compute-0 python3.9[223565]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
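
Note: the apply happens in two steps: edpm-chains.nft was loaded on its own at 07:32:34 (table and chain creation is idempotent), and only then are the flush, rules and jump-update files streamed through one nft invocation, so the rule swap lands as a single transaction. Condensed from the two logged commands:

    # Step 1: ensure tables/chains exist; step 2: replace the rules atomically
    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
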
Nov 29 07:32:35 compute-0 sudo[223563]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:35 compute-0 systemd[1]: Started libpod-conmon-bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f.scope.
Nov 29 07:32:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:36 compute-0 sudo[223723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfhuwfudkfrvixqceqsnyuypuoomhiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401556.0407043-1377-57987336519112/AnsiballZ_file.py'
Nov 29 07:32:36 compute-0 sudo[223723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:36 compute-0 python3.9[223725]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
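
Note: edpm-rules.nft.changed is a plain touch-file handshake: created at 07:32:31 when the copy task changed the ruleset, stat-ed at 07:32:35 to decide whether the reload runs, and deleted here once the apply succeeded. The same pattern in shell terms:

    # Gate an expensive reload on a change marker
    test -e /etc/nftables/edpm-rules.nft.changed && echo "rules changed; reload needed"
    rm -f /etc/nftables/edpm-rules.nft.changed   # clear the marker after a successful apply
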
Nov 29 07:32:36 compute-0 sudo[223723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:37 compute-0 sshd-session[223322]: Connection closed by authenticating user root 143.14.121.41 port 46590 [preauth]
Nov 29 07:32:37 compute-0 podman[223319]: 2025-11-29 07:32:37.622082721 +0000 UTC m=+2.975708572 container init bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:32:37 compute-0 podman[223319]: 2025-11-29 07:32:37.630484923 +0000 UTC m=+2.984110764 container start bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:32:37 compute-0 bold_poincare[223596]: 167 167
Nov 29 07:32:37 compute-0 systemd[1]: libpod-bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f.scope: Deactivated successfully.
Nov 29 07:32:37 compute-0 sudo[223886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfjubrklajoucehqlqnidonrrughbmyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401557.3105414-1385-198819059260849/AnsiballZ_stat.py'
Nov 29 07:32:37 compute-0 sudo[223886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:37 compute-0 python3.9[223892]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:37 compute-0 sudo[223886]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:38 compute-0 sudo[224013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wckcwjjrryqkoycytnvkffzriaudgiav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401557.3105414-1385-198819059260849/AnsiballZ_copy.py'
Nov 29 07:32:38 compute-0 sudo[224013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:38 compute-0 python3.9[224015]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401557.3105414-1385-198819059260849/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:38 compute-0 sudo[224013]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:38 compute-0 ceph-mon[75050]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:38 compute-0 podman[223319]: 2025-11-29 07:32:38.887498198 +0000 UTC m=+4.241124049 container attach bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:32:38 compute-0 podman[223319]: 2025-11-29 07:32:38.889266016 +0000 UTC m=+4.242891897 container died bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:32:39 compute-0 sudo[224165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqwyfhnvbddvoaliouxiuuuctpuzcph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401558.6888695-1400-222165114515246/AnsiballZ_stat.py'
Nov 29 07:32:39 compute-0 sudo[224165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:39 compute-0 sshd-session[223807]: Connection closed by authenticating user root 143.14.121.41 port 46594 [preauth]
Nov 29 07:32:39 compute-0 python3.9[224167]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:39 compute-0 sudo[224165]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:39 compute-0 sudo[224290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnljeaskobwwyjjxkznrfiouhtiowgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401558.6888695-1400-222165114515246/AnsiballZ_copy.py'
Nov 29 07:32:39 compute-0 sudo[224290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:40 compute-0 python3.9[224292]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401558.6888695-1400-222165114515246/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:40 compute-0 sudo[224290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:40 compute-0 sudo[224442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maqbsoafbmnjajeglcuhkwfyfbmkthme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401560.2250943-1415-33909751663000/AnsiballZ_stat.py'
Nov 29 07:32:40 compute-0 sudo[224442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:40 compute-0 python3.9[224444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:32:40 compute-0 sudo[224442]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:41 compute-0 ceph-mon[75050]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:41 compute-0 ceph-mon[75050]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b827cf6d7eee26fdbd789fcf3c6a5f33321dabb9b39a10ecf058b0f37b0498f3-merged.mount: Deactivated successfully.
Nov 29 07:32:41 compute-0 sudo[224566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyqandxwsdcsnrxrfxdpyvdzznxeoxfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401560.2250943-1415-33909751663000/AnsiballZ_copy.py'
Nov 29 07:32:41 compute-0 sudo[224566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:41 compute-0 python3.9[224568]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401560.2250943-1415-33909751663000/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:32:41 compute-0 sudo[224566]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:42 compute-0 sudo[224718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrstmbosfamfbppcecmjvnpdzhkkpqwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401561.9032848-1430-234659547518797/AnsiballZ_systemd.py'
Nov 29 07:32:42 compute-0 sudo[224718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:42 compute-0 podman[223319]: 2025-11-29 07:32:42.301808691 +0000 UTC m=+7.655434552 container remove bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:32:42 compute-0 ceph-mon[75050]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:42 compute-0 sshd-session[224168]: Connection closed by authenticating user root 143.14.121.41 port 46606 [preauth]
Nov 29 07:32:42 compute-0 systemd[1]: libpod-conmon-bb62f2e96ae21c216e44769bb8447c94708775b9aec19807300e50752fac074f.scope: Deactivated successfully.
Nov 29 07:32:42 compute-0 podman[224728]: 2025-11-29 07:32:42.497868619 +0000 UTC m=+0.034540557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:32:42 compute-0 python3.9[224720]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
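
Note: this single ansible.builtin.systemd call bundles a daemon reload (picking up the unit files copied above), enablement, and a restart of the new target; the hand-run equivalent:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target
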
Nov 29 07:32:42 compute-0 systemd[1]: Reloading.
Nov 29 07:32:42 compute-0 systemd-sysv-generator[224774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:42 compute-0 systemd-rc-local-generator[224770]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:43 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 07:32:43 compute-0 sudo[224718]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:43 compute-0 podman[224728]: 2025-11-29 07:32:43.311281502 +0000 UTC m=+0.847953350 container create c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:32:43 compute-0 systemd[1]: Started libpod-conmon-c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b.scope.
Nov 29 07:32:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357111348ac0869300cd3709dffa14eddd4ef0adf1665cf04d83dfc3e563781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357111348ac0869300cd3709dffa14eddd4ef0adf1665cf04d83dfc3e563781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357111348ac0869300cd3709dffa14eddd4ef0adf1665cf04d83dfc3e563781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357111348ac0869300cd3709dffa14eddd4ef0adf1665cf04d83dfc3e563781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357111348ac0869300cd3709dffa14eddd4ef0adf1665cf04d83dfc3e563781/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:43 compute-0 ceph-mon[75050]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:43 compute-0 podman[224728]: 2025-11-29 07:32:43.705217228 +0000 UTC m=+1.241889096 container init c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 29 07:32:43 compute-0 podman[224728]: 2025-11-29 07:32:43.716675336 +0000 UTC m=+1.253347184 container start c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:32:43 compute-0 podman[224728]: 2025-11-29 07:32:43.721778027 +0000 UTC m=+1.258449895 container attach c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:32:43 compute-0 sudo[224938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyuwyuvvmrbvgfabolfqbmwvavglmylr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401563.4438481-1438-12461851271763/AnsiballZ_systemd.py'
Nov 29 07:32:43 compute-0 sudo[224938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:32:44 compute-0 python3.9[224940]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 07:32:44 compute-0 systemd[1]: Reloading.
Nov 29 07:32:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:44 compute-0 systemd-rc-local-generator[224962]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:44 compute-0 systemd-sysv-generator[224968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:44 compute-0 systemd[1]: Reloading.
Nov 29 07:32:44 compute-0 systemd-rc-local-generator[225019]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:32:44 compute-0 systemd-sysv-generator[225022]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:32:44 compute-0 goofy_heyrovsky[224901]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:32:44 compute-0 goofy_heyrovsky[224901]: --> relative data size: 1.0
Nov 29 07:32:44 compute-0 goofy_heyrovsky[224901]: --> All data devices are unavailable
Nov 29 07:32:44 compute-0 podman[224728]: 2025-11-29 07:32:44.841269883 +0000 UTC m=+2.377941761 container died c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:32:44 compute-0 systemd[1]: libpod-c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b.scope: Deactivated successfully.
Nov 29 07:32:44 compute-0 systemd[1]: libpod-c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b.scope: Consumed 1.067s CPU time.
Nov 29 07:32:44 compute-0 sudo[224938]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:44 compute-0 ceph-mon[75050]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:45 compute-0 sshd-session[164553]: Connection closed by 192.168.122.30 port 53642
Nov 29 07:32:45 compute-0 sshd-session[164547]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:32:45 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 07:32:45 compute-0 systemd[1]: session-48.scope: Consumed 3min 48.211s CPU time.
Nov 29 07:32:45 compute-0 systemd-logind[807]: Session 48 logged out. Waiting for processes to exit.
Nov 29 07:32:45 compute-0 systemd-logind[807]: Removed session 48.
Nov 29 07:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5357111348ac0869300cd3709dffa14eddd4ef0adf1665cf04d83dfc3e563781-merged.mount: Deactivated successfully.
Nov 29 07:32:45 compute-0 sshd-session[224742]: Connection closed by authenticating user root 143.14.121.41 port 60712 [preauth]
Nov 29 07:32:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:46 compute-0 podman[224728]: 2025-11-29 07:32:46.102468364 +0000 UTC m=+3.639140212 container remove c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:32:46 compute-0 sudo[223179]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:46 compute-0 systemd[1]: libpod-conmon-c9fe756d8d05403a8c3022481b30e38bf8be09c5f4e3d83edfa392cd10a1c38b.scope: Deactivated successfully.
Nov 29 07:32:46 compute-0 sudo[225073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:46 compute-0 sudo[225073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:46 compute-0 sudo[225073]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:46 compute-0 sudo[225098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:32:46 compute-0 sudo[225098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:46 compute-0 sudo[225098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:46 compute-0 sudo[225123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:46 compute-0 sudo[225123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:46 compute-0 sudo[225123]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:46 compute-0 sudo[225148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:32:46 compute-0 sudo[225148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:46 compute-0 podman[225212]: 2025-11-29 07:32:46.788817037 +0000 UTC m=+0.023227984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:32:47 compute-0 ceph-mon[75050]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:47 compute-0 podman[225212]: 2025-11-29 07:32:47.274863945 +0000 UTC m=+0.509274852 container create 04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rhodes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:32:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:48 compute-0 systemd[1]: Started libpod-conmon-04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c.scope.
Nov 29 07:32:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:48 compute-0 sshd-session[225071]: Connection closed by authenticating user root 143.14.121.41 port 60720 [preauth]
Nov 29 07:32:48 compute-0 podman[225212]: 2025-11-29 07:32:48.531290962 +0000 UTC m=+1.765701909 container init 04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:32:48 compute-0 podman[225212]: 2025-11-29 07:32:48.543289134 +0000 UTC m=+1.777700011 container start 04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:32:48 compute-0 tender_rhodes[225228]: 167 167
Nov 29 07:32:48 compute-0 systemd[1]: libpod-04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c.scope: Deactivated successfully.
Nov 29 07:32:49 compute-0 podman[225212]: 2025-11-29 07:32:49.551988422 +0000 UTC m=+2.786399319 container attach 04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rhodes, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:32:49 compute-0 podman[225212]: 2025-11-29 07:32:49.553789592 +0000 UTC m=+2.788200529 container died 04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:32:49 compute-0 ceph-mon[75050]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdbc89b1dee0b18b9b125645ff32937b55f771595755867352f3d8c6b1632c05-merged.mount: Deactivated successfully.
Nov 29 07:32:50 compute-0 podman[225212]: 2025-11-29 07:32:50.798782098 +0000 UTC m=+4.033193005 container remove 04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rhodes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:32:50 compute-0 systemd[1]: libpod-conmon-04ef096ae2821f6441a42de3b813468998849d467daa5ee2118accb3ef971e0c.scope: Deactivated successfully.
Nov 29 07:32:51 compute-0 podman[225254]: 2025-11-29 07:32:50.983724473 +0000 UTC m=+0.026838677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:32:51 compute-0 sshd-session[225231]: Connection closed by authenticating user root 143.14.121.41 port 60722 [preauth]
Nov 29 07:32:51 compute-0 ceph-mon[75050]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:51 compute-0 podman[225254]: 2025-11-29 07:32:51.869471519 +0000 UTC m=+0.912585683 container create a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:32:51 compute-0 systemd[1]: Started libpod-conmon-a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495.scope.
Nov 29 07:32:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3154fe69d1b238a232e04218a2b0482b3959ff6692f60eda6646ab44d50376f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3154fe69d1b238a232e04218a2b0482b3959ff6692f60eda6646ab44d50376f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3154fe69d1b238a232e04218a2b0482b3959ff6692f60eda6646ab44d50376f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3154fe69d1b238a232e04218a2b0482b3959ff6692f60eda6646ab44d50376f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:32:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:52 compute-0 podman[225254]: 2025-11-29 07:32:52.161896893 +0000 UTC m=+1.205011067 container init a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:32:52 compute-0 podman[225254]: 2025-11-29 07:32:52.17927509 +0000 UTC m=+1.222389284 container start a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:32:52 compute-0 podman[225254]: 2025-11-29 07:32:52.36583847 +0000 UTC m=+1.408952724 container attach a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]: {
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:     "0": [
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:         {
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "devices": [
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "/dev/loop3"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             ],
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_name": "ceph_lv0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_size": "21470642176",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "name": "ceph_lv0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "tags": {
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cluster_name": "ceph",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.crush_device_class": "",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.encrypted": "0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osd_id": "0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.type": "block",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.vdo": "0"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             },
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "type": "block",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "vg_name": "ceph_vg0"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:         }
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:     ],
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:     "1": [
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:         {
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "devices": [
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "/dev/loop4"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             ],
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_name": "ceph_lv1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_size": "21470642176",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "name": "ceph_lv1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "tags": {
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cluster_name": "ceph",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.crush_device_class": "",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.encrypted": "0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osd_id": "1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.type": "block",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.vdo": "0"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             },
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "type": "block",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "vg_name": "ceph_vg1"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:         }
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:     ],
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:     "2": [
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:         {
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "devices": [
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "/dev/loop5"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             ],
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_name": "ceph_lv2",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_size": "21470642176",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "name": "ceph_lv2",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "tags": {
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.cluster_name": "ceph",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.crush_device_class": "",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.encrypted": "0",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osd_id": "2",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.type": "block",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:                 "ceph.vdo": "0"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             },
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "type": "block",
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:             "vg_name": "ceph_vg2"
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:         }
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]:     ]
Nov 29 07:32:52 compute-0 gifted_goldberg[225271]: }
Nov 29 07:32:52 compute-0 systemd[1]: libpod-a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495.scope: Deactivated successfully.
Nov 29 07:32:52 compute-0 podman[225254]: 2025-11-29 07:32:52.984714162 +0000 UTC m=+2.027828366 container died a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:32:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:53 compute-0 ceph-mon[75050]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3154fe69d1b238a232e04218a2b0482b3959ff6692f60eda6646ab44d50376f7-merged.mount: Deactivated successfully.
Nov 29 07:32:54 compute-0 sshd-session[225270]: Invalid user test from 143.14.121.41 port 34830
Nov 29 07:32:55 compute-0 sshd-session[225270]: Connection closed by invalid user test 143.14.121.41 port 34830 [preauth]
Nov 29 07:32:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:56 compute-0 podman[225254]: 2025-11-29 07:32:56.552270938 +0000 UTC m=+5.595385102 container remove a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:32:56 compute-0 systemd[1]: libpod-conmon-a24956803684f426ea65220b605ab3560317b1ab64226f481ab5a624d039d495.scope: Deactivated successfully.
Nov 29 07:32:56 compute-0 sudo[225148]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:56 compute-0 ceph-mon[75050]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:56 compute-0 podman[225294]: 2025-11-29 07:32:56.664033405 +0000 UTC m=+1.706328754 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:32:56 compute-0 sudo[225323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:56 compute-0 sudo[225323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:56 compute-0 podman[225304]: 2025-11-29 07:32:56.710043398 +0000 UTC m=+1.077609502 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:32:56 compute-0 sudo[225323]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:56 compute-0 sudo[225365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:32:56 compute-0 sudo[225365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:56 compute-0 sudo[225365]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:56 compute-0 sudo[225390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:32:56 compute-0 sudo[225390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:56 compute-0 sudo[225390]: pam_unix(sudo:session): session closed for user root
Nov 29 07:32:56 compute-0 sudo[225415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:32:56 compute-0 sudo[225415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:32:57 compute-0 podman[225481]: 2025-11-29 07:32:57.372823375 +0000 UTC m=+0.041655034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:32:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:58 compute-0 podman[225481]: 2025-11-29 07:32:58.133491078 +0000 UTC m=+0.802322687 container create 114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:32:58 compute-0 ceph-mon[75050]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:58 compute-0 systemd[1]: Started libpod-conmon-114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5.scope.
Nov 29 07:32:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:32:58 compute-0 sshd-session[225315]: Connection closed by authenticating user root 143.14.121.41 port 34840 [preauth]
Nov 29 07:32:58 compute-0 podman[225481]: 2025-11-29 07:32:58.657762975 +0000 UTC m=+1.326594644 container init 114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:32:58 compute-0 podman[225481]: 2025-11-29 07:32:58.667276345 +0000 UTC m=+1.336107924 container start 114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:32:58 compute-0 funny_hopper[225497]: 167 167
Nov 29 07:32:58 compute-0 systemd[1]: libpod-114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5.scope: Deactivated successfully.
Nov 29 07:32:58 compute-0 conmon[225497]: conmon 114971bb65c7343f0399 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5.scope/container/memory.events
Nov 29 07:32:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:32:58 compute-0 podman[225481]: 2025-11-29 07:32:58.765357826 +0000 UTC m=+1.434189485 container attach 114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:32:58 compute-0 podman[225481]: 2025-11-29 07:32:58.765921762 +0000 UTC m=+1.434753371 container died 114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7696e0a9f8092ecced398a721dcca000d086842c1097bb3b5f18a0af25eb746-merged.mount: Deactivated successfully.
Nov 29 07:32:59 compute-0 ceph-mon[75050]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:32:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:32:59.747 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:32:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:32:59.749 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:32:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:32:59.749 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:33:00 compute-0 podman[225481]: 2025-11-29 07:33:00.096428941 +0000 UTC m=+2.765260520 container remove 114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:33:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:00 compute-0 systemd[1]: libpod-conmon-114971bb65c7343f03996a09f06bff15ffb5324b953ba05e381ba9fead42a3a5.scope: Deactivated successfully.
Nov 29 07:33:00 compute-0 podman[225523]: 2025-11-29 07:33:00.335874233 +0000 UTC m=+0.033737297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:33:00 compute-0 sshd-session[225537]: Accepted publickey for zuul from 192.168.122.30 port 58076 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:33:00 compute-0 systemd-logind[807]: New session 49 of user zuul.
Nov 29 07:33:00 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 29 07:33:00 compute-0 sshd-session[225537]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:33:00 compute-0 podman[225523]: 2025-11-29 07:33:00.754122959 +0000 UTC m=+0.451985973 container create df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:33:00 compute-0 systemd[1]: Started libpod-conmon-df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32.scope.
Nov 29 07:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e63176299a80c52052cb504790de98bbc6a7b47ee2a8d7d23b7e7b447b97cc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e63176299a80c52052cb504790de98bbc6a7b47ee2a8d7d23b7e7b447b97cc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e63176299a80c52052cb504790de98bbc6a7b47ee2a8d7d23b7e7b447b97cc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e63176299a80c52052cb504790de98bbc6a7b47ee2a8d7d23b7e7b447b97cc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:33:01 compute-0 podman[225523]: 2025-11-29 07:33:01.373886606 +0000 UTC m=+1.071749630 container init df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:33:01 compute-0 podman[225523]: 2025-11-29 07:33:01.386008429 +0000 UTC m=+1.083871443 container start df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:33:01 compute-0 podman[225523]: 2025-11-29 07:33:01.491873184 +0000 UTC m=+1.189736158 container attach df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:33:01 compute-0 ceph-mon[75050]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:01 compute-0 sshd-session[225513]: Connection closed by authenticating user root 143.14.121.41 port 34850 [preauth]
Nov 29 07:33:01 compute-0 python3.9[225698]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:33:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]: {
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "osd_id": 2,
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "type": "bluestore"
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:     },
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "osd_id": 1,
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "type": "bluestore"
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:     },
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "osd_id": 0,
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:         "type": "bluestore"
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]:     }
Nov 29 07:33:02 compute-0 infallible_wozniak[225595]: }
Nov 29 07:33:02 compute-0 systemd[1]: libpod-df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32.scope: Deactivated successfully.
Nov 29 07:33:02 compute-0 systemd[1]: libpod-df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32.scope: Consumed 1.135s CPU time.
Nov 29 07:33:02 compute-0 podman[225523]: 2025-11-29 07:33:02.510556697 +0000 UTC m=+2.208419691 container died df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:33:02 compute-0 python3.9[225890]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:33:03 compute-0 network[225907]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:33:03 compute-0 network[225908]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:33:03 compute-0 network[225909]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:33:03 compute-0 ceph-mon[75050]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e63176299a80c52052cb504790de98bbc6a7b47ee2a8d7d23b7e7b447b97cc8-merged.mount: Deactivated successfully.
Nov 29 07:33:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:33:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5468 writes, 23K keys, 5468 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5468 writes, 818 syncs, 6.68 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b27090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x558198b271f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:33:05
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:06 compute-0 ceph-mon[75050]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:06 compute-0 podman[225523]: 2025-11-29 07:33:06.552890042 +0000 UTC m=+6.250753056 container remove df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:33:06 compute-0 systemd[1]: libpod-conmon-df594fe1404b7d56be0903fb057f1e438ceefc27fde74d8d0357861a6a7f2c32.scope: Deactivated successfully.
Nov 29 07:33:06 compute-0 sudo[225415]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:33:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:33:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:33:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:33:07 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:33:07 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c8d91e29-332a-4aa6-b37b-651ebf3b5077 does not exist
Nov 29 07:33:07 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0fd5dd0b-f40e-4f76-9cc4-bd6ec884bde7 does not exist
Nov 29 07:33:07 compute-0 sudo[226036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:33:07 compute-0 sudo[226036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:07 compute-0 sudo[226036]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:07 compute-0 sudo[226063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:33:07 compute-0 sudo[226063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:33:07 compute-0 sudo[226063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:07 compute-0 ceph-mon[75050]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:33:07 compute-0 sudo[226236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urvibbksqyodmzagprvxuqqthoktxlkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401587.524781-47-187211354831074/AnsiballZ_setup.py'
Nov 29 07:33:07 compute-0 sudo[226236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:07 compute-0 sshd-session[225920]: Connection closed by authenticating user root 143.14.121.41 port 47190 [preauth]
Nov 29 07:33:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:08 compute-0 python3.9[226238]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 07:33:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:33:08 compute-0 sudo[226236]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:09 compute-0 sudo[226322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywvgyhkqyqbbpuwvkjjjevjkvqgbouew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401587.524781-47-187211354831074/AnsiballZ_dnf.py'
Nov 29 07:33:09 compute-0 sudo[226322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:09 compute-0 python3.9[226324]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:33:09 compute-0 ceph-mon[75050]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:09 compute-0 sshd-session[226245]: Connection closed by authenticating user root 143.14.121.41 port 47192 [preauth]
Nov 29 07:33:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:33:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 6536 writes, 27K keys, 6536 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6536 writes, 1117 syncs, 5.85 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 273 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.019       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f949605090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9496051f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
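The dump above repeats the same counter lines once per column family, which makes it tedious to eyeball across snapshots. Below is a minimal scraping sketch, assuming the journalctl capture is available as plain text; the regexes are written against exactly the line shapes shown above, and the function and field names are ad hoc (this is not a RocksDB or Ceph API).

    import re

    # Match the two DB-level counter lines from a "DUMPING STATS" dump,
    # e.g. "Cumulative writes: 5476 writes, 23K keys, ..." and
    # "Cumulative WAL: 5476 writes, 797 syncs, ...".
    CUM_WRITES = re.compile(r"Cumulative writes: (\d+) writes, (\d+K?) keys")
    CUM_WAL = re.compile(r"Cumulative WAL: (\d+) writes, (\d+) syncs")

    def scan(log_text):
        """Return one dict per dump with the cumulative write/WAL counters."""
        out = []
        for line in log_text.splitlines():
            m = CUM_WRITES.search(line)
            if m:
                out.append({"writes": int(m.group(1)), "keys": m.group(2)})
            m = CUM_WAL.search(line)
            if m and out:
                # e.g. 5476 writes / 797 syncs ~= 6.87 writes per sync
                out[-1]["wal_syncs"] = int(m.group(2))
        return out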
Nov 29 07:33:10 compute-0 ceph-mon[75050]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:12 compute-0 sshd-session[226326]: Connection closed by authenticating user root 143.14.121.41 port 47206 [preauth]
Nov 29 07:33:13 compute-0 ceph-mon[75050]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:33:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
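Every pg_autoscaler line in the run above follows one fixed computation: pg target = (fraction of raw space used) x bias x 300. The constant 300 is inferred, not logged: it is consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs (plausible if the 60 GiB total in the pgmap lines is three 20 GiB OSDs). A quick check against the values actually printed:

    # Reproduce the pg_autoscaler arithmetic from the log lines above.
    # 300 = assumed 3 OSDs * mon_target_pg_per_osd (default 100); inferred.
    samples = [
        # (pool, fraction of space used, bias, pg target as printed)
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        (".rgw.root",          2.5436283128215145e-07, 1.0, 7.630884938464544e-05),
    ]
    for pool, used, bias, target in samples:
        assert abs(used * bias * 300 - target) < 1e-15, pool

The raw target is then quantized to a power of two and not dropped below the pool's current/minimum pg_num, which appears to be how 0.00216 becomes 1 for '.mgr' while the zero-usage pools hold at their current 32 (or 16 for the bias-4 metadata pool).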
Nov 29 07:33:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:16 compute-0 sudo[226322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:33:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Cumulative writes: 5476 writes, 23K keys, 5476 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5476 writes, 797 syncs, 6.87 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
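The ratios in the DB Stats block above are direct quotients of the adjacent counters: 5476 writes / 797 syncs ≈ 6.87 writes per sync cumulatively, and 180 / 90 = 2.00 for the 600 s interval, i.e. this OSD is currently syncing its WAL once per two writes. The MB/s figures are likewise just the rounded quotient of ingest over the uptime shown (0.02 GB over 1200.5 s ≈ 0.02 MB/s).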
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.39              0.00         1    0.392       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc49090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.5 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560f3dc491f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 07:33:17 compute-0 sudo[226479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjgxwoerdoynyqtgccamdnqyeodhyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401596.6703873-59-95098381221587/AnsiballZ_stat.py'
Nov 29 07:33:17 compute-0 sudo[226479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:17 compute-0 python3.9[226481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:33:17 compute-0 sudo[226479]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:17 compute-0 sshd-session[226328]: Invalid user test from 143.14.121.41 port 35140
Nov 29 07:33:17 compute-0 ceph-mon[75050]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:17 compute-0 sshd-session[226328]: Connection closed by invalid user test 143.14.121.41 port 35140 [preauth]
Nov 29 07:33:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:18 compute-0 sudo[226632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msadtzsozyhrxflyzrtfmyiabvagvrvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401597.6025941-69-262157082645185/AnsiballZ_command.py'
Nov 29 07:33:18 compute-0 sudo[226632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:18 compute-0 python3.9[226634]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:18 compute-0 sudo[226632]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:18 compute-0 sudo[226786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlrwrphfutzqudkmorftcdiyowefbcrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401598.6368737-79-48112356084389/AnsiballZ_stat.py'
Nov 29 07:33:18 compute-0 sudo[226786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:18 compute-0 ceph-mon[75050]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:18 compute-0 ceph-mon[75050]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:19 compute-0 python3.9[226788]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:33:19 compute-0 sudo[226786]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:19 compute-0 sudo[226938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzxcpwwyktbmyimhmknvqhplbwokjkbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401599.4105978-87-222060149832877/AnsiballZ_command.py'
Nov 29 07:33:19 compute-0 sudo[226938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:19 compute-0 sshd-session[226605]: Invalid user test from 143.14.121.41 port 35156
Nov 29 07:33:19 compute-0 python3.9[226940]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:33:19 compute-0 sudo[226938]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:20 compute-0 sshd-session[226605]: Connection closed by invalid user test 143.14.121.41 port 35156 [preauth]
Nov 29 07:33:20 compute-0 sudo[227092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvdnjfbafsqzdhascnrxuwuiammfidna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401600.1704779-95-185299466928726/AnsiballZ_stat.py'
Nov 29 07:33:20 compute-0 sudo[227092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:20 compute-0 python3.9[227094]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:33:20 compute-0 sudo[227092]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:21 compute-0 sudo[227216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nknquypbbgeyohlulxupxnmirwkixzoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401600.1704779-95-185299466928726/AnsiballZ_copy.py'
Nov 29 07:33:21 compute-0 sudo[227216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:21 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Check health
Nov 29 07:33:21 compute-0 python3.9[227218]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401600.1704779-95-185299466928726/.source.iscsi _original_basename=.ayddu1iz follow=False checksum=9125232e0ceb23ebcb8ecc2d0e270cc63cc7948c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:21 compute-0 sudo[227216]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:22 compute-0 sudo[227368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmfztrgkihmcgbznmjggckqanathmqbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401601.894261-110-18943412968308/AnsiballZ_file.py'
Nov 29 07:33:22 compute-0 sudo[227368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:22 compute-0 ceph-mon[75050]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:22 compute-0 python3.9[227370]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:22 compute-0 sudo[227368]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:23 compute-0 sshd-session[227054]: Connection closed by authenticating user root 143.14.121.41 port 35166 [preauth]
Nov 29 07:33:24 compute-0 sudo[227522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esizvaqejublysxzfcwnczgslfmymjyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401603.5465965-118-92755767663005/AnsiballZ_lineinfile.py'
Nov 29 07:33:24 compute-0 sudo[227522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:24 compute-0 ceph-mon[75050]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:24 compute-0 python3.9[227524]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:24 compute-0 sudo[227522]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:25 compute-0 sudo[227674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpvpzjwfitmyvqwbucleqgzlhwtradrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401604.594958-127-60223839011275/AnsiballZ_systemd_service.py'
Nov 29 07:33:25 compute-0 sudo[227674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:25 compute-0 sshd-session[227441]: Connection closed by authenticating user root 143.14.121.41 port 52908 [preauth]
Nov 29 07:33:25 compute-0 python3.9[227676]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:33:25 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 07:33:25 compute-0 sudo[227674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:28 compute-0 sshd-session[227679]: Invalid user postgres from 143.14.121.41 port 52924
Nov 29 07:33:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:28 compute-0 sshd-session[227679]: Connection closed by invalid user postgres 143.14.121.41 port 52924 [preauth]
Nov 29 07:33:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:31 compute-0 sshd-session[227779]: Invalid user jenkins from 143.14.121.41 port 52934
Nov 29 07:33:32 compute-0 sshd-session[227779]: Connection closed by invalid user jenkins 143.14.121.41 port 52934 [preauth]
Nov 29 07:33:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:32 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:33:33 compute-0 podman[227759]: 2025-11-29 07:33:33.198220181 +0000 UTC m=+5.554438240 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 07:33:33 compute-0 podman[227758]: 2025-11-29 07:33:33.254455104 +0000 UTC m=+5.617045897 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:33:33 compute-0 sudo[227885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvgtcafjlajrhpijkqqcefvhcjqlhbhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401605.9238985-135-227509947192902/AnsiballZ_systemd_service.py'
Nov 29 07:33:33 compute-0 sudo[227885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:33 compute-0 python3.9[227887]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:33:33 compute-0 systemd[1]: Reloading.
Nov 29 07:33:33 compute-0 sshd-session[227782]: Connection closed by authenticating user root 143.14.121.41 port 35470 [preauth]
Nov 29 07:33:33 compute-0 systemd-rc-local-generator[227917]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:33:33 compute-0 systemd-sysv-generator[227921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:33:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:33:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:33:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:36 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:33:36 compute-0 sshd-session[227925]: Connection closed by authenticating user root 143.14.121.41 port 35486 [preauth]
Nov 29 07:33:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:40 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:33:41 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf MDS connection to Monitors appears to be laggy; 16.2544s since last acked beacon
Nov 29 07:33:41 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:33:42 compute-0 sshd-session[227927]: Connection closed by authenticating user root 143.14.121.41 port 35490 [preauth]
Nov 29 07:33:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 18.3335 seconds
Nov 29 07:33:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:42 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf  MDS is no longer laggy
Nov 29 07:33:43 compute-0 sshd-session[227929]: Connection closed by authenticating user root 143.14.121.41 port 33356 [preauth]
Nov 29 07:33:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:45 compute-0 sshd-session[227931]: Connection closed by authenticating user root 143.14.121.41 port 33370 [preauth]
Nov 29 07:33:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:46 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 07:33:46 compute-0 ceph-mon[75050]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:46 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 07:33:46 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 07:33:46 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 07:33:46 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 07:33:46 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 07:33:46 compute-0 sudo[227885]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:47 compute-0 sudo[228096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icdhgnkkptnzhusbdtbblohbttayjccy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401627.1424115-146-135633353060845/AnsiballZ_service_facts.py'
Nov 29 07:33:47 compute-0 sudo[228096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:47 compute-0 python3.9[228098]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:33:47 compute-0 network[228115]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:33:47 compute-0 network[228116]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:33:47 compute-0 network[228117]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:33:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 ceph-mon[75050]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:48 compute-0 sshd-session[227933]: Invalid user ubuntu from 143.14.121.41 port 33376
Nov 29 07:33:49 compute-0 sshd-session[227933]: Connection closed by invalid user ubuntu 143.14.121.41 port 33376 [preauth]
Nov 29 07:33:49 compute-0 ceph-mon[75050]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:51 compute-0 sshd-session[228130]: Invalid user test2 from 143.14.121.41 port 33378
Nov 29 07:33:51 compute-0 ceph-mon[75050]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:51 compute-0 sshd-session[228130]: Connection closed by invalid user test2 143.14.121.41 port 33378 [preauth]
Nov 29 07:33:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:52 compute-0 sudo[228096]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:53 compute-0 sudo[228391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ounyjkahpkdrfbqmomvqzkwmvnhoqiit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401633.1084871-156-180908025234459/AnsiballZ_file.py'
Nov 29 07:33:53 compute-0 sudo[228391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:53 compute-0 sshd-session[228198]: Connection closed by authenticating user root 143.14.121.41 port 33384 [preauth]
Nov 29 07:33:53 compute-0 python3.9[228393]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:33:53 compute-0 sudo[228391]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:53 compute-0 ceph-mon[75050]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:54 compute-0 sudo[228545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftjhyfmcswxtckddrtpqytcrqnyxyqyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401633.8641238-164-77913803789483/AnsiballZ_modprobe.py'
Nov 29 07:33:54 compute-0 sudo[228545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:54 compute-0 python3.9[228547]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 07:33:54 compute-0 sudo[228545]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:55 compute-0 ceph-mon[75050]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:55 compute-0 sudo[228701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvctyreqvuakhyunkagpxhonwpncpdij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401634.8640947-172-248349106031172/AnsiballZ_stat.py'
Nov 29 07:33:55 compute-0 sudo[228701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:55 compute-0 python3.9[228703]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:33:55 compute-0 sudo[228701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:55 compute-0 sudo[228824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfmpzquwivjridjddxdifjyzfffjlrcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401634.8640947-172-248349106031172/AnsiballZ_copy.py'
Nov 29 07:33:55 compute-0 sudo[228824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:56 compute-0 sshd-session[228424]: Connection closed by authenticating user root 143.14.121.41 port 40488 [preauth]
Nov 29 07:33:56 compute-0 python3.9[228826]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401634.8640947-172-248349106031172/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:56 compute-0 sudo[228824]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:56 compute-0 sudo[228978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uggbjpxjkkjoeygwnxubfysosklkpxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401636.7175572-188-66583159540819/AnsiballZ_lineinfile.py'
Nov 29 07:33:56 compute-0 sudo[228978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:57 compute-0 python3.9[228980]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:33:57 compute-0 sudo[228978]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:33:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:58 compute-0 sudo[229130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyiilpuirotybheoqwbsudeklcrkqwow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401637.4591084-196-174828166106808/AnsiballZ_systemd.py'
Nov 29 07:33:58 compute-0 ceph-mon[75050]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:58 compute-0 sudo[229130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:58 compute-0 python3.9[229132]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:33:58 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 07:33:58 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 07:33:58 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 07:33:58 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:33:58 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:33:58 compute-0 sshd-session[228851]: Connection closed by authenticating user root 143.14.121.41 port 40500 [preauth]
Nov 29 07:33:58 compute-0 sudo[229130]: pam_unix(sudo:session): session closed for user root
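[annotation] The tasks above stage dm-multipath for loading: /etc/modules-load.d/dm-multipath.conf is written at 07:33:56 (rendered from module-load.conf.j2), the same module name is appended to /etc/modules at 07:33:57, and systemd-modules-load is restarted so the module is loaded immediately as well as on every boot. A minimal verification sketch, not part of the playbook; it only checks /proc/modules, where the kernel lists the module with '-' normalized to '_':

    from pathlib import Path

    def module_loaded(name: str) -> bool:
        # /proc/modules is space-separated; field 0 is the module name.
        wanted = name.replace("-", "_")
        for line in Path("/proc/modules").read_text().splitlines():
            if line.split(" ", 1)[0] == wanted:
                return True
        return False

    print(module_loaded("dm-multipath"))  # expected: True after the restart above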
Nov 29 07:33:59 compute-0 sudo[229287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukqzaetvqbbsmwgojzkilqqbanbpzbth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401638.8482945-204-187343186161525/AnsiballZ_file.py'
Nov 29 07:33:59 compute-0 sudo[229287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:33:59 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 07:33:59 compute-0 python3.9[229289]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:33:59 compute-0 sudo[229287]: pam_unix(sudo:session): session closed for user root
Nov 29 07:33:59 compute-0 ceph-mon[75050]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:33:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:33:59.748 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:33:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:33:59.750 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:33:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:33:59.750 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:34:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:00 compute-0 sudo[229441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trumkerjftnvqnvkknzhkzfvigrvzvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401639.6706555-213-166040771951497/AnsiballZ_stat.py'
Nov 29 07:34:00 compute-0 sudo[229441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:00 compute-0 python3.9[229443]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:34:00 compute-0 sudo[229441]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:00 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 07:34:00 compute-0 sudo[229594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skcibfudextqahvafagifjqssnsqevtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401640.6499653-222-29022792238167/AnsiballZ_stat.py'
Nov 29 07:34:00 compute-0 sudo[229594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:01 compute-0 python3.9[229596]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:34:01 compute-0 sudo[229594]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:01 compute-0 sshd-session[229184]: Connection closed by authenticating user root 143.14.121.41 port 40508 [preauth]
Nov 29 07:34:01 compute-0 ceph-mon[75050]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:01 compute-0 sudo[229746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyvyfqvfdtdjrtflecelkrlwetnjxwig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401641.4729125-230-35805770366565/AnsiballZ_stat.py'
Nov 29 07:34:01 compute-0 sudo[229746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:01 compute-0 python3.9[229749]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:01 compute-0 sudo[229746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:02 compute-0 sudo[229870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loexvpyblimamizgldvfswitaiizbpsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401641.4729125-230-35805770366565/AnsiballZ_copy.py'
Nov 29 07:34:02 compute-0 sudo[229870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:02 compute-0 python3.9[229872]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401641.4729125-230-35805770366565/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:02 compute-0 sudo[229870]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:03 compute-0 sudo[230022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alunhtqzjvrztletejzbgqcmludlehud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401642.8497117-245-142802210418948/AnsiballZ_command.py'
Nov 29 07:34:03 compute-0 sudo[230022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:03 compute-0 python3.9[230024]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:34:03 compute-0 sudo[230022]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:03 compute-0 podman[230116]: 2025-11-29 07:34:03.718950991 +0000 UTC m=+0.082311559 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:34:03 compute-0 podman[230107]: 2025-11-29 07:34:03.784836949 +0000 UTC m=+0.147414346 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 07:34:03 compute-0 sudo[230217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjocfvwzuaihqlgdpulqqdqsmkdgtqqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401643.515828-253-217809952332977/AnsiballZ_lineinfile.py'
Nov 29 07:34:03 compute-0 sudo[230217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:04 compute-0 python3.9[230219]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:04 compute-0 sudo[230217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:04 compute-0 sudo[230369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enatzvqupmhkasbeevbrzkbenkdxgeda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401644.2171526-261-126074203963621/AnsiballZ_replace.py'
Nov 29 07:34:04 compute-0 sudo[230369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:04 compute-0 ceph-mon[75050]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:04 compute-0 python3.9[230371]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:04 compute-0 sudo[230369]: pam_unix(sudo:session): session closed for user root
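[annotation] The three steps at 07:34:03-07:34:04 guarantee an empty blacklist block in /etc/multipath.conf: grep -q tests whether '^blacklist\s*{' already exists, lineinfile appends a bare 'blacklist {' opener, and replace rewrites that opener into a two-line block. An illustrative reproduction of the logged replace call (the input string here is hypothetical; the regexp and replacement are copied from the log line above):

    import re

    conf = "defaults {\n}\nblacklist {"  # hypothetical file tail after lineinfile
    fixed = re.sub(r"^(blacklist {)", r"\1\n}", conf, flags=re.MULTILINE)
    print(fixed)  # ...ends with "blacklist {\n}", i.e. an empty block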
Nov 29 07:34:05 compute-0 sudo[230521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhefnldtgpdoccczplytogiqvquyqopn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401645.1209846-269-60589573094306/AnsiballZ_replace.py'
Nov 29 07:34:05 compute-0 sudo[230521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:34:05
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'vms']
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:05 compute-0 sshd-session[229748]: Connection closed by authenticating user root 143.14.121.41 port 33070 [preauth]
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:34:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:34:07 compute-0 sudo[230526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:07 compute-0 sudo[230526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:07 compute-0 sudo[230526]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:07 compute-0 sudo[230551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:34:07 compute-0 sudo[230551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:07 compute-0 sudo[230551]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:07 compute-0 sudo[230576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:07 compute-0 sudo[230576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:07 compute-0 sudo[230576]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:07 compute-0 sudo[230601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:34:07 compute-0 sudo[230601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:07 compute-0 sudo[230601]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:08 compute-0 sshd-session[230524]: Invalid user zjw from 143.14.121.41 port 33080
Nov 29 07:34:08 compute-0 sshd-session[230524]: Connection closed by invalid user zjw 143.14.121.41 port 33080 [preauth]
Nov 29 07:34:08 compute-0 python3.9[230523]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:08 compute-0 sudo[230521]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:34:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:09 compute-0 sudo[230797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvrkqkrhreortapxvbbbqiornqdbats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401649.017307-278-25615317361906/AnsiballZ_lineinfile.py'
Nov 29 07:34:09 compute-0 sudo[230797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:09 compute-0 python3.9[230799]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:09 compute-0 sudo[230797]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:09 compute-0 ceph-mon[75050]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:09 compute-0 ceph-mon[75050]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:10 compute-0 sudo[230949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivgufykdcqhsdyqbsxxuqmlurdswufew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401649.830765-278-211664623963765/AnsiballZ_lineinfile.py'
Nov 29 07:34:10 compute-0 sudo[230949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:10 compute-0 python3.9[230951]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:10 compute-0 sudo[230949]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:10 compute-0 sshd-session[230670]: Connection closed by authenticating user root 143.14.121.41 port 33082 [preauth]
Nov 29 07:34:10 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:34:10 compute-0 sudo[231102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wctvjawgohjniksehbmcexzoctmbrbqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401650.5754716-278-101155365937075/AnsiballZ_lineinfile.py'
Nov 29 07:34:10 compute-0 sudo[231102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:34:11 compute-0 python3.9[231104]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:11 compute-0 sudo[231102]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:11 compute-0 sudo[231255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhzbxhznyjukjbdesfkqrsevucioazxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401651.340496-278-43936881751506/AnsiballZ_lineinfile.py'
Nov 29 07:34:11 compute-0 sudo[231255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:11 compute-0 python3.9[231257]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:11 compute-0 sudo[231255]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:11 compute-0 ceph-mon[75050]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:11 compute-0 ceph-mon[75050]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:11 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:34:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:12 compute-0 sudo[231407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhgnolroknsbasndpbkhvlbaqtpwtffz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401652.0151713-307-19692340888899/AnsiballZ_stat.py'
Nov 29 07:34:12 compute-0 sudo[231407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:12 compute-0 python3.9[231409]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:34:12 compute-0 sshd-session[231075]: Connection closed by authenticating user root 143.14.121.41 port 33088 [preauth]
Nov 29 07:34:12 compute-0 sudo[231407]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:13 compute-0 sudo[231562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdobraetlgmqbbyulrxmwzfitmssifox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401653.1024747-315-168013926555109/AnsiballZ_file.py'
Nov 29 07:34:13 compute-0 sudo[231562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:13 compute-0 python3.9[231565]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:13 compute-0 sudo[231562]: pam_unix(sudo:session): session closed for user root
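[annotation] Net effect of the multipath.conf edits above. Each lineinfile at 07:34:09-07:34:11 inserts directly after the first line matching ^defaults (firstmatch=True), so the last-run edit lands first; the replace at 07:34:08 removes a catch-all devnode entry from the blacklist block if the template shipped one. The template copied at 07:34:02 (checksum bf02ab...) is not shown in this log, so surrounding content is assumed; the touched region presumably ends up as:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
            ...                     (remaining template defaults, not logged)
    }

    blacklist {
    }

The .multipath_restart_required marker touched at 07:34:13 flags that multipathd must be restarted to pick these settings up.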
Nov 29 07:34:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:34:13 compute-0 sudo[231609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:13 compute-0 sudo[231609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:13 compute-0 sudo[231609]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:14 compute-0 sudo[231661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:34:14 compute-0 sudo[231661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:14 compute-0 sudo[231661]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:14 compute-0 sudo[231709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:14 compute-0 sudo[231709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:14 compute-0 sudo[231709]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:14 compute-0 ceph-mon[75050]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:34:14 compute-0 sudo[231763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:34:14 compute-0 sudo[231763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:14 compute-0 sudo[231815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqpyyrvinreezeyfzzenlmetpoogpdgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401653.9499426-324-198095083492430/AnsiballZ_file.py'
Nov 29 07:34:14 compute-0 sudo[231815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:14 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 07:34:14 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 07:34:14 compute-0 python3.9[231817]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:14 compute-0 sudo[231815]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:14 compute-0 sudo[231763]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:34:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
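[annotation] The pg_autoscaler lines above all follow one formula: pg_target = capacity_ratio * bias * (num_osds * mon_target_pg_per_osd). Assuming 3 OSDs and the default mon_target_pg_per_osd = 100 (both are assumptions, neither value is printed in this log), the logged targets reproduce exactly:

    # Reconstruction of the pg_autoscaler arithmetic logged above.
    # Assumed, not logged: 3 OSDs, default mon_target_pg_per_osd = 100.
    import math

    pg_budget = 3 * 100  # num_osds * mon_target_pg_per_osd

    for pool, ratio, bias, logged_target in [
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]:
        target = ratio * bias * pg_budget
        assert math.isclose(target, logged_target, rel_tol=1e-12), pool
    # The raw target is then quantized (power-of-two rounding plus minimum
    # PG-count constraints), giving the "quantized to N" figures above.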
Nov 29 07:34:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:34:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:34:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:34:14 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:34:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:34:15 compute-0 sudo[232002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwaccqbacbgfdfobnygccsqxxdmqkxjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401654.7084496-332-245726391260115/AnsiballZ_stat.py'
Nov 29 07:34:15 compute-0 sudo[232002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:15 compute-0 sshd-session[231561]: Connection closed by authenticating user root 143.14.121.41 port 51392 [preauth]
Nov 29 07:34:15 compute-0 python3.9[232004]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:15 compute-0 sudo[232002]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:15 compute-0 sudo[232081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rytlsxuxijscxnykzuzvglgzpbrmxbis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401654.7084496-332-245726391260115/AnsiballZ_file.py'
Nov 29 07:34:15 compute-0 sudo[232081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:15 compute-0 python3.9[232083]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:15 compute-0 sudo[232081]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:16 compute-0 sudo[232233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wagsusphrbrfkpvzjsmbobfsaqoutjeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401655.954816-332-117289136327773/AnsiballZ_stat.py'
Nov 29 07:34:16 compute-0 sudo[232233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:16 compute-0 python3.9[232235]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:16 compute-0 sudo[232233]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:16 compute-0 sudo[232312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmfybrdvrpzutvpffjhtepzszkgfzxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401655.954816-332-117289136327773/AnsiballZ_file.py'
Nov 29 07:34:16 compute-0 sudo[232312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:17 compute-0 python3.9[232314]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:17 compute-0 sudo[232312]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:17 compute-0 sudo[232464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asipyvpxpubmgaehvivctamfyvujpllu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401657.2737737-355-165989019295240/AnsiballZ_file.py'
Nov 29 07:34:17 compute-0 sudo[232464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:17 compute-0 python3.9[232466]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:17 compute-0 sudo[232464]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:18 compute-0 ceph-mon[75050]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:34:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:34:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:18 compute-0 sudo[232616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyqjbgdgkulnlnrjdhobgcajblzcdoqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401658.0521643-363-92891864118542/AnsiballZ_stat.py'
Nov 29 07:34:18 compute-0 sudo[232616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:18 compute-0 python3.9[232618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:18 compute-0 sudo[232616]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:18 compute-0 sudo[232694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jswsiecccfpkyojsvbiecbrjowwsoigs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401658.0521643-363-92891864118542/AnsiballZ_file.py'
Nov 29 07:34:18 compute-0 sudo[232694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:19 compute-0 python3.9[232696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:19 compute-0 sudo[232694]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:34:19 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0d916c80-4611-4b57-985e-4bf6e03c6640 does not exist
Nov 29 07:34:19 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9905d3c7-bab2-4008-9190-7e6150a16dd3 does not exist
Nov 29 07:34:19 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 185b3493-38e1-4546-b0ca-d92c50bd86fc does not exist
Nov 29 07:34:19 compute-0 sshd-session[232030]: Invalid user admin from 143.14.121.41 port 51402
Nov 29 07:34:19 compute-0 sudo[232846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afiihotaompdkpyvmqkzwzjhukuxszlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401659.4013476-375-122684865389050/AnsiballZ_stat.py'
Nov 29 07:34:19 compute-0 sudo[232846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:34:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:34:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:34:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:34:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:34:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
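[annotation] The 'auth get' / 'config generate-minimal-conf' pair above is cephadm refreshing the client keyrings and the minimal ceph.conf it distributes to managed hosts. Such a minimal conf carries only the fsid and monitor addresses; a sketch of its shape (fsid copied from the cephadm path in this log, the mon_host value is assumed, not logged):

    [global]
            fsid = 14ff1f30-5059-58f1-9a23-69871bb275a1
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]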
Nov 29 07:34:19 compute-0 python3.9[232848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:19 compute-0 sudo[232846]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:19 compute-0 sudo[232849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:19 compute-0 sudo[232849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:19 compute-0 sudo[232849]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:20 compute-0 sudo[232876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:34:20 compute-0 sudo[232876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:20 compute-0 sudo[232876]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:20 compute-0 sshd-session[232030]: Connection closed by invalid user admin 143.14.121.41 port 51402 [preauth]
Nov 29 07:34:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:20 compute-0 sudo[232924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:34:20 compute-0 sudo[232924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:20 compute-0 sudo[232924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:20 compute-0 sudo[233020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvroaelbjxqymgghqushfpuyqdqxnpkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401659.4013476-375-122684865389050/AnsiballZ_file.py'
Nov 29 07:34:20 compute-0 sudo[233020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:20 compute-0 sudo[232982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:34:20 compute-0 sudo[232982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:34:20 compute-0 python3.9[233024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:20 compute-0 sudo[233020]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:20 compute-0 podman[233093]: 2025-11-29 07:34:20.658886896 +0000 UTC m=+0.035782562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:34:21 compute-0 sudo[233233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbugsvgpvbcfletyjeowymyvwcztwvmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401660.7243145-387-83012043876843/AnsiballZ_systemd.py'
Nov 29 07:34:21 compute-0 sudo[233233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:21 compute-0 python3.9[233235]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:34:21 compute-0 ceph-mon[75050]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
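[annotation] The cephadm call at 07:34:20 is the OSD-creation step: inside the ceph container it runs ceph-volume in batch mode over three pre-built logical volumes, creating one OSD per LV. '--no-systemd' is passed because cephadm supervises OSDs as podman containers rather than host systemd units, and the keyring/config travel on stdin via '--config-json -'. A sketch of the inner command, with the arguments copied from the log line above:

    # Inner ceph-volume invocation, as reconstructed from the logged
    # cephadm wrapper command (one OSD is created per listed LV).
    cmd = [
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--yes", "--no-systemd",  # cephadm manages the daemons itself
    ]
    print(" ".join(cmd))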
Nov 29 07:34:21 compute-0 ceph-mon[75050]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:21 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:34:21 compute-0 systemd[1]: Reloading.
Nov 29 07:34:21 compute-0 podman[233093]: 2025-11-29 07:34:21.626848588 +0000 UTC m=+1.003744284 container create c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_vaughan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:34:21 compute-0 systemd-rc-local-generator[233261]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:34:21 compute-0 systemd-sysv-generator[233266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:34:22 compute-0 sshd-session[233027]: Invalid user user from 143.14.121.41 port 51410
Nov 29 07:34:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:22 compute-0 sudo[233233]: pam_unix(sudo:session): session closed for user root
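[annotation] The 91-edpm-container-shutdown.preset staged at 07:34:20, combined with the systemd task at 07:34:21 (daemon_reload=True, enabled=True, state=started), arranges for the container-shutdown helper to be enabled on fresh installs and started now. The preset file's content is not logged; per systemd.preset(5) it is presumably the single line:

    enable edpm-container-shutdown.service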
Nov 29 07:34:22 compute-0 sshd-session[233027]: Connection closed by invalid user user 143.14.121.41 port 51410 [preauth]
Nov 29 07:34:22 compute-0 sudo[233422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylnjezhejuzaofklccawenvwbzznuanj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401662.4439106-395-170714553895302/AnsiballZ_stat.py'
Nov 29 07:34:22 compute-0 sudo[233422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:22 compute-0 systemd[1]: Started libpod-conmon-c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93.scope.
Nov 29 07:34:23 compute-0 python3.9[233424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:34:23 compute-0 sudo[233422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:23 compute-0 sudo[233505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rikcbpntrspegviugvapqhiczikbyzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401662.4439106-395-170714553895302/AnsiballZ_file.py'
Nov 29 07:34:23 compute-0 sudo[233505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:23 compute-0 python3.9[233507]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:23 compute-0 sudo[233505]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:24 compute-0 sudo[233658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-silfhiwksdpehteznlakotsivmqifzxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401663.8353467-407-5015725890585/AnsiballZ_stat.py'
Nov 29 07:34:24 compute-0 sudo[233658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:25 compute-0 python3.9[233660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:34:25 compute-0 sudo[233658]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:25 compute-0 sudo[233737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhcdvgpqgjfkicgkbjeghlktscwdxzqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401663.8353467-407-5015725890585/AnsiballZ_file.py'
Nov 29 07:34:25 compute-0 sudo[233737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:25 compute-0 podman[233093]: 2025-11-29 07:34:25.523242929 +0000 UTC m=+4.900138575 container init c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:34:25 compute-0 podman[233093]: 2025-11-29 07:34:25.533104409 +0000 UTC m=+4.910000035 container start c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:34:25 compute-0 systemd[1]: libpod-c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93.scope: Deactivated successfully.
Nov 29 07:34:25 compute-0 nice_vaughan[233427]: 167 167
Nov 29 07:34:25 compute-0 conmon[233427]: conmon c7ef8db399e729d1cd44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93.scope/container/memory.events
Nov 29 07:34:25 compute-0 python3.9[233739]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:34:25 compute-0 sudo[233737]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:26 compute-0 sudo[233903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwlnoqfiqhbxnerruskvuaquinnhjcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401665.8070154-419-74346583549164/AnsiballZ_systemd.py'
Nov 29 07:34:26 compute-0 sudo[233903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:26 compute-0 sshd-session[233532]: Connection closed by authenticating user root 143.14.121.41 port 60036 [preauth]
Nov 29 07:34:26 compute-0 python3.9[233905]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:34:26 compute-0 systemd[1]: Reloading.
Nov 29 07:34:26 compute-0 systemd-rc-local-generator[233934]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:34:26 compute-0 systemd-sysv-generator[233939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:34:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:34:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:34:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:34:27 compute-0 ceph-mon[75050]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:27 compute-0 ceph-mon[75050]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:28 compute-0 podman[233093]: 2025-11-29 07:34:28.420363159 +0000 UTC m=+7.797258825 container attach c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_vaughan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:34:28 compute-0 podman[233093]: 2025-11-29 07:34:28.422158848 +0000 UTC m=+7.799054514 container died c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:34:28 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 07:34:28 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 07:34:28 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 07:34:28 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 07:34:28 compute-0 sudo[233903]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:29 compute-0 sudo[234098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nltcoufprcdfvwgvpujgzbahvkdbzuoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401669.0719528-429-148403887578388/AnsiballZ_file.py'
Nov 29 07:34:29 compute-0 sudo[234098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:29 compute-0 python3.9[234100]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:34:29 compute-0 sudo[234098]: pam_unix(sudo:session): session closed for user root
Nov 29 07:34:29 compute-0 sshd-session[233906]: Connection closed by authenticating user root 143.14.121.41 port 60048 [preauth]
Nov 29 07:34:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:34:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:30 compute-0 sudo[234252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgujqbnjdjuuulawydtgyhoyquoddqjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401670.0029993-437-86054446673264/AnsiballZ_stat.py'
Nov 29 07:34:30 compute-0 sudo[234252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:34:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:32 compute-0 sshd-session[234166]: Invalid user admin from 143.14.121.41 port 60060
Nov 29 07:34:32 compute-0 sshd-session[234166]: Connection closed by invalid user admin 143.14.121.41 port 60060 [preauth]
Nov 29 07:34:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:34:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:34:35 compute-0 sshd-session[234256]: Connection closed by authenticating user root 143.14.121.41 port 37374 [preauth]
Nov 29 07:34:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:36 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:34:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:38 compute-0 sshd-session[234278]: Connection closed by authenticating user root 143.14.121.41 port 37380 [preauth]
Nov 29 07:34:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:40 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:34:41 compute-0 sshd-session[234280]: Connection closed by authenticating user root 143.14.121.41 port 37384 [preauth]
Nov 29 07:34:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:43 compute-0 sshd-session[234282]: Invalid user testuser from 143.14.121.41 port 37386
Nov 29 07:34:43 compute-0 sshd-session[234282]: Connection closed by invalid user testuser 143.14.121.41 port 37386 [preauth]
Nov 29 07:34:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:44 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:34:45 compute-0 sshd-session[234284]: Connection closed by authenticating user root 143.14.121.41 port 43896 [preauth]
Nov 29 07:34:46 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf MDS connection to Monitors appears to be laggy; 17.2535s since last acked beacon
Nov 29 07:34:46 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:34:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:47 compute-0 sshd-session[234286]: Connection closed by authenticating user root 143.14.121.41 port 43904 [preauth]
Nov 29 07:34:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:48 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:34:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:51 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:34:51 compute-0 sshd-session[234288]: Invalid user backup from 143.14.121.41 port 43910
Nov 29 07:34:51 compute-0 sshd-session[234288]: Connection closed by invalid user backup 143.14.121.41 port 43910 [preauth]
Nov 29 07:34:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:52 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:34:54 compute-0 sshd-session[234290]: Invalid user admin from 143.14.121.41 port 45380
Nov 29 07:34:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:54 compute-0 sshd-session[234290]: Connection closed by invalid user admin 143.14.121.41 port 45380 [preauth]
Nov 29 07:34:56 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:34:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:56 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:34:56 compute-0 sshd-session[234292]: Connection closed by authenticating user root 143.14.121.41 port 45388 [preauth]
Nov 29 07:34:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:34:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:34:59.749 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:34:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:34:59.750 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:34:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:34:59.750 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:34:59 compute-0 sshd-session[234294]: Connection closed by authenticating user root 143.14.121.41 port 45394 [preauth]
Nov 29 07:35:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:00 compute-0 ceph-osd[91083]: osd.2 108 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:00 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:00 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:35:00.478+0000 7fe156b5b640 -1 osd.2 108 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:00 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:35:01 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:01 compute-0 ceph-osd[91083]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:01 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:01 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:35:01.482+0000 7fe156b5b640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:02 compute-0 ceph-osd[91083]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:02 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:02 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:35:02.513+0000 7fe156b5b640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:03 compute-0 sshd-session[234296]: Connection closed by authenticating user root 143.14.121.41 port 45406 [preauth]
Nov 29 07:35:03 compute-0 ceph-osd[91083]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:03 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:03 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:35:03.507+0000 7fe156b5b640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:04 compute-0 ceph-osd[91083]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:04 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:04 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:35:04.490+0000 7fe156b5b640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:04 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:35:05
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'vms']
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:35:05 compute-0 ceph-osd[91083]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:05 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:05 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:35:05.533+0000 7fe156b5b640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:05 compute-0 sshd-session[234298]: Connection closed by authenticating user root 143.14.121.41 port 36436 [preauth]
Nov 29 07:35:06 compute-0 ceph-mds[102316]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:06 compute-0 python3.9[234254]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:35:06 compute-0 sudo[234252]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:06 compute-0 ceph-osd[91083]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:06 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-osd-2[91076]: 2025-11-29T07:35:06.555+0000 7fe156b5b640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14261.0:870 9.18 9:1a734c59:::obj_delete_at_hint.0000000000:head [call lock.lock in=50b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e108)
Nov 29 07:35:06 compute-0 ceph-osd[91083]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:06 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 36.841472626s
Nov 29 07:35:06 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 36.841472626s
Nov 29 07:35:06 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 36.841690063s, txc = 0x560f41124900
Nov 29 07:35:06 compute-0 sudo[234422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwsbnbfaqbnbaoxbfwecrnmzgozclbif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401670.0029993-437-86054446673264/AnsiballZ_copy.py'
Nov 29 07:35:06 compute-0 sudo[234422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:06 compute-0 python3.9[234424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401670.0029993-437-86054446673264/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:35:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:35:06 compute-0 sudo[234422]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:07 compute-0 sudo[234574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypzqczeisbqinnlbgiuvlsoudcglfkoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401707.1903389-454-263740607961980/AnsiballZ_file.py'
Nov 29 07:35:07 compute-0 sudo[234574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:07 compute-0 python3.9[234576]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:35:07 compute-0 sudo[234574]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 37.8024 seconds
Nov 29 07:35:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 get_health_metrics reporting 3 slow ops, oldest is log(1 entries from seq 729 at 2025-11-29T07:34:24.142389+0000)
Nov 29 07:35:07 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0[75046]: 2025-11-29T07:35:07.789+0000 7f27ac53a640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 3 slow ops, oldest is log(1 entries from seq 729 at 2025-11-29T07:34:24.142389+0000)
Nov 29 07:35:07 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf  MDS is no longer laggy
Nov 29 07:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c86f10e9814190d6b5aeff1de377f1541d931827e429ac565c85b516985fb1e6-merged.mount: Deactivated successfully.
Nov 29 07:35:08 compute-0 ceph-mon[75050]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:08 compute-0 ceph-mon[75050]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:08 compute-0 ceph-mon[75050]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:08 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 37.745864868s, txc = 0x560f4187e000
Nov 29 07:35:08 compute-0 sudo[234726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etajlxwuamazhyvhylmlfexevkgpxrhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401707.9709373-462-28913364580083/AnsiballZ_stat.py'
Nov 29 07:35:08 compute-0 sudo[234726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:08 compute-0 python3.9[234728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:35:08 compute-0 sudo[234726]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:09 compute-0 sudo[234849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xelnnbxdtipehgnbvjjzttydkmfxekhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401707.9709373-462-28913364580083/AnsiballZ_copy.py'
Nov 29 07:35:09 compute-0 sudo[234849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:09 compute-0 python3.9[234851]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401707.9709373-462-28913364580083/.source.json _original_basename=.8l9nvk2x follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:35:09 compute-0 sudo[234849]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:09 compute-0 sshd-session[234300]: Invalid user linuxadmin from 143.14.121.41 port 36450
Nov 29 07:35:09 compute-0 sudo[235001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzijuzbygtbwnvzfaygmrcktsxchvwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401709.5120125-477-223514755373422/AnsiballZ_file.py'
Nov 29 07:35:09 compute-0 sudo[235001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:09 compute-0 sshd-session[234300]: Connection closed by invalid user linuxadmin 143.14.121.41 port 36450 [preauth]
Nov 29 07:35:10 compute-0 python3.9[235003]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:35:10 compute-0 sudo[235001]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:10 compute-0 sudo[235155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqifugvaiaxjfrwlbcucbjyfyjikjlgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401710.3740256-485-87466855848891/AnsiballZ_stat.py'
Nov 29 07:35:10 compute-0 sudo[235155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:10 compute-0 sudo[235155]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:11 compute-0 sudo[235279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xizptjihxzscdmvgckvndwbnefcpabgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401710.3740256-485-87466855848891/AnsiballZ_copy.py'
Nov 29 07:35:11 compute-0 sudo[235279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:11 compute-0 sudo[235279]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check failed: 2 slow ops, oldest one blocked for 35 sec, osd.2 has slow ops (SLOW_OPS)
Nov 29 07:35:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'default.rgw.log' : 1 ])
Nov 29 07:35:12 compute-0 ceph-mon[75050]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:12 compute-0 ceph-mon[75050]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:12 compute-0 ceph-mon[75050]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 ceph-mon[75050]: 2 slow requests (by type [ 'started' : 2 ] most affected pool [ 'default.rgw.log' : 2 ])
Nov 29 07:35:12 compute-0 ceph-mon[75050]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:12 compute-0 podman[233093]: 2025-11-29 07:35:12.468234196 +0000 UTC m=+51.845129862 container remove c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:35:12 compute-0 systemd[1]: libpod-conmon-c7ef8db399e729d1cd4409996cddcbdf9efd4f3d98b973d91d566427cc16ac93.scope: Deactivated successfully.
Nov 29 07:35:12 compute-0 podman[234259]: 2025-11-29 07:35:12.522728081 +0000 UTC m=+37.881056387 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:35:12 compute-0 sudo[235445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdlxcmjeqansrcovxqdcfhyaekfqoju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401712.0128167-502-169369125475770/AnsiballZ_container_config_data.py'
Nov 29 07:35:12 compute-0 sudo[235445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:12 compute-0 podman[234258]: 2025-11-29 07:35:12.573006743 +0000 UTC m=+37.934345171 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:35:12 compute-0 podman[235465]: 2025-11-29 07:35:12.626910923 +0000 UTC m=+0.024609137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:35:12 compute-0 python3.9[235453]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 07:35:12 compute-0 sudo[235445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:13 compute-0 sudo[235629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbdkiobxyxarnewbrsopmbafybwyalcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401712.9870071-511-57584630224533/AnsiballZ_container_config_hash.py'
Nov 29 07:35:13 compute-0 sudo[235629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:13 compute-0 python3.9[235631]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:35:13 compute-0 sudo[235629]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:13 compute-0 sshd-session[235027]: Invalid user ftpuser from 143.14.121.41 port 36464
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:14 compute-0 sshd-session[235027]: Connection closed by invalid user ftpuser 143.14.121.41 port 36464 [preauth]
Nov 29 07:35:14 compute-0 sudo[235782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ersfcmfotfloiyrvafiyjghaicudfsfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401714.1078494-520-34637543390884/AnsiballZ_podman_container_info.py'
Nov 29 07:35:14 compute-0 sudo[235782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:35:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:35:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:16 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:35:16 compute-0 podman[235465]: 2025-11-29 07:35:16.976402108 +0000 UTC m=+4.374100332 container create 1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:35:17 compute-0 sshd-session[235785]: Invalid user useradmin from 143.14.121.41 port 35410
Nov 29 07:35:17 compute-0 python3.9[235784]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 07:35:17 compute-0 sshd-session[235785]: Connection closed by invalid user useradmin 143.14.121.41 port 35410 [preauth]
Nov 29 07:35:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:18 compute-0 systemd[1]: Started libpod-conmon-1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716.scope.
Nov 29 07:35:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c7ebd03bc88e1921ace364ad63942229d3a559cd46138ecd7f68d51ad60028/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c7ebd03bc88e1921ace364ad63942229d3a559cd46138ecd7f68d51ad60028/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c7ebd03bc88e1921ace364ad63942229d3a559cd46138ecd7f68d51ad60028/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c7ebd03bc88e1921ace364ad63942229d3a559cd46138ecd7f68d51ad60028/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c7ebd03bc88e1921ace364ad63942229d3a559cd46138ecd7f68d51ad60028/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:19 compute-0 ceph-mon[75050]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:35:19 compute-0 ceph-mon[75050]: Health check failed: 2 slow ops, oldest one blocked for 35 sec, osd.2 has slow ops (SLOW_OPS)
Nov 29 07:35:19 compute-0 ceph-mon[75050]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:19 compute-0 ceph-mon[75050]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:35:19 compute-0 ceph-mon[75050]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:19 compute-0 podman[235465]: 2025-11-29 07:35:19.28014818 +0000 UTC m=+6.677846434 container init 1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_snyder, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:35:19 compute-0 podman[235465]: 2025-11-29 07:35:19.292753042 +0000 UTC m=+6.690451236 container start 1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:35:19 compute-0 podman[235465]: 2025-11-29 07:35:19.527297084 +0000 UTC m=+6.924995338 container attach 1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_snyder, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:35:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 2 slow ops, oldest one blocked for 35 sec, osd.2 has slow ops)
Nov 29 07:35:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Cluster is now healthy
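
The SLOW_OPS cycle above (raised against osd.2, then cleared within the same second) can also be checked out-of-band instead of by grepping the journal. A hedged sketch follows: the "ceph health detail --format json" invocation is standard Ceph CLI, but the "checks"/"summary" key layout is an assumption based on recent Ceph releases, not something taken from this log.

    # Hedged sketch: poll for the SLOW_OPS health check seen in the log above.
    # "ceph health detail --format json" is a stock Ceph CLI call; the
    # "checks"/"summary" JSON layout below matches recent Ceph releases but is
    # an assumption, not confirmed by this log.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(raw)

    slow_ops = health.get("checks", {}).get("SLOW_OPS")
    if slow_ops:
        print("SLOW_OPS active:", slow_ops["summary"]["message"])
    else:
        print("cluster health:", health.get("status", "unknown"))

On a healthy cluster, as in the "Cluster is now healthy" line above, this would print the overall status (e.g. HEALTH_OK) rather than a SLOW_OPS message.
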
Nov 29 07:35:19 compute-0 sudo[235782]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-0 brave_snyder[235802]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:35:20 compute-0 brave_snyder[235802]: --> relative data size: 1.0
Nov 29 07:35:20 compute-0 brave_snyder[235802]: --> All data devices are unavailable
Nov 29 07:35:20 compute-0 systemd[1]: libpod-1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716.scope: Deactivated successfully.
Nov 29 07:35:20 compute-0 podman[235871]: 2025-11-29 07:35:20.394504275 +0000 UTC m=+0.034876316 container died 1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:35:20 compute-0 ceph-mon[75050]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:20 compute-0 sudo[236007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbtmyngwbvleaethusrlydfnfjaaoywl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401720.4034593-533-134330809568958/AnsiballZ_edpm_container_manage.py'
Nov 29 07:35:20 compute-0 sudo[236007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:21 compute-0 python3[236009]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:35:21 compute-0 sshd-session[235798]: Connection closed by authenticating user root 143.14.121.41 port 35426 [preauth]
Nov 29 07:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-73c7ebd03bc88e1921ace364ad63942229d3a559cd46138ecd7f68d51ad60028-merged.mount: Deactivated successfully.
Nov 29 07:35:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:22 compute-0 ceph-mon[75050]: Health check cleared: SLOW_OPS (was: 2 slow ops, oldest one blocked for 35 sec, osd.2 has slow ops)
Nov 29 07:35:22 compute-0 ceph-mon[75050]: Cluster is now healthy
Nov 29 07:35:22 compute-0 ceph-mon[75050]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:35:23 compute-0 sshd-session[236024]: Connection closed by authenticating user root 143.14.121.41 port 35434 [preauth]
Nov 29 07:35:23 compute-0 podman[235871]: 2025-11-29 07:35:23.787800758 +0000 UTC m=+3.428172719 container remove 1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 29 07:35:23 compute-0 systemd[1]: libpod-conmon-1819b669d0101f5ddacdcefb86516f712513475fb105460e5cec53a188b1a716.scope: Deactivated successfully.
Nov 29 07:35:23 compute-0 sudo[232982]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:23 compute-0 sudo[236040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:23 compute-0 sudo[236040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:23 compute-0 sudo[236040]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:24 compute-0 sudo[236065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:35:24 compute-0 sudo[236065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:24 compute-0 sudo[236065]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:24 compute-0 sudo[236090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:24 compute-0 sudo[236090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:24 compute-0 sudo[236090]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:24 compute-0 sudo[236115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:35:24 compute-0 sudo[236115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:24 compute-0 podman[236182]: 2025-11-29 07:35:24.484339866 +0000 UTC m=+0.026790187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:35:25 compute-0 podman[236182]: 2025-11-29 07:35:25.74011356 +0000 UTC m=+1.282563821 container create b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 07:35:25 compute-0 ceph-mon[75050]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:25 compute-0 systemd[1]: Started libpod-conmon-b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483.scope.
Nov 29 07:35:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:26 compute-0 sshd-session[236037]: Invalid user ftptest from 143.14.121.41 port 60858
Nov 29 07:35:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:26 compute-0 sshd-session[236037]: Connection closed by invalid user ftptest 143.14.121.41 port 60858 [preauth]
Nov 29 07:35:26 compute-0 podman[236182]: 2025-11-29 07:35:26.918564531 +0000 UTC m=+2.461014832 container init b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:35:26 compute-0 podman[236182]: 2025-11-29 07:35:26.970859248 +0000 UTC m=+2.513309469 container start b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:35:26 compute-0 priceless_benz[236203]: 167 167
Nov 29 07:35:26 compute-0 systemd[1]: libpod-b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483.scope: Deactivated successfully.
Nov 29 07:35:27 compute-0 podman[236182]: 2025-11-29 07:35:27.786650125 +0000 UTC m=+3.329100356 container attach b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:35:27 compute-0 podman[236182]: 2025-11-29 07:35:27.787883758 +0000 UTC m=+3.330334019 container died b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 07:35:27 compute-0 ceph-mon[75050]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 07:35:27 compute-0 ceph-mon[75050]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:35:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d598d74e64ad36199dfff7f30eee9fe3ed6916484849e6482111c1566d9eeb1-merged.mount: Deactivated successfully.
Nov 29 07:35:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:30 compute-0 ceph-mon[75050]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:35:30 compute-0 sshd-session[236235]: Connection closed by authenticating user root 143.14.121.41 port 60868 [preauth]
Nov 29 07:35:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:35:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:33 compute-0 sshd-session[236238]: Connection closed by authenticating user root 143.14.121.41 port 60870 [preauth]
Nov 29 07:35:34 compute-0 ceph-mon[75050]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:35 compute-0 podman[236026]: 2025-11-29 07:35:35.009354596 +0000 UTC m=+13.351026670 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:35:35 compute-0 podman[236182]: 2025-11-29 07:35:35.313497175 +0000 UTC m=+10.855947396 container remove b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_benz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:35:35 compute-0 systemd[1]: libpod-conmon-b7e2cf50155b1a25acf53b16006888c5f6da84d08270e443a7ffaba029ad0483.scope: Deactivated successfully.
Nov 29 07:35:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:35:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:35:35 compute-0 podman[236270]: 2025-11-29 07:35:35.496204683 +0000 UTC m=+0.026030295 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:35:35 compute-0 podman[236278]: 2025-11-29 07:35:35.504175189 +0000 UTC m=+0.020613930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:35:36 compute-0 sshd-session[236240]: Connection closed by authenticating user root 143.14.121.41 port 59488 [preauth]
Nov 29 07:35:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:38 compute-0 ceph-mon[75050]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:35:38 compute-0 ceph-mon[75050]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:38 compute-0 podman[236270]: 2025-11-29 07:35:38.421277044 +0000 UTC m=+2.951102606 container create 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:35:38 compute-0 python3[236009]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 07:35:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:39 compute-0 podman[236278]: 2025-11-29 07:35:39.245002037 +0000 UTC m=+3.761440788 container create 05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:35:39 compute-0 sshd-session[236296]: Connection closed by authenticating user root 143.14.121.41 port 59492 [preauth]
Nov 29 07:35:39 compute-0 ceph-mon[75050]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:39 compute-0 ceph-mon[75050]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:39 compute-0 systemd[1]: Started libpod-conmon-05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de.scope.
Nov 29 07:35:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0aa4000aca3d6eaa1d5763fb8c6693513174cdc36aa74bfc62f7e490fcbb6a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0aa4000aca3d6eaa1d5763fb8c6693513174cdc36aa74bfc62f7e490fcbb6a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0aa4000aca3d6eaa1d5763fb8c6693513174cdc36aa74bfc62f7e490fcbb6a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0aa4000aca3d6eaa1d5763fb8c6693513174cdc36aa74bfc62f7e490fcbb6a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:40 compute-0 podman[236278]: 2025-11-29 07:35:40.109930175 +0000 UTC m=+4.626368936 container init 05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:35:40 compute-0 podman[236278]: 2025-11-29 07:35:40.117327105 +0000 UTC m=+4.633765836 container start 05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:35:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 07:35:40 compute-0 podman[236278]: 2025-11-29 07:35:40.381862921 +0000 UTC m=+4.898301632 container attach 05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:35:40 compute-0 sudo[236007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]: {
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:     "0": [
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:         {
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "devices": [
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "/dev/loop3"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             ],
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_name": "ceph_lv0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_size": "21470642176",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "name": "ceph_lv0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "tags": {
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cluster_name": "ceph",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.crush_device_class": "",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.encrypted": "0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osd_id": "0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.type": "block",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.vdo": "0"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             },
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "type": "block",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "vg_name": "ceph_vg0"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:         }
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:     ],
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:     "1": [
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:         {
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "devices": [
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "/dev/loop4"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             ],
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_name": "ceph_lv1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_size": "21470642176",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "name": "ceph_lv1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "tags": {
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cluster_name": "ceph",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.crush_device_class": "",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.encrypted": "0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osd_id": "1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.type": "block",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.vdo": "0"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             },
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "type": "block",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "vg_name": "ceph_vg1"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:         }
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:     ],
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:     "2": [
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:         {
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "devices": [
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "/dev/loop5"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             ],
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_name": "ceph_lv2",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_size": "21470642176",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "name": "ceph_lv2",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "tags": {
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.cluster_name": "ceph",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.crush_device_class": "",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.encrypted": "0",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osd_id": "2",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.type": "block",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:                 "ceph.vdo": "0"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             },
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "type": "block",
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:             "vg_name": "ceph_vg2"
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:         }
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]:     ]
Nov 29 07:35:40 compute-0 sweet_mcclintock[236315]: }
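
The JSON emitted above by the ceph-volume "lvm list --format json" call maps each OSD id ("0", "1", "2") to its logical volume, backing loop device, and ceph.* tags. A minimal post-processing sketch, assuming the output has been saved to lvm_list.json (the file name and print format are illustrative, not part of this job):

    # Minimal sketch: parse the ceph-volume "lvm list --format json" output
    # shown above and map each OSD id to its LV path and backing devices.
    # Assumption: the JSON was saved to lvm_list.json; file name and print
    # format are illustrative only.
    import json

    with open("lvm_list.json") as fh:
        lvm = json.load(fh)

    for osd_id in sorted(lvm, key=int):
        for lv in lvm[osd_id]:
            devs = ",".join(lv.get("devices", []))
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} devices={devs} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

Run against the listing above, this would print one line per OSD, e.g. osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3, which is consistent with the earlier "passed data devices: 0 physical, 3 LVM" report.
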
Nov 29 07:35:40 compute-0 systemd[1]: libpod-05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de.scope: Deactivated successfully.
Nov 29 07:35:40 compute-0 podman[236278]: 2025-11-29 07:35:40.980354712 +0000 UTC m=+5.496793423 container died 05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:35:41 compute-0 sudo[236498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qujufxgvmtizeddxomofwyvvnuvbqsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401740.799483-541-214127390460167/AnsiballZ_stat.py'
Nov 29 07:35:41 compute-0 sudo[236498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:41 compute-0 python3.9[236500]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:41 compute-0 sudo[236498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:41 compute-0 sshd-session[236311]: Invalid user node from 143.14.121.41 port 59494
Nov 29 07:35:41 compute-0 ceph-mon[75050]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 07:35:41 compute-0 sshd-session[236311]: Connection closed by invalid user node 143.14.121.41 port 59494 [preauth]
Nov 29 07:35:41 compute-0 sudo[236652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsqkuhogyzilxfxvwqbzrgeimbjcmend ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401741.644153-550-161994162520591/AnsiballZ_file.py'
Nov 29 07:35:41 compute-0 sudo[236652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:42 compute-0 python3.9[236654]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:35:42 compute-0 sudo[236652]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:42 compute-0 sudo[236731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elwhjaomfbpqamejfpgbymafmhmxihsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401741.644153-550-161994162520591/AnsiballZ_stat.py'
Nov 29 07:35:42 compute-0 sudo[236731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:42 compute-0 python3.9[236733]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:42 compute-0 sudo[236731]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0aa4000aca3d6eaa1d5763fb8c6693513174cdc36aa74bfc62f7e490fcbb6a0-merged.mount: Deactivated successfully.
Nov 29 07:35:43 compute-0 sudo[236904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karlmjypthpdjwvrtdewhwwtpgmdqudc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401742.751917-550-155953103868113/AnsiballZ_copy.py'
Nov 29 07:35:43 compute-0 sudo[236904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:43 compute-0 sshd-session[236655]: Invalid user builder from 143.14.121.41 port 43828
Nov 29 07:35:43 compute-0 ceph-mon[75050]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:43 compute-0 python3.9[236906]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401742.751917-550-155953103868113/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:35:43 compute-0 sudo[236904]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:43 compute-0 sshd-session[236655]: Connection closed by invalid user builder 143.14.121.41 port 43828 [preauth]
Nov 29 07:35:43 compute-0 sudo[236980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmltzleppwjhmsqfrgdtfprqzqzjmtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401742.751917-550-155953103868113/AnsiballZ_systemd.py'
Nov 29 07:35:43 compute-0 sudo[236980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:44 compute-0 python3.9[236982]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:35:44 compute-0 systemd[1]: Reloading.
Nov 29 07:35:44 compute-0 systemd-rc-local-generator[237012]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:35:44 compute-0 systemd-sysv-generator[237016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:35:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:45 compute-0 podman[236278]: 2025-11-29 07:35:45.86566878 +0000 UTC m=+10.382107531 container remove 05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:35:45 compute-0 sudo[236115]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:45 compute-0 podman[236735]: 2025-11-29 07:35:45.937070665 +0000 UTC m=+3.310307988 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:35:46 compute-0 systemd[1]: libpod-conmon-05324419cb0cb481c826abc4b911b79865f532be0295654ecf40f7eb7ec605de.scope: Deactivated successfully.
Nov 29 07:35:46 compute-0 podman[236734]: 2025-11-29 07:35:46.019483927 +0000 UTC m=+3.386405199 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 07:35:46 compute-0 sudo[237035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:46 compute-0 sudo[236980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:46 compute-0 sudo[237035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:46 compute-0 sudo[237035]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:46 compute-0 sudo[237065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:35:46 compute-0 sudo[237065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:46 compute-0 sudo[237065]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:46 compute-0 sudo[237109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:46 compute-0 sudo[237109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:46 compute-0 sudo[237109]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:46 compute-0 sudo[237147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:35:46 compute-0 sudo[237147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:46 compute-0 sudo[237214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paiipenhyccgmlerbgtpmfaiphqgfcnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401742.751917-550-155953103868113/AnsiballZ_systemd.py'
Nov 29 07:35:46 compute-0 sudo[237214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:46 compute-0 podman[237257]: 2025-11-29 07:35:46.709160649 +0000 UTC m=+0.037706603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:35:46 compute-0 python3.9[237216]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:35:47 compute-0 systemd[1]: Reloading.
Nov 29 07:35:47 compute-0 systemd-rc-local-generator[237300]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:35:47 compute-0 systemd-sysv-generator[237303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:35:47 compute-0 sshd-session[236983]: Connection closed by authenticating user root 143.14.121.41 port 43838 [preauth]
Nov 29 07:35:48 compute-0 systemd[1]: Starting multipathd container...
Nov 29 07:35:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:48 compute-0 podman[237257]: 2025-11-29 07:35:48.305189431 +0000 UTC m=+1.633735335 container create bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:35:48 compute-0 ceph-mon[75050]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:48 compute-0 systemd[1]: Started libpod-conmon-bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532.scope.
Nov 29 07:35:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:49 compute-0 podman[237257]: 2025-11-29 07:35:49.110142785 +0000 UTC m=+2.438688679 container init bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:35:49 compute-0 podman[237257]: 2025-11-29 07:35:49.122478678 +0000 UTC m=+2.451024552 container start bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:35:49 compute-0 thirsty_bassi[237326]: 167 167
Nov 29 07:35:49 compute-0 systemd[1]: libpod-bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532.scope: Deactivated successfully.
Nov 29 07:35:49 compute-0 podman[237257]: 2025-11-29 07:35:49.533078171 +0000 UTC m=+2.861624135 container attach bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:35:49 compute-0 podman[237257]: 2025-11-29 07:35:49.534274283 +0000 UTC m=+2.862820217 container died bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:35:49 compute-0 sshd-session[237308]: Invalid user postgres from 143.14.121.41 port 43854
Nov 29 07:35:49 compute-0 ceph-mon[75050]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:49 compute-0 ceph-mon[75050]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:50 compute-0 sshd-session[237308]: Connection closed by invalid user postgres 143.14.121.41 port 43854 [preauth]
Nov 29 07:35:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ececb761ff73e85238bd798fc37b9c625f8d472c520a53ab586835d84dd949f8-merged.mount: Deactivated successfully.
Nov 29 07:35:51 compute-0 ceph-mon[75050]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:52 compute-0 podman[237257]: 2025-11-29 07:35:52.088609963 +0000 UTC m=+5.417155847 container remove bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:35:52 compute-0 systemd[1]: libpod-conmon-bbe29992f12fcbf3897a5291f6c3f581bbf00b5eb6d090a57b2aa290fd0f7532.scope: Deactivated successfully.
Nov 29 07:35:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:35:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108fb7d8e83c79594c0af0d1c5ac8c5ce218583bbb13c906515f4c93ccb0c7f7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108fb7d8e83c79594c0af0d1c5ac8c5ce218583bbb13c906515f4c93ccb0c7f7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f.
Nov 29 07:35:52 compute-0 podman[237311]: 2025-11-29 07:35:52.472123311 +0000 UTC m=+4.435469916 container init 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 29 07:35:52 compute-0 multipathd[237352]: + sudo -E kolla_set_configs
Nov 29 07:35:52 compute-0 podman[237311]: 2025-11-29 07:35:52.505031762 +0000 UTC m=+4.468378367 container start 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:35:52 compute-0 sudo[237375]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 07:35:52 compute-0 sudo[237375]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:35:52 compute-0 sudo[237375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:35:52 compute-0 multipathd[237352]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:35:52 compute-0 multipathd[237352]: INFO:__main__:Validating config file
Nov 29 07:35:52 compute-0 multipathd[237352]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:35:52 compute-0 multipathd[237352]: INFO:__main__:Writing out command to execute
Nov 29 07:35:52 compute-0 sudo[237375]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 multipathd[237352]: ++ cat /run_command
Nov 29 07:35:52 compute-0 multipathd[237352]: + CMD='/usr/sbin/multipathd -d'
Nov 29 07:35:52 compute-0 multipathd[237352]: + ARGS=
Nov 29 07:35:52 compute-0 multipathd[237352]: + sudo kolla_copy_cacerts
Nov 29 07:35:52 compute-0 sudo[237391]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 07:35:52 compute-0 sudo[237391]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:35:52 compute-0 sudo[237391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:35:52 compute-0 sudo[237391]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 multipathd[237352]: + [[ ! -n '' ]]
Nov 29 07:35:52 compute-0 multipathd[237352]: + . kolla_extend_start
Nov 29 07:35:52 compute-0 multipathd[237352]: Running command: '/usr/sbin/multipathd -d'
Nov 29 07:35:52 compute-0 multipathd[237352]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 07:35:52 compute-0 multipathd[237352]: + umask 0022
Nov 29 07:35:52 compute-0 multipathd[237352]: + exec /usr/sbin/multipathd -d
Nov 29 07:35:52 compute-0 multipathd[237352]: 4067.992311 | --------start up--------
Nov 29 07:35:52 compute-0 multipathd[237352]: 4067.992339 | read /etc/multipath.conf
Nov 29 07:35:52 compute-0 multipathd[237352]: 4068.001474 | path checkers start up
Nov 29 07:35:52 compute-0 sshd-session[237342]: Invalid user lab from 143.14.121.41 port 43858
Nov 29 07:35:52 compute-0 podman[237311]: multipathd
Nov 29 07:35:52 compute-0 systemd[1]: Started multipathd container.
Nov 29 07:35:52 compute-0 sudo[237214]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:52 compute-0 podman[237360]: 2025-11-29 07:35:52.899082996 +0000 UTC m=+0.691743749 container create 44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:35:52 compute-0 sshd-session[237342]: Connection closed by invalid user lab 143.14.121.41 port 43858 [preauth]
Nov 29 07:35:52 compute-0 podman[237360]: 2025-11-29 07:35:52.863133422 +0000 UTC m=+0.655794205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:35:53 compute-0 systemd[1]: Started libpod-conmon-44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901.scope.
Nov 29 07:35:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb813bad4ba76fa853ba7601cb281dc180f2241f231551e0d1521da67eee91d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb813bad4ba76fa853ba7601cb281dc180f2241f231551e0d1521da67eee91d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb813bad4ba76fa853ba7601cb281dc180f2241f231551e0d1521da67eee91d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb813bad4ba76fa853ba7601cb281dc180f2241f231551e0d1521da67eee91d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:53 compute-0 podman[237376]: 2025-11-29 07:35:53.138925403 +0000 UTC m=+0.613360925 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 07:35:53 compute-0 ceph-mon[75050]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:35:53 compute-0 podman[237360]: 2025-11-29 07:35:53.288892834 +0000 UTC m=+1.081553537 container init 44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:35:53 compute-0 podman[237360]: 2025-11-29 07:35:53.296455749 +0000 UTC m=+1.089116452 container start 44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_tesla, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 07:35:53 compute-0 podman[237360]: 2025-11-29 07:35:53.309701348 +0000 UTC m=+1.102362081 container attach 44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:35:53 compute-0 python3.9[237571]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:35:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:54 compute-0 sudo[237736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebpxofzjqobnxrgagzhbgjlcylzphidx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401753.8557725-586-39727818203873/AnsiballZ_command.py'
Nov 29 07:35:54 compute-0 sudo[237736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:54 compute-0 priceless_tesla[237453]: {
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "osd_id": 2,
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "type": "bluestore"
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:     },
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "osd_id": 1,
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "type": "bluestore"
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:     },
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "osd_id": 0,
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:         "type": "bluestore"
Nov 29 07:35:54 compute-0 priceless_tesla[237453]:     }
Nov 29 07:35:54 compute-0 priceless_tesla[237453]: }
Nov 29 07:35:54 compute-0 systemd[1]: libpod-44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901.scope: Deactivated successfully.
Nov 29 07:35:54 compute-0 systemd[1]: libpod-44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901.scope: Consumed 1.030s CPU time.
Nov 29 07:35:54 compute-0 python3.9[237739]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:35:54 compute-0 podman[237754]: 2025-11-29 07:35:54.379771453 +0000 UTC m=+0.025599254 container died 44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:35:55 compute-0 ceph-mon[75050]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Nov 29 07:35:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb813bad4ba76fa853ba7601cb281dc180f2241f231551e0d1521da67eee91d2-merged.mount: Deactivated successfully.
Nov 29 07:35:55 compute-0 podman[237754]: 2025-11-29 07:35:55.998469269 +0000 UTC m=+1.644297160 container remove 44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:35:56 compute-0 systemd[1]: libpod-conmon-44c4f0ad33c1b5c2355318a871428173222d15bb0603af89a60e92c911397901.scope: Deactivated successfully.
Nov 29 07:35:56 compute-0 sudo[237147]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:35:56 compute-0 sudo[237736]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 07:35:56 compute-0 sshd-session[237497]: Invalid user sftpuser from 143.14.121.41 port 44298
Nov 29 07:35:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:35:56 compute-0 sudo[237929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okqeutmaariyvrkxskpiffugovlxbuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401756.2802336-594-271521259860404/AnsiballZ_systemd.py'
Nov 29 07:35:56 compute-0 sudo[237929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:56 compute-0 sshd-session[237497]: Connection closed by invalid user sftpuser 143.14.121.41 port 44298 [preauth]
Nov 29 07:35:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:35:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:35:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4b9550df-1b30-413f-9833-959d0aeb1b04 does not exist
Nov 29 07:35:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5f0e2cc1-3116-49c8-a42f-e5aa4a8f4067 does not exist
Nov 29 07:35:56 compute-0 sudo[237932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:35:56 compute-0 sudo[237932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:56 compute-0 sudo[237932]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:56 compute-0 sudo[237957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:35:56 compute-0 sudo[237957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:35:56 compute-0 sudo[237957]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:57 compute-0 python3.9[237931]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:35:57 compute-0 systemd[1]: Stopping multipathd container...
Nov 29 07:35:57 compute-0 multipathd[237352]: 4072.650810 | exit (signal)
Nov 29 07:35:57 compute-0 multipathd[237352]: 4072.651455 | --------shut down-------
Nov 29 07:35:57 compute-0 systemd[1]: libpod-53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f.scope: Deactivated successfully.
Nov 29 07:35:57 compute-0 podman[237986]: 2025-11-29 07:35:57.31018406 +0000 UTC m=+0.174891279 container died 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:35:57 compute-0 systemd[1]: 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f-3769fa11ace5d48d.timer: Deactivated successfully.
Nov 29 07:35:57 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f.
Nov 29 07:35:57 compute-0 ceph-mon[75050]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 07:35:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:35:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f-userdata-shm.mount: Deactivated successfully.
Nov 29 07:35:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-108fb7d8e83c79594c0af0d1c5ac8c5ce218583bbb13c906515f4c93ccb0c7f7-merged.mount: Deactivated successfully.
Nov 29 07:35:57 compute-0 podman[237986]: 2025-11-29 07:35:57.428490094 +0000 UTC m=+0.293197253 container cleanup 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 29 07:35:57 compute-0 podman[237986]: multipathd
Nov 29 07:35:57 compute-0 podman[238016]: multipathd
Nov 29 07:35:57 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 07:35:57 compute-0 systemd[1]: Stopped multipathd container.
Nov 29 07:35:57 compute-0 systemd[1]: Starting multipathd container...
Nov 29 07:35:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108fb7d8e83c79594c0af0d1c5ac8c5ce218583bbb13c906515f4c93ccb0c7f7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108fb7d8e83c79594c0af0d1c5ac8c5ce218583bbb13c906515f4c93ccb0c7f7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:35:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f.
Nov 29 07:35:57 compute-0 podman[238026]: 2025-11-29 07:35:57.654330602 +0000 UTC m=+0.133805736 container init 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:35:57 compute-0 multipathd[238041]: + sudo -E kolla_set_configs
Nov 29 07:35:57 compute-0 sudo[238047]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 07:35:57 compute-0 podman[238026]: 2025-11-29 07:35:57.683287796 +0000 UTC m=+0.162762930 container start 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:35:57 compute-0 sudo[238047]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:35:57 compute-0 sudo[238047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:35:57 compute-0 podman[238026]: multipathd
Nov 29 07:35:57 compute-0 systemd[1]: Started multipathd container.
Nov 29 07:35:57 compute-0 multipathd[238041]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:35:57 compute-0 sudo[237929]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:57 compute-0 multipathd[238041]: INFO:__main__:Validating config file
Nov 29 07:35:57 compute-0 multipathd[238041]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:35:57 compute-0 multipathd[238041]: INFO:__main__:Writing out command to execute
Nov 29 07:35:57 compute-0 sudo[238047]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:57 compute-0 multipathd[238041]: ++ cat /run_command
Nov 29 07:35:57 compute-0 multipathd[238041]: + CMD='/usr/sbin/multipathd -d'
Nov 29 07:35:57 compute-0 multipathd[238041]: + ARGS=
Nov 29 07:35:57 compute-0 multipathd[238041]: + sudo kolla_copy_cacerts
Nov 29 07:35:57 compute-0 podman[238048]: 2025-11-29 07:35:57.747984328 +0000 UTC m=+0.055102674 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:35:57 compute-0 sudo[238071]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 07:35:57 compute-0 sudo[238071]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 07:35:57 compute-0 sudo[238071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 07:35:57 compute-0 sudo[238071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:57 compute-0 systemd[1]: 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f-6af3e4fb7442b0af.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 07:35:57 compute-0 systemd[1]: 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f-6af3e4fb7442b0af.service: Failed with result 'exit-code'.
Nov 29 07:35:57 compute-0 multipathd[238041]: + [[ ! -n '' ]]
Nov 29 07:35:57 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:35:57 compute-0 multipathd[238041]: + . kolla_extend_start
Nov 29 07:35:57 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:35:57 compute-0 multipathd[238041]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 07:35:57 compute-0 multipathd[238041]: Running command: '/usr/sbin/multipathd -d'
Nov 29 07:35:57 compute-0 multipathd[238041]: + umask 0022
Nov 29 07:35:57 compute-0 multipathd[238041]: + exec /usr/sbin/multipathd -d
Nov 29 07:35:57 compute-0 multipathd[238041]: 4073.142693 | --------start up--------
Nov 29 07:35:57 compute-0 multipathd[238041]: 4073.142706 | read /etc/multipath.conf
Nov 29 07:35:57 compute-0 multipathd[238041]: 4073.147690 | path checkers start up
Nov 29 07:35:58 compute-0 sudo[238231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urobpewamozmrbskeklogymbxqnqugjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401757.9056864-602-73142935839327/AnsiballZ_file.py'
Nov 29 07:35:58 compute-0 sudo[238231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 29 07:35:58 compute-0 python3.9[238233]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:35:58 compute-0 sudo[238231]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:35:59 compute-0 sudo[238383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqlznbmypiasducacedcnjmwyxzgxwwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401759.1252453-614-162334755623467/AnsiballZ_file.py'
Nov 29 07:35:59 compute-0 sudo[238383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:35:59 compute-0 ceph-mon[75050]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 29 07:35:59 compute-0 python3.9[238385]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 07:35:59 compute-0 sudo[238383]: pam_unix(sudo:session): session closed for user root
Nov 29 07:35:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:35:59.750 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:35:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:35:59.751 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:35:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:35:59.751 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:36:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Nov 29 07:36:00 compute-0 sudo[238535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeglmkoxrcjqllikfwqunglhjoxkexph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401759.9263868-622-12186218408813/AnsiballZ_modprobe.py'
Nov 29 07:36:00 compute-0 sudo[238535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:00 compute-0 sshd-session[237982]: Connection closed by authenticating user root 143.14.121.41 port 44312 [preauth]
Nov 29 07:36:00 compute-0 python3.9[238537]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 07:36:00 compute-0 kernel: Key type psk registered
Nov 29 07:36:00 compute-0 sudo[238535]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:00 compute-0 sudo[238700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giutacmxneowxfmhvwdqenoccihlruer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401760.6339934-630-236301018015663/AnsiballZ_stat.py'
Nov 29 07:36:00 compute-0 sudo[238700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:01 compute-0 python3.9[238702]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:36:01 compute-0 sudo[238700]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:01 compute-0 sudo[238823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blylydgqoahdykybwhixtizqttjqzpug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401760.6339934-630-236301018015663/AnsiballZ_copy.py'
Nov 29 07:36:01 compute-0 sudo[238823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:01 compute-0 ceph-mon[75050]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Nov 29 07:36:01 compute-0 python3.9[238825]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401760.6339934-630-236301018015663/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:01 compute-0 sudo[238823]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 07:36:02 compute-0 sudo[238975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhlnbssowfbzvdnormbwaszybnpgrstj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401762.0382996-646-219593412846022/AnsiballZ_lineinfile.py'
Nov 29 07:36:02 compute-0 sudo[238975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:02 compute-0 python3.9[238977]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:02 compute-0 sudo[238975]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:03 compute-0 sudo[239127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddjxcgrfmftqnxneqlbhgbhyxehrblyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401762.766699-654-84272229285013/AnsiballZ_systemd.py'
Nov 29 07:36:03 compute-0 sudo[239127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:03 compute-0 python3.9[239129]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:36:03 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 07:36:03 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 07:36:03 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 07:36:03 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 07:36:03 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 07:36:03 compute-0 sudo[239127]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:03 compute-0 sshd-session[238570]: Connection closed by authenticating user root 143.14.121.41 port 44322 [preauth]
Nov 29 07:36:03 compute-0 ceph-mon[75050]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 07:36:03 compute-0 sudo[239283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiweqpzhrzduuczoltceuhgvavupfydw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401763.6948357-662-239901266010409/AnsiballZ_dnf.py'
Nov 29 07:36:03 compute-0 sudo[239283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Nov 29 07:36:04 compute-0 python3.9[239285]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 07:36:04 compute-0 ceph-mon[75050]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:36:05
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr']
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:36:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:36:06 compute-0 systemd[1]: Reloading.
Nov 29 07:36:07 compute-0 systemd-rc-local-generator[239321]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:36:07 compute-0 systemd-sysv-generator[239326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:36:07 compute-0 systemd[1]: Reloading.
Nov 29 07:36:07 compute-0 systemd-rc-local-generator[239355]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:36:07 compute-0 systemd-sysv-generator[239358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:36:07 compute-0 ceph-mon[75050]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 29 07:36:07 compute-0 sshd-session[239287]: Invalid user 1234 from 143.14.121.41 port 52972
Nov 29 07:36:07 compute-0 systemd-logind[807]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 07:36:07 compute-0 systemd-logind[807]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 07:36:07 compute-0 lvm[239403]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 07:36:07 compute-0 lvm[239403]: VG ceph_vg2 finished
Nov 29 07:36:07 compute-0 lvm[239402]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 07:36:07 compute-0 lvm[239402]: VG ceph_vg0 finished
Nov 29 07:36:07 compute-0 lvm[239401]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 07:36:07 compute-0 lvm[239401]: VG ceph_vg1 finished
Nov 29 07:36:08 compute-0 sshd-session[239287]: Connection closed by invalid user 1234 143.14.121.41 port 52972 [preauth]
Nov 29 07:36:08 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 07:36:08 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 07:36:08 compute-0 systemd[1]: Reloading.
Nov 29 07:36:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 07:36:08 compute-0 systemd-rc-local-generator[239459]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:36:08 compute-0 systemd-sysv-generator[239464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:36:08 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 07:36:09 compute-0 ceph-mon[75050]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Nov 29 07:36:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:09 compute-0 sudo[239283]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 07:36:10 compute-0 sudo[240746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdazfahtcpfqgsuclebjavtymxflmde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401770.1498642-670-7007569246231/AnsiballZ_systemd_service.py'
Nov 29 07:36:10 compute-0 sudo[240746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:10 compute-0 python3.9[240748]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:36:10 compute-0 iscsid[227937]: iscsid shutting down.
Nov 29 07:36:10 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 29 07:36:10 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 07:36:10 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 29 07:36:10 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 07:36:10 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 07:36:10 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 07:36:10 compute-0 sudo[240746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:11 compute-0 ceph-mon[75050]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 07:36:11 compute-0 sshd-session[239466]: Connection closed by authenticating user root 143.14.121.41 port 52986 [preauth]
Nov 29 07:36:11 compute-0 python3.9[240903]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 07:36:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 07:36:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 07:36:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.613s CPU time.
Nov 29 07:36:11 compute-0 systemd[1]: run-rb9e7735de4df41218fdda236fdc34bcc.service: Deactivated successfully.
Nov 29 07:36:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 07:36:12 compute-0 sudo[241060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjsslxvrtgbgjeuhcwqzqxldxklrkvck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401772.0823054-688-168955562238125/AnsiballZ_file.py'
Nov 29 07:36:12 compute-0 sudo[241060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:12 compute-0 python3.9[241062]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:12 compute-0 sudo[241060]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:12 compute-0 ceph-mon[75050]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 07:36:13 compute-0 sudo[241212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qecbfngbhdouewtxnqyfsvbkqzmhcpxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401772.877338-699-39407307209261/AnsiballZ_systemd_service.py'
Nov 29 07:36:13 compute-0 sudo[241212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:13 compute-0 python3.9[241214]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:36:13 compute-0 systemd[1]: Reloading.
Nov 29 07:36:13 compute-0 systemd-rc-local-generator[241242]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:36:13 compute-0 systemd-sysv-generator[241245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:36:13 compute-0 sudo[241212]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:36:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:14 compute-0 python3.9[241400]: ansible-ansible.builtin.service_facts Invoked
Nov 29 07:36:14 compute-0 network[241417]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 07:36:14 compute-0 network[241418]: 'network-scripts' will be removed from distribution in near future.
Nov 29 07:36:14 compute-0 network[241419]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 07:36:14 compute-0 sshd-session[240909]: Connection closed by authenticating user root 143.14.121.41 port 37202 [preauth]
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:36:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:36:15 compute-0 ceph-mon[75050]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 07:36:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:36:16 compute-0 podman[241432]: 2025-11-29 07:36:16.629112972 +0000 UTC m=+0.090645216 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:36:16 compute-0 podman[241430]: 2025-11-29 07:36:16.722901482 +0000 UTC m=+0.184061566 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:36:17 compute-0 ceph-mon[75050]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:36:18 compute-0 sshd-session[241425]: Invalid user admin from 143.14.121.41 port 37218
Nov 29 07:36:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:36:18 compute-0 sshd-session[241425]: Connection closed by invalid user admin 143.14.121.41 port 37218 [preauth]
Nov 29 07:36:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:20 compute-0 ceph-mon[75050]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Nov 29 07:36:20 compute-0 sshd-session[241527]: Connection closed by authenticating user root 143.14.121.41 port 37228 [preauth]
Nov 29 07:36:21 compute-0 ceph-mon[75050]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:22 compute-0 sudo[241743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhfvfitrzliuxcfmhpxbtsfixywsigbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401781.7023716-718-62657231764860/AnsiballZ_systemd_service.py'
Nov 29 07:36:22 compute-0 sudo[241743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:22 compute-0 python3.9[241745]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:22 compute-0 sudo[241743]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:22 compute-0 sudo[241896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbnchkxeegcziniqrfywhrnsdiywnrcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401782.4625688-718-252932211525170/AnsiballZ_systemd_service.py'
Nov 29 07:36:22 compute-0 sudo[241896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:23 compute-0 python3.9[241898]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:23 compute-0 sudo[241896]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:23 compute-0 ceph-mon[75050]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:23 compute-0 sudo[242049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnpafpgjthdhuzwjrwbgfyhlhurlzodp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401783.293554-718-146664604786331/AnsiballZ_systemd_service.py'
Nov 29 07:36:23 compute-0 sudo[242049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:23 compute-0 python3.9[242051]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:23 compute-0 sudo[242049]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:24 compute-0 sudo[242202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oypwrqjxrcutyacglvjmyjrjefpdunih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401784.0572574-718-35220449949942/AnsiballZ_systemd_service.py'
Nov 29 07:36:24 compute-0 sudo[242202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:24 compute-0 sshd-session[241587]: Connection closed by authenticating user root 143.14.121.41 port 37230 [preauth]
Nov 29 07:36:24 compute-0 python3.9[242204]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:24 compute-0 sudo[242202]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:25 compute-0 sudo[242356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvmmngcjnqqwflegnkgupezqlnorfld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401784.8614564-718-82620756535492/AnsiballZ_systemd_service.py'
Nov 29 07:36:25 compute-0 sudo[242356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:25 compute-0 python3.9[242358]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:25 compute-0 sudo[242356]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:26 compute-0 sudo[242510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osjncfsasuxgfgdwzdhvoixakpflchfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401785.7505262-718-143189689013470/AnsiballZ_systemd_service.py'
Nov 29 07:36:26 compute-0 sudo[242510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:27 compute-0 python3.9[242512]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:27 compute-0 ceph-mon[75050]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:27 compute-0 sudo[242510]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:27 compute-0 sudo[242663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkufszcuutzuxpsxfeguowpssvidroth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401787.2051558-718-91142611163689/AnsiballZ_systemd_service.py'
Nov 29 07:36:27 compute-0 sudo[242663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:27 compute-0 python3.9[242665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:27 compute-0 sudo[242663]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:27 compute-0 podman[242667]: 2025-11-29 07:36:27.955994744 +0000 UTC m=+0.070566212 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:36:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:28 compute-0 ceph-mon[75050]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:28 compute-0 sudo[242839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcxejkpftzfptzzffvcofsheubzncyse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401788.0399096-718-253615512719878/AnsiballZ_systemd_service.py'
Nov 29 07:36:28 compute-0 sudo[242839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:28 compute-0 python3.9[242841]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:36:28 compute-0 sshd-session[242206]: Connection closed by authenticating user root 143.14.121.41 port 47894 [preauth]
Nov 29 07:36:28 compute-0 sudo[242839]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:29 compute-0 sudo[242994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzdwvgccjayxtwcyjfzlxssegdrdoxba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401789.0485835-777-258254192672197/AnsiballZ_file.py'
Nov 29 07:36:29 compute-0 sudo[242994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:29 compute-0 python3.9[242996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:29 compute-0 sudo[242994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:29 compute-0 ceph-mon[75050]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:29 compute-0 sudo[243146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlbxtkazolmgpqdaskkvbbgidhrqazsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401789.715292-777-17302837065095/AnsiballZ_file.py'
Nov 29 07:36:29 compute-0 sudo[243146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:30 compute-0 python3.9[243148]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:30 compute-0 sudo[243146]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:30 compute-0 sudo[243298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgleakmvejnugypajzlbwwzfcoxdvsly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401790.2765121-777-178759827881090/AnsiballZ_file.py'
Nov 29 07:36:30 compute-0 sudo[243298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:30 compute-0 python3.9[243300]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:30 compute-0 sudo[243298]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:30 compute-0 ceph-mon[75050]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:31 compute-0 sudo[243450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfypqxmeknijrywwispjztxdrnoysbhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401790.8814678-777-211487012722899/AnsiballZ_file.py'
Nov 29 07:36:31 compute-0 sudo[243450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:31 compute-0 sshd-session[242867]: Invalid user a from 143.14.121.41 port 47900
Nov 29 07:36:31 compute-0 python3.9[243452]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:31 compute-0 sudo[243450]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:31 compute-0 sshd-session[242867]: Connection closed by invalid user a 143.14.121.41 port 47900 [preauth]
Nov 29 07:36:31 compute-0 sudo[243602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdnozzbpfdhcjtzkmusnmgkboigslowc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401791.469149-777-252983835970853/AnsiballZ_file.py'
Nov 29 07:36:31 compute-0 sudo[243602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:31 compute-0 python3.9[243604]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:31 compute-0 sudo[243602]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:32 compute-0 sudo[243755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfgmutgjvbadjmrynsrrjczccacjvxgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401792.1268592-777-176258270951918/AnsiballZ_file.py'
Nov 29 07:36:32 compute-0 sudo[243755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:32 compute-0 python3.9[243757]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:32 compute-0 sudo[243755]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:33 compute-0 ceph-mon[75050]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:33 compute-0 sudo[243907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsxgzofcowsrocyoglxnylatdhpvljon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401792.8246155-777-85972661516842/AnsiballZ_file.py'
Nov 29 07:36:33 compute-0 sudo[243907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:33 compute-0 python3.9[243909]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:33 compute-0 sudo[243907]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:33 compute-0 sudo[244060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cphhsmpclyakmgdtmasigiexdqcaonba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401793.3980432-777-44164906609117/AnsiballZ_file.py'
Nov 29 07:36:33 compute-0 sudo[244060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:33 compute-0 python3.9[244062]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:33 compute-0 sudo[244060]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:34 compute-0 sudo[244212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygjmchdfdinpjjrwaxeambfgkjluzcnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401794.0389264-834-198463037959172/AnsiballZ_file.py'
Nov 29 07:36:34 compute-0 sudo[244212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:34 compute-0 python3.9[244214]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:34 compute-0 sudo[244212]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:34 compute-0 sshd-session[243605]: Invalid user test123 from 143.14.121.41 port 48882
Nov 29 07:36:34 compute-0 sudo[244364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geliindplonlydoxryyyasieyhovhbuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401794.626093-834-156317149434996/AnsiballZ_file.py'
Nov 29 07:36:34 compute-0 sudo[244364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:34 compute-0 sshd-session[243605]: Connection closed by invalid user test123 143.14.121.41 port 48882 [preauth]
Nov 29 07:36:35 compute-0 python3.9[244366]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:35 compute-0 sudo[244364]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:35 compute-0 sudo[244517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-purizyvtrquscskhqjgvfbjshhbotgja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401795.2109075-834-126166022466598/AnsiballZ_file.py'
Nov 29 07:36:35 compute-0 sudo[244517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:36:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:36:35 compute-0 python3.9[244519]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:35 compute-0 sudo[244517]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:35 compute-0 ceph-mon[75050]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:36 compute-0 sudo[244669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbnxnlsmcgurufglshxvdopxrmltitcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401795.8339-834-47603092683481/AnsiballZ_file.py'
Nov 29 07:36:36 compute-0 sudo[244669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:36 compute-0 python3.9[244671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:36 compute-0 sudo[244669]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:36 compute-0 sudo[244822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttqqqxnwhrrverlmyfdgnljxyiixpzfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401796.496095-834-148558916084996/AnsiballZ_file.py'
Nov 29 07:36:36 compute-0 sudo[244822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:36 compute-0 python3.9[244824]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:36 compute-0 sudo[244822]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:37 compute-0 ceph-mon[75050]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:37 compute-0 sudo[244974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxyxvpacsshqdjvqexkgeehvybjbkpal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401797.1048288-834-241482983124403/AnsiballZ_file.py'
Nov 29 07:36:37 compute-0 sudo[244974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:37 compute-0 python3.9[244976]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:37 compute-0 sudo[244974]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:38 compute-0 sudo[245126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuekbzmdqxchjhqhztpehwdjayqqmgbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401797.8031723-834-189685425016929/AnsiballZ_file.py'
Nov 29 07:36:38 compute-0 sudo[245126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:38 compute-0 python3.9[245128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:38 compute-0 sudo[245126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:38 compute-0 sudo[245278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baqkcgglwmskqeechrgxdkjtmsmqbiqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401798.4156568-834-210946782389935/AnsiballZ_file.py'
Nov 29 07:36:38 compute-0 sudo[245278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:38 compute-0 python3.9[245280]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:36:38 compute-0 sudo[245278]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:38 compute-0 sshd-session[244391]: Connection closed by authenticating user root 143.14.121.41 port 48886 [preauth]
Nov 29 07:36:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:39 compute-0 ceph-mon[75050]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:39 compute-0 sudo[245432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lliwcjjwuwzzatecafwndbmvfddwjnwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401799.2273574-892-43366134964422/AnsiballZ_command.py'
Nov 29 07:36:39 compute-0 sudo[245432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:39 compute-0 python3.9[245434]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:39 compute-0 sudo[245432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:40 compute-0 python3.9[245586]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 07:36:41 compute-0 sudo[245736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srbtexerehqseshyrlpfmrwycpynlhrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401800.858594-910-45879569132749/AnsiballZ_systemd_service.py'
Nov 29 07:36:41 compute-0 sudo[245736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:41 compute-0 python3.9[245738]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:36:41 compute-0 systemd[1]: Reloading.
Nov 29 07:36:41 compute-0 ceph-mon[75050]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:41 compute-0 systemd-rc-local-generator[245768]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:36:41 compute-0 systemd-sysv-generator[245772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:36:41 compute-0 sudo[245736]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:42 compute-0 sudo[245924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-matlhutwvurheziwlbsyuuzihpdundwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401801.948863-918-75190886363398/AnsiballZ_command.py'
Nov 29 07:36:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:42 compute-0 sudo[245924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:42 compute-0 sshd-session[245305]: Connection closed by authenticating user root 143.14.121.41 port 48900 [preauth]
Nov 29 07:36:42 compute-0 python3.9[245926]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:42 compute-0 sudo[245924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:42 compute-0 sudo[246078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drvvrqdwlbcjtacvykskpxlohzpbahfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401802.5705814-918-215638354290003/AnsiballZ_command.py'
Nov 29 07:36:42 compute-0 sudo[246078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:43 compute-0 python3.9[246080]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:43 compute-0 ceph-mon[75050]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:43 compute-0 sudo[246078]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:43 compute-0 sudo[246232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkfyzsqpwvqmwctrsctylwndzvmfcvtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401803.2678068-918-176948993909564/AnsiballZ_command.py'
Nov 29 07:36:43 compute-0 sudo[246232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:43 compute-0 python3.9[246234]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:43 compute-0 sudo[246232]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:44 compute-0 sudo[246385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrvunjveknremvbkyurgfbayensirph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401803.9206047-918-95834669454450/AnsiballZ_command.py'
Nov 29 07:36:44 compute-0 sudo[246385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:44 compute-0 python3.9[246387]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:44 compute-0 sudo[246385]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:44 compute-0 sshd-session[246004]: Connection closed by authenticating user root 143.14.121.41 port 53410 [preauth]
Nov 29 07:36:44 compute-0 sudo[246538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrsavqmypglmzdjghezgudgoadnmejrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401804.4992337-918-168437930988008/AnsiballZ_command.py'
Nov 29 07:36:44 compute-0 sudo[246538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:44 compute-0 python3.9[246541]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:45 compute-0 sudo[246538]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:45 compute-0 sudo[246693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exzvucwdigbkswbntjsgmlktbvfiisqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401805.1651266-918-142288855319220/AnsiballZ_command.py'
Nov 29 07:36:45 compute-0 sudo[246693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:46 compute-0 ceph-mon[75050]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:47 compute-0 python3.9[246695]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:47 compute-0 sudo[246693]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:47 compute-0 podman[246698]: 2025-11-29 07:36:47.169826289 +0000 UTC m=+0.058794204 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:36:47 compute-0 podman[246697]: 2025-11-29 07:36:47.201435835 +0000 UTC m=+0.090136903 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:36:47 compute-0 sshd-session[246539]: Connection closed by authenticating user root 143.14.121.41 port 53412 [preauth]
Nov 29 07:36:47 compute-0 sudo[246889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtqzwndrjzttaanebihrdomjrwxuyvst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401807.2741709-918-52776243590328/AnsiballZ_command.py'
Nov 29 07:36:47 compute-0 sudo[246889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:47 compute-0 python3.9[246891]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:47 compute-0 sudo[246889]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:48 compute-0 ceph-mon[75050]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:48 compute-0 sudo[247044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otuqlfummpofckcrqjnoanjjrroszcjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401807.9354432-918-29986816106217/AnsiballZ_command.py'
Nov 29 07:36:48 compute-0 sudo[247044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:48 compute-0 python3.9[247046]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 07:36:48 compute-0 sudo[247044]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:49 compute-0 ceph-mon[75050]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:49 compute-0 sshd-session[246892]: Connection closed by authenticating user root 143.14.121.41 port 53424 [preauth]
Nov 29 07:36:49 compute-0 sudo[247197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzzccvfsidxnbqyfnpztemsnztoavlni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401809.413133-997-251518209204514/AnsiballZ_file.py'
Nov 29 07:36:49 compute-0 sudo[247197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:49 compute-0 python3.9[247199]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:49 compute-0 sudo[247197]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:50 compute-0 sudo[247351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchqotsuuigivkazvbstlbnqvelkiiae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401810.184851-997-276064726957528/AnsiballZ_file.py'
Nov 29 07:36:50 compute-0 sudo[247351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:50 compute-0 python3.9[247353]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:50 compute-0 sudo[247351]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:50 compute-0 ceph-mon[75050]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:51 compute-0 sudo[247503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzkblknmdjloszbuqhdltjtsttoiwvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401810.8742795-997-257441723182002/AnsiballZ_file.py'
Nov 29 07:36:51 compute-0 sudo[247503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:51 compute-0 python3.9[247505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:51 compute-0 sudo[247503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:51 compute-0 sudo[247655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnoukggrlsqrdibdltwnpddyfhhyudmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401811.6121898-1019-36210353162394/AnsiballZ_file.py'
Nov 29 07:36:51 compute-0 sudo[247655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:52 compute-0 sshd-session[247200]: Connection closed by authenticating user root 143.14.121.41 port 53426 [preauth]
Nov 29 07:36:52 compute-0 python3.9[247657]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:52 compute-0 sudo[247655]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:52 compute-0 sudo[247809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgmkkrqrlrocjghgqxqrnlczbuikatsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401812.4926279-1019-106064415608599/AnsiballZ_file.py'
Nov 29 07:36:52 compute-0 sudo[247809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:53 compute-0 python3.9[247811]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:53 compute-0 sudo[247809]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:53 compute-0 sudo[247961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-carchrrexrryqpqwobqfzziadkvmjrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401813.2679605-1019-41852245184169/AnsiballZ_file.py'
Nov 29 07:36:53 compute-0 sudo[247961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:53 compute-0 ceph-mon[75050]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:53 compute-0 python3.9[247963]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:53 compute-0 sudo[247961]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:54 compute-0 sudo[248113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijefjiiskxbhrbjslxdyfmnrdfkdvgcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401813.9375052-1019-80456734319723/AnsiballZ_file.py'
Nov 29 07:36:54 compute-0 sudo[248113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:54 compute-0 python3.9[248115]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:54 compute-0 sudo[248113]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:54 compute-0 sudo[248265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlogyyijlrpabidnphnvykruimidjosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401814.5666876-1019-21264048580069/AnsiballZ_file.py'
Nov 29 07:36:54 compute-0 sudo[248265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:54 compute-0 sshd-session[247672]: Connection closed by authenticating user root 143.14.121.41 port 44922 [preauth]
Nov 29 07:36:55 compute-0 python3.9[248267]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:55 compute-0 sudo[248265]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:55 compute-0 sudo[248417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihodbqjmaoeqxnanfojjzwhlywljgpao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401815.3230252-1019-197521617596112/AnsiballZ_file.py'
Nov 29 07:36:55 compute-0 sudo[248417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:55 compute-0 python3.9[248419]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:55 compute-0 sudo[248417]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:56 compute-0 ceph-mon[75050]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:56 compute-0 sudo[248570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgowgrlmezmpwnyzxkmffverwxwesyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401816.0105877-1019-162383987847774/AnsiballZ_file.py'
Nov 29 07:36:56 compute-0 sudo[248570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:36:56 compute-0 python3.9[248572]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:36:56 compute-0 sudo[248570]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:57 compute-0 sudo[248598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:57 compute-0 sudo[248598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:57 compute-0 sudo[248598]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:57 compute-0 sudo[248623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:36:57 compute-0 sudo[248623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:57 compute-0 sudo[248623]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:57 compute-0 sudo[248648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:36:57 compute-0 sudo[248648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:57 compute-0 sudo[248648]: pam_unix(sudo:session): session closed for user root
Nov 29 07:36:57 compute-0 sudo[248673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:36:57 compute-0 sudo[248673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:36:57 compute-0 ceph-mon[75050]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:58 compute-0 podman[248769]: 2025-11-29 07:36:58.337497579 +0000 UTC m=+0.565775787 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:36:58 compute-0 podman[248789]: 2025-11-29 07:36:58.577294085 +0000 UTC m=+0.138397600 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:36:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:36:59 compute-0 podman[248769]: 2025-11-29 07:36:59.33910617 +0000 UTC m=+1.567384368 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:36:59 compute-0 sshd-session[248542]: Connection closed by authenticating user root 143.14.121.41 port 44928 [preauth]
Nov 29 07:36:59 compute-0 podman[248802]: 2025-11-29 07:36:59.567226988 +0000 UTC m=+0.936654881 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:36:59 compute-0 ceph-mon[75050]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:36:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:36:59.751 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:36:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:36:59.751 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:36:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:36:59.752 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:37:00 compute-0 sudo[248673]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:37:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:37:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:00 compute-0 sudo[248951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:00 compute-0 sudo[248951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:00 compute-0 sudo[248951]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:00 compute-0 sudo[248976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:00 compute-0 sudo[248976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:00 compute-0 sudo[248976]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:00 compute-0 sudo[249001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:00 compute-0 sudo[249001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:00 compute-0 sudo[249001]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:00 compute-0 sudo[249026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:37:00 compute-0 sudo[249026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:00 compute-0 sudo[249026]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:37:01 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:37:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:37:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:37:01 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:37:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:37:01 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:01 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7c678593-315b-42e4-90bc-96f87c368810 does not exist
Nov 29 07:37:01 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 57d2cf39-eb0a-4280-af29-1a4e0e42d55c does not exist
Nov 29 07:37:01 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7a516331-a987-4fc5-818c-695eb86da650 does not exist
Nov 29 07:37:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:37:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:37:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:37:01 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:37:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:37:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:01 compute-0 sudo[249106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:01 compute-0 sudo[249106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:01 compute-0 sudo[249106]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:01 compute-0 ceph-mon[75050]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:37:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:37:01 compute-0 sudo[249159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:01 compute-0 sudo[249159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:01 compute-0 sudo[249159]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:01 compute-0 sudo[249184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:01 compute-0 sudo[249184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:01 compute-0 sudo[249184]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:01 compute-0 sudo[249209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:37:01 compute-0 sudo[249209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:02 compute-0 sudo[249320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akynqfornfjrpwdkewaqbvvdkeqmcfft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401821.5477679-1208-183183205007858/AnsiballZ_getent.py'
Nov 29 07:37:02 compute-0 sudo[249320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:02 compute-0 python3.9[249324]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 07:37:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:02 compute-0 sshd-session[248860]: Connection closed by authenticating user root 143.14.121.41 port 44934 [preauth]
Nov 29 07:37:02 compute-0 sudo[249320]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:02 compute-0 podman[249351]: 2025-11-29 07:37:02.280509903 +0000 UTC m=+0.096455634 container create 412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:37:02 compute-0 podman[249351]: 2025-11-29 07:37:02.254741196 +0000 UTC m=+0.070686977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:02 compute-0 systemd[1]: Started libpod-conmon-412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990.scope.
Nov 29 07:37:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:02 compute-0 sudo[249523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapjfumdlntgkvlrnioryxwvxohzvopr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401822.4577262-1216-263746152954847/AnsiballZ_group.py'
Nov 29 07:37:02 compute-0 sudo[249523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:03 compute-0 python3.9[249525]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 07:37:03 compute-0 podman[249351]: 2025-11-29 07:37:03.292300261 +0000 UTC m=+1.108246082 container init 412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wiles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:03 compute-0 podman[249351]: 2025-11-29 07:37:03.305812606 +0000 UTC m=+1.121758337 container start 412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:37:03 compute-0 jovial_wiles[249416]: 167 167
Nov 29 07:37:03 compute-0 systemd[1]: libpod-412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990.scope: Deactivated successfully.
Nov 29 07:37:03 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:03 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:37:03 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:37:03 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:37:03 compute-0 podman[249351]: 2025-11-29 07:37:03.629475523 +0000 UTC m=+1.445421274 container attach 412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:03 compute-0 podman[249351]: 2025-11-29 07:37:03.630731417 +0000 UTC m=+1.446677158 container died 412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wiles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:04 compute-0 groupadd[249526]: group added to /etc/group: name=nova, GID=42436
Nov 29 07:37:05 compute-0 sshd-session[249448]: Connection closed by authenticating user root 143.14.121.41 port 47072 [preauth]
Nov 29 07:37:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:37:05
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes', 'images']
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:06 compute-0 groupadd[249526]: group added to /etc/gshadow: name=nova
Nov 29 07:37:06 compute-0 groupadd[249526]: new group: name=nova, GID=42436
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:37:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:37:06 compute-0 sudo[249523]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d6fc67e19a8011cc5f6c7ea1c523c404467925061495a43cdea1bf46e10879f-merged.mount: Deactivated successfully.
Nov 29 07:37:07 compute-0 ceph-mon[75050]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:07 compute-0 ceph-mon[75050]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:07 compute-0 sudo[249697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikdovxiiceooofjhvwfwmcpltliycydp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401827.0528646-1224-145385870530351/AnsiballZ_user.py'
Nov 29 07:37:07 compute-0 sudo[249697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:07 compute-0 podman[249351]: 2025-11-29 07:37:07.691944633 +0000 UTC m=+5.507890394 container remove 412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:37:07 compute-0 systemd[1]: libpod-conmon-412b0e5e8bdc81125d97cdb760bc94ef7a1ab6b4fb4636bbc90002df9e810990.scope: Deactivated successfully.
Nov 29 07:37:07 compute-0 python3.9[249699]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 07:37:07 compute-0 podman[249707]: 2025-11-29 07:37:07.907861612 +0000 UTC m=+0.045543917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:08 compute-0 podman[249707]: 2025-11-29 07:37:08.226488656 +0000 UTC m=+0.364170971 container create fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackwell, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:37:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:08 compute-0 ceph-mon[75050]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:08 compute-0 systemd[1]: Started libpod-conmon-fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097.scope.
Nov 29 07:37:08 compute-0 useradd[249716]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 29 07:37:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afa5599cfc33edfc29c96d10b6666955f4f2da220a5a2c2d1f10ba052e8a0e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afa5599cfc33edfc29c96d10b6666955f4f2da220a5a2c2d1f10ba052e8a0e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afa5599cfc33edfc29c96d10b6666955f4f2da220a5a2c2d1f10ba052e8a0e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afa5599cfc33edfc29c96d10b6666955f4f2da220a5a2c2d1f10ba052e8a0e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6afa5599cfc33edfc29c96d10b6666955f4f2da220a5a2c2d1f10ba052e8a0e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:08 compute-0 useradd[249716]: add 'nova' to group 'libvirt'
Nov 29 07:37:08 compute-0 useradd[249716]: add 'nova' to shadow group 'libvirt'
Nov 29 07:37:09 compute-0 podman[249707]: 2025-11-29 07:37:09.193352352 +0000 UTC m=+1.331034697 container init fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackwell, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:37:09 compute-0 podman[249707]: 2025-11-29 07:37:09.20659854 +0000 UTC m=+1.344280855 container start fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:37:09 compute-0 sshd-session[249540]: Connection closed by authenticating user root 143.14.121.41 port 47082 [preauth]
Nov 29 07:37:09 compute-0 podman[249707]: 2025-11-29 07:37:09.875405297 +0000 UTC m=+2.013087592 container attach fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackwell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:37:10 compute-0 ceph-mon[75050]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:10 compute-0 jovial_blackwell[249726]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:37:10 compute-0 jovial_blackwell[249726]: --> relative data size: 1.0
Nov 29 07:37:10 compute-0 jovial_blackwell[249726]: --> All data devices are unavailable
Nov 29 07:37:10 compute-0 systemd[1]: libpod-fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097.scope: Deactivated successfully.
Nov 29 07:37:10 compute-0 systemd[1]: libpod-fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097.scope: Consumed 1.254s CPU time.
Nov 29 07:37:10 compute-0 podman[249763]: 2025-11-29 07:37:10.607205266 +0000 UTC m=+0.066922763 container died fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackwell, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:37:10 compute-0 sudo[249697]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6afa5599cfc33edfc29c96d10b6666955f4f2da220a5a2c2d1f10ba052e8a0e7-merged.mount: Deactivated successfully.
Nov 29 07:37:11 compute-0 ceph-mon[75050]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:11 compute-0 sshd-session[249802]: Accepted publickey for zuul from 192.168.122.30 port 42860 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 07:37:11 compute-0 systemd-logind[807]: New session 50 of user zuul.
Nov 29 07:37:11 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 29 07:37:11 compute-0 sshd-session[249802]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 07:37:11 compute-0 sshd-session[249805]: Received disconnect from 192.168.122.30 port 42860:11: disconnected by user
Nov 29 07:37:11 compute-0 sshd-session[249805]: Disconnected from user zuul 192.168.122.30 port 42860
Nov 29 07:37:11 compute-0 sshd-session[249802]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:37:11 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 07:37:11 compute-0 systemd-logind[807]: Session 50 logged out. Waiting for processes to exit.
Nov 29 07:37:11 compute-0 systemd-logind[807]: Removed session 50.
Nov 29 07:37:12 compute-0 podman[249763]: 2025-11-29 07:37:12.007237777 +0000 UTC m=+1.466955274 container remove fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:37:12 compute-0 systemd[1]: libpod-conmon-fbc08070c097cb5a4e86a940278b35faa35c773c3e28507cd4e03bd40663e097.scope: Deactivated successfully.
Nov 29 07:37:12 compute-0 sudo[249209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:12 compute-0 sudo[249853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:12 compute-0 sudo[249853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:12 compute-0 sudo[249853]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:12 compute-0 sudo[249907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:12 compute-0 sudo[249907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:12 compute-0 sshd-session[249731]: Connection closed by authenticating user root 143.14.121.41 port 47096 [preauth]
Nov 29 07:37:12 compute-0 sudo[249907]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:12 compute-0 sudo[249955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:12 compute-0 sudo[249955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:12 compute-0 sudo[249955]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:12 compute-0 sudo[249980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:37:12 compute-0 sudo[249980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:12 compute-0 python3.9[250055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:12 compute-0 podman[250098]: 2025-11-29 07:37:12.677309187 +0000 UTC m=+0.027734571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:13 compute-0 podman[250098]: 2025-11-29 07:37:13.013516251 +0000 UTC m=+0.363941585 container create bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:13 compute-0 python3.9[250232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401832.0786943-1249-96694514574623/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:13 compute-0 ceph-mon[75050]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:13 compute-0 systemd[1]: Started libpod-conmon-bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141.scope.
Nov 29 07:37:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:13 compute-0 podman[250098]: 2025-11-29 07:37:13.444521982 +0000 UTC m=+0.794947346 container init bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:37:13 compute-0 podman[250098]: 2025-11-29 07:37:13.452707252 +0000 UTC m=+0.803132626 container start bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_euler, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:37:13 compute-0 unruffled_euler[250242]: 167 167
Nov 29 07:37:13 compute-0 systemd[1]: libpod-bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141.scope: Deactivated successfully.
Nov 29 07:37:13 compute-0 python3.9[250398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:14 compute-0 podman[250098]: 2025-11-29 07:37:14.256052922 +0000 UTC m=+1.606478286 container attach bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_euler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:37:14 compute-0 podman[250098]: 2025-11-29 07:37:14.258110905 +0000 UTC m=+1.608536259 container died bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:37:14 compute-0 python3.9[250474]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d62ec24ff927f93d388d9b8ecffbf797c7ddcd2085b6ba0d53804c0ea1b0bf8f-merged.mount: Deactivated successfully.
Nov 29 07:37:14 compute-0 podman[250098]: 2025-11-29 07:37:14.472260366 +0000 UTC m=+1.822685700 container remove bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_euler, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:37:14 compute-0 systemd[1]: libpod-conmon-bc964f23b54dfdc1bd664b39baf15d0f405611e03b36c5245a3903793e9c1141.scope: Deactivated successfully.
Nov 29 07:37:14 compute-0 podman[250583]: 2025-11-29 07:37:14.664138746 +0000 UTC m=+0.020305361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:14 compute-0 podman[250583]: 2025-11-29 07:37:14.739118525 +0000 UTC m=+0.095285120 container create 8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:37:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:37:14 compute-0 python3.9[250647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:15 compute-0 systemd[1]: Started libpod-conmon-8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e.scope.
Nov 29 07:37:15 compute-0 python3.9[250769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401834.4547231-1249-87549695710000/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d11f2703e671bc8a1c8d04894f486f71b77c59e0d9d4c416d55b3f3d6ec7de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d11f2703e671bc8a1c8d04894f486f71b77c59e0d9d4c416d55b3f3d6ec7de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d11f2703e671bc8a1c8d04894f486f71b77c59e0d9d4c416d55b3f3d6ec7de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d11f2703e671bc8a1c8d04894f486f71b77c59e0d9d4c416d55b3f3d6ec7de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:15 compute-0 ceph-mon[75050]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:16 compute-0 podman[250583]: 2025-11-29 07:37:16.020717846 +0000 UTC m=+1.376884471 container init 8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:37:16 compute-0 podman[250583]: 2025-11-29 07:37:16.031295616 +0000 UTC m=+1.387462241 container start 8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:37:16 compute-0 podman[250583]: 2025-11-29 07:37:16.221676078 +0000 UTC m=+1.577842693 container attach 8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:16 compute-0 python3.9[250926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:16 compute-0 sshd-session[250545]: Invalid user git from 143.14.121.41 port 40472
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]: {
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:     "0": [
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:         {
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "devices": [
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "/dev/loop3"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             ],
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_name": "ceph_lv0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_size": "21470642176",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "name": "ceph_lv0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "tags": {
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cluster_name": "ceph",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.crush_device_class": "",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.encrypted": "0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osd_id": "0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.type": "block",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.vdo": "0"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             },
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "type": "block",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "vg_name": "ceph_vg0"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:         }
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:     ],
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:     "1": [
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:         {
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "devices": [
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "/dev/loop4"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             ],
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_name": "ceph_lv1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_size": "21470642176",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "name": "ceph_lv1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "tags": {
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cluster_name": "ceph",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.crush_device_class": "",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.encrypted": "0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osd_id": "1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.type": "block",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.vdo": "0"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             },
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "type": "block",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "vg_name": "ceph_vg1"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:         }
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:     ],
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:     "2": [
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:         {
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "devices": [
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "/dev/loop5"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             ],
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_name": "ceph_lv2",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_size": "21470642176",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "name": "ceph_lv2",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "tags": {
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.cluster_name": "ceph",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.crush_device_class": "",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.encrypted": "0",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osd_id": "2",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.type": "block",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:                 "ceph.vdo": "0"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             },
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "type": "block",
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:             "vg_name": "ceph_vg2"
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:         }
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]:     ]
Nov 29 07:37:16 compute-0 fervent_maxwell[250772]: }
Nov 29 07:37:16 compute-0 systemd[1]: libpod-8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e.scope: Deactivated successfully.
Nov 29 07:37:16 compute-0 podman[250583]: 2025-11-29 07:37:16.860724254 +0000 UTC m=+2.216890879 container died 8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:37:16 compute-0 sshd-session[250545]: Connection closed by invalid user git 143.14.121.41 port 40472 [preauth]
Nov 29 07:37:16 compute-0 ceph-mon[75050]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:17 compute-0 python3.9[251051]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401835.898904-1249-58455256854400/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d11f2703e671bc8a1c8d04894f486f71b77c59e0d9d4c416d55b3f3d6ec7de-merged.mount: Deactivated successfully.
Nov 29 07:37:17 compute-0 podman[250583]: 2025-11-29 07:37:17.536236553 +0000 UTC m=+2.892403138 container remove 8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_maxwell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:37:17 compute-0 podman[251190]: 2025-11-29 07:37:17.580549757 +0000 UTC m=+0.109679529 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:17 compute-0 sudo[249980]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:17 compute-0 podman[251189]: 2025-11-29 07:37:17.626611666 +0000 UTC m=+0.155263445 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:37:17 compute-0 systemd[1]: libpod-conmon-8d7bdb528e678acdbcd1cd31befdb2c60907b344300a5ec21052b542ebad010e.scope: Deactivated successfully.
Nov 29 07:37:17 compute-0 sudo[251256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:17 compute-0 sudo[251256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:17 compute-0 sudo[251256]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:17 compute-0 sudo[251284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:37:17 compute-0 sudo[251284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:17 compute-0 sudo[251284]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:17 compute-0 python3.9[251233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:17 compute-0 sudo[251309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:17 compute-0 sudo[251309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:17 compute-0 sudo[251309]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:17 compute-0 sudo[251339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:37:17 compute-0 sudo[251339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:18 compute-0 python3.9[251507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401837.1927838-1249-72341095262200/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:18 compute-0 podman[251520]: 2025-11-29 07:37:18.250126364 +0000 UTC m=+0.038338473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:18 compute-0 podman[251520]: 2025-11-29 07:37:18.469341254 +0000 UTC m=+0.257553273 container create bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:37:18 compute-0 systemd[1]: Started libpod-conmon-bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5.scope.
Nov 29 07:37:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:18 compute-0 podman[251520]: 2025-11-29 07:37:18.868774116 +0000 UTC m=+0.656986225 container init bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:37:18 compute-0 podman[251520]: 2025-11-29 07:37:18.879759977 +0000 UTC m=+0.667971996 container start bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:37:18 compute-0 youthful_kare[251669]: 167 167
Nov 29 07:37:18 compute-0 systemd[1]: libpod-bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5.scope: Deactivated successfully.
Nov 29 07:37:18 compute-0 python3.9[251688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:19 compute-0 podman[251520]: 2025-11-29 07:37:19.065630465 +0000 UTC m=+0.853842524 container attach bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:37:19 compute-0 podman[251520]: 2025-11-29 07:37:19.066166579 +0000 UTC m=+0.854378638 container died bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:37:19 compute-0 python3.9[251821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401838.454101-1249-198256591539664/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-925d56eced831c8ef6c20a1e726e5e22e52cdbd5fac1c0ef9aeadc3c800f2073-merged.mount: Deactivated successfully.
Nov 29 07:37:19 compute-0 sshd-session[251088]: Invalid user admin from 143.14.121.41 port 40488
Nov 29 07:37:19 compute-0 ceph-mon[75050]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:20 compute-0 sshd-session[251088]: Connection closed by invalid user admin 143.14.121.41 port 40488 [preauth]
Nov 29 07:37:20 compute-0 sudo[251972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvxdzbzcpwvrzzcqqqnahrcdjfwluju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401839.7529163-1332-71195170464399/AnsiballZ_file.py'
Nov 29 07:37:20 compute-0 sudo[251972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:20 compute-0 podman[251520]: 2025-11-29 07:37:20.257775596 +0000 UTC m=+2.045987655 container remove bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:37:20 compute-0 systemd[1]: libpod-conmon-bacb6af0abd61554dec2612b577845422ba8df70ce95e18d1ea6b2b1b2c914a5.scope: Deactivated successfully.
Nov 29 07:37:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:20 compute-0 python3.9[251974]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:20 compute-0 sudo[251972]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:20 compute-0 podman[251983]: 2025-11-29 07:37:20.428082265 +0000 UTC m=+0.028743007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:37:20 compute-0 podman[251983]: 2025-11-29 07:37:20.821281668 +0000 UTC m=+0.421942360 container create 46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:37:20 compute-0 systemd[1]: Started libpod-conmon-46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236.scope.
Nov 29 07:37:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273904cfb9f1881ec24f509aab2244efda4d58608b26cb9153bb6652e15dac24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273904cfb9f1881ec24f509aab2244efda4d58608b26cb9153bb6652e15dac24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273904cfb9f1881ec24f509aab2244efda4d58608b26cb9153bb6652e15dac24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273904cfb9f1881ec24f509aab2244efda4d58608b26cb9153bb6652e15dac24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:37:20 compute-0 ceph-mon[75050]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:20 compute-0 podman[251983]: 2025-11-29 07:37:20.965590122 +0000 UTC m=+0.566250834 container init 46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:37:20 compute-0 podman[251983]: 2025-11-29 07:37:20.973378061 +0000 UTC m=+0.574038743 container start 46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:37:21 compute-0 sudo[252154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kabjvtdsuysmiedterqckcmqwzciegrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401840.6972709-1340-89006368612495/AnsiballZ_copy.py'
Nov 29 07:37:21 compute-0 podman[251983]: 2025-11-29 07:37:21.008437528 +0000 UTC m=+0.609098180 container attach 46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:37:21 compute-0 sudo[252154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:21 compute-0 python3.9[252156]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:37:21 compute-0 sudo[252154]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:21 compute-0 sudo[252310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewcegwglbhlzcjphodqsuiqfdmqyxan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401841.3932567-1348-103119700792363/AnsiballZ_stat.py'
Nov 29 07:37:21 compute-0 sudo[252310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:21 compute-0 python3.9[252313]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:37:21 compute-0 sudo[252310]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]: {
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "osd_id": 2,
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "type": "bluestore"
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:     },
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "osd_id": 1,
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "type": "bluestore"
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:     },
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "osd_id": 0,
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:         "type": "bluestore"
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]:     }
Nov 29 07:37:22 compute-0 compassionate_feynman[252103]: }
Nov 29 07:37:22 compute-0 systemd[1]: libpod-46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236.scope: Deactivated successfully.
Nov 29 07:37:22 compute-0 conmon[252103]: conmon 46f748bafe7204a60006 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236.scope/container/memory.events
Nov 29 07:37:22 compute-0 systemd[1]: libpod-46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236.scope: Consumed 1.102s CPU time.
Nov 29 07:37:22 compute-0 podman[251983]: 2025-11-29 07:37:22.076699719 +0000 UTC m=+1.677360441 container died 46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:37:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:22 compute-0 sshd-session[251975]: Invalid user user from 143.14.121.41 port 40504
Nov 29 07:37:22 compute-0 sudo[252498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocfrtoztdbnnhjtonbaifzvoltnsfkty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401842.1256237-1356-138039680903454/AnsiballZ_stat.py'
Nov 29 07:37:22 compute-0 sudo[252498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-273904cfb9f1881ec24f509aab2244efda4d58608b26cb9153bb6652e15dac24-merged.mount: Deactivated successfully.
Nov 29 07:37:22 compute-0 python3.9[252500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:22 compute-0 sudo[252498]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:22 compute-0 podman[251983]: 2025-11-29 07:37:22.801627591 +0000 UTC m=+2.402288283 container remove 46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:37:22 compute-0 systemd[1]: libpod-conmon-46f748bafe7204a60006de76c1dabb60ac5b60dc50e166594e426f753e755236.scope: Deactivated successfully.
Nov 29 07:37:22 compute-0 sshd-session[251975]: Connection closed by invalid user user 143.14.121.41 port 40504 [preauth]
Nov 29 07:37:22 compute-0 sudo[251339]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:37:23 compute-0 sudo[252623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhybejzygfpsaklawbakrjzkizzzlzsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401842.1256237-1356-138039680903454/AnsiballZ_copy.py'
Nov 29 07:37:23 compute-0 sudo[252623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:23 compute-0 python3.9[252626]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764401842.1256237-1356-138039680903454/.source _original_basename=.85mbld8y follow=False checksum=6623b9896084c3e685356b31b6e6730f3bd83be3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 07:37:23 compute-0 sudo[252623]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:37:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:23 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7f389994-2fa6-4d49-8acb-19981d587c9e does not exist
Nov 29 07:37:23 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5270a81f-1a7e-4091-8b2e-6ca92f101df2 does not exist
Nov 29 07:37:23 compute-0 sudo[252684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:37:23 compute-0 sudo[252684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:23 compute-0 sudo[252684]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:23 compute-0 sudo[252737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:37:23 compute-0 sudo[252737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:37:23 compute-0 sudo[252737]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:23 compute-0 ceph-mon[75050]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:24 compute-0 python3.9[252828]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:37:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:24 compute-0 python3.9[252980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:25 compute-0 python3.9[253101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401844.3795214-1382-235802717584097/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:37:25 compute-0 ceph-mon[75050]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:26 compute-0 python3.9[253251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 07:37:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:26 compute-0 sshd-session[252596]: Connection closed by authenticating user root 143.14.121.41 port 55982 [preauth]
Nov 29 07:37:26 compute-0 python3.9[253372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401845.61219-1397-11152023568198/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 07:37:27 compute-0 sudo[253523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzdmuxvsotniyxoeahbcijpownqucuff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401846.986576-1414-38532321103494/AnsiballZ_container_config_data.py'
Nov 29 07:37:27 compute-0 sudo[253523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:27 compute-0 python3.9[253525]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 07:37:27 compute-0 sudo[253523]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:27 compute-0 ceph-mon[75050]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:28 compute-0 sudo[253676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmkbihlxesfjrhmrxwwqptsbfpqibled ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401847.7448046-1423-238883856310413/AnsiballZ_container_config_hash.py'
Nov 29 07:37:28 compute-0 sudo[253676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:28 compute-0 python3.9[253678]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:37:28 compute-0 sudo[253676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:37:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:28 compute-0 sudo[253828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwpfebukstulkpdfdveadnsgpdxmqydv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401848.5531871-1433-231502015797922/AnsiballZ_edpm_container_manage.py'
Nov 29 07:37:28 compute-0 sudo[253828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:37:29 compute-0 ceph-mon[75050]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:29 compute-0 python3[253830]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:37:29 compute-0 sshd-session[253373]: Connection closed by authenticating user root 143.14.121.41 port 55986 [preauth]
Nov 29 07:37:29 compute-0 podman[253856]: 2025-11-29 07:37:29.723896345 +0000 UTC m=+0.078831709 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:37:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:31 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 07:37:31 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:31.037770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:37:31 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 07:37:31 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401851037872, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2283, "num_deletes": 506, "total_data_size": 3494844, "memory_usage": 3558624, "flush_reason": "Manual Compaction"}
Nov 29 07:37:31 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 07:37:32 compute-0 sshd-session[253857]: Connection closed by authenticating user root 143.14.121.41 port 55996 [preauth]
Nov 29 07:37:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401853842523, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 3419963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13069, "largest_seqno": 15351, "table_properties": {"data_size": 3409882, "index_size": 5935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 22988, "raw_average_key_size": 19, "raw_value_size": 3387775, "raw_average_value_size": 2813, "num_data_blocks": 268, "num_entries": 1204, "num_filter_entries": 1204, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401528, "oldest_key_time": 1764401528, "file_creation_time": 1764401851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 2804841 microseconds, and 7568 cpu microseconds.
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:37:33 compute-0 ceph-mon[75050]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:33.842627) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 3419963 bytes OK
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:33.842654) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:33.942906) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:33.943031) EVENT_LOG_v1 {"time_micros": 1764401853943014, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:33.943081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3484093, prev total WAL file size 3485248, number of live WAL files 2.
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:33.944935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(3339KB)], [32(6917KB)]
Nov 29 07:37:33 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401853945045, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10503376, "oldest_snapshot_seqno": -1}
Nov 29 07:37:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:34 compute-0 sshd-session[253890]: Connection closed by authenticating user root 143.14.121.41 port 40534 [preauth]
Nov 29 07:37:35 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4089 keys, 8386949 bytes, temperature: kUnknown
Nov 29 07:37:35 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401855004893, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8386949, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8356979, "index_size": 18660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 100524, "raw_average_key_size": 24, "raw_value_size": 8280323, "raw_average_value_size": 2025, "num_data_blocks": 792, "num_entries": 4089, "num_filter_entries": 4089, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764401853, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:37:35 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:37:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:37:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:37:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:37 compute-0 sshd-session[253909]: Connection closed by authenticating user root 143.14.121.41 port 40536 [preauth]
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:35.005324) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8386949 bytes
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:37.991641) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 9.9 rd, 7.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.8 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 5114, records dropped: 1025 output_compression: NoCompression
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:37.991688) EVENT_LOG_v1 {"time_micros": 1764401857991670, "job": 14, "event": "compaction_finished", "compaction_time_micros": 1060003, "compaction_time_cpu_micros": 22821, "output_level": 6, "num_output_files": 1, "total_output_size": 8386949, "num_input_records": 5114, "num_output_records": 4089, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401857993257, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401857995263, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:33.944783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:37.995302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:37.995307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:37.995308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:37.995310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:37:37 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:37:37.995311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
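The JOB 14 compaction summary above is internally consistent: the amplification and throughput figures can be re-derived from the byte counts and compaction_time_micros printed in the same lines. A quick check in Python, every input copied verbatim from the log:

    # Figures from the rocksdb JOB 14 lines above.
    input_l0 = 3419963               # table #34, the Level-0 input
    input_all = 10503376             # input_data_size (tables #34 + #32)
    output = 8386949                 # table #35, the compacted L6 output
    t_us = 1060003                   # compaction_time_micros

    # write-amplify = output bytes / L0 input bytes
    print(round(output / input_l0, 1))                  # 2.5
    # read-write-amplify counts every byte read plus every byte written
    print(round((input_all + output) / input_l0, 1))    # 5.5
    # Throughput over wall time, bytes/microsecond == MB/s
    print(round(input_all / t_us, 1), round(output / t_us, 1))  # 9.9 7.9

The record counts line up the same way: 5114 input records minus the 1025 dropped leaves exactly the 4089 keys reported for output table #35.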
Nov 29 07:37:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:39 compute-0 sshd-session[253912]: Connection closed by authenticating user root 143.14.121.41 port 40540 [preauth]
Nov 29 07:37:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:42 compute-0 sshd-session[253914]: Connection closed by authenticating user root 143.14.121.41 port 40554 [preauth]
Nov 29 07:37:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:44 compute-0 sshd-session[253916]: Invalid user deployer from 143.14.121.41 port 60264
Nov 29 07:37:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:45 compute-0 sshd-session[253916]: Connection closed by invalid user deployer 143.14.121.41 port 60264 [preauth]
Nov 29 07:37:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:47 compute-0 sshd-session[253918]: Invalid user 1 from 143.14.121.41 port 60280
Nov 29 07:37:47 compute-0 sshd-session[253918]: Connection closed by invalid user 1 143.14.121.41 port 60280 [preauth]
Nov 29 07:37:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:49 compute-0 podman[253921]: 2025-11-29 07:37:49.346578531 +0000 UTC m=+1.715768724 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 07:37:49 compute-0 podman[253932]: 2025-11-29 07:37:49.386237496 +0000 UTC m=+0.762671230 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:37:49 compute-0 sshd-session[253920]: Invalid user test1 from 143.14.121.41 port 60290
Nov 29 07:37:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:50 compute-0 sshd-session[253920]: Connection closed by invalid user test1 143.14.121.41 port 60290 [preauth]
Nov 29 07:37:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:53 compute-0 sshd-session[253967]: Connection closed by authenticating user root 143.14.121.41 port 60292 [preauth]
Nov 29 07:37:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:37:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:57 compute-0 sshd-session[253969]: Connection closed by authenticating user root 143.14.121.41 port 54126 [preauth]
Nov 29 07:37:57 compute-0 ceph-mon[75050]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:37:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:37:59.751 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:37:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:37:59.752 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:37:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:37:59.752 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:38:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:00 compute-0 sshd-session[253985]: Connection closed by authenticating user root 143.14.121.41 port 54138 [preauth]
Nov 29 07:38:01 compute-0 anacron[30860]: Job `cron.weekly' started
Nov 29 07:38:02 compute-0 anacron[30860]: Job `cron.weekly' terminated
Nov 29 07:38:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:04 compute-0 sshd-session[253999]: Connection closed by authenticating user root 143.14.121.41 port 54150 [preauth]
Nov 29 07:38:04 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:38:05
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:38:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:38:07 compute-0 sshd-session[254005]: Connection closed by authenticating user root 143.14.121.41 port 35350 [preauth]
Nov 29 07:38:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.4925 seconds
Nov 29 07:38:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 get_health_metrics reporting 2 slow ops, oldest is log(1 entries from seq 824 at 2025-11-29T07:37:34.259626+0000)
Nov 29 07:38:07 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0[75046]: 2025-11-29T07:38:07.550+0000 7f27ac53a640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 2 slow ops, oldest is log(1 entries from seq 824 at 2025-11-29T07:37:34.259626+0000)
Nov 29 07:38:07 compute-0 podman[253988]: 2025-11-29 07:38:07.627829858 +0000 UTC m=+7.005681160 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:07 compute-0 ceph-mon[75050]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:08 compute-0 podman[253843]: 2025-11-29 07:38:08.316256806 +0000 UTC m=+39.012701291 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:38:08 compute-0 podman[254040]: 2025-11-29 07:38:08.464296275 +0000 UTC m=+0.022201089 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:38:09 compute-0 ceph-mon[75050]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:09 compute-0 ceph-mon[75050]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:09 compute-0 ceph-mon[75050]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:09 compute-0 podman[254040]: 2025-11-29 07:38:09.199757369 +0000 UTC m=+0.757662173 container create 9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:38:09 compute-0 python3[253830]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 07:38:09 compute-0 sudo[253828]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:09 compute-0 sudo[254229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vihwthqsaiqpzdsxxfougarblypyelzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401889.5612988-1441-269185306775099/AnsiballZ_stat.py'
Nov 29 07:38:09 compute-0 sudo[254229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:10 compute-0 sshd-session[254007]: Connection closed by authenticating user root 143.14.121.41 port 35364 [preauth]
Nov 29 07:38:10 compute-0 python3.9[254231]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:10 compute-0 ceph-mon[75050]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:10 compute-0 sudo[254229]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:10 compute-0 sudo[254385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cidrpxhmyofntystjfsnlmyjjvxtglga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401890.672711-1453-269342168888114/AnsiballZ_container_config_data.py'
Nov 29 07:38:10 compute-0 sudo[254385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:11 compute-0 python3.9[254387]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 07:38:11 compute-0 sudo[254385]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:11 compute-0 ceph-mon[75050]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:11 compute-0 sudo[254537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbjrkahthzrmfxlqpjbbsdenezzwonvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401891.4262886-1462-18100188561939/AnsiballZ_container_config_hash.py'
Nov 29 07:38:11 compute-0 sudo[254537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:11 compute-0 python3.9[254539]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 07:38:11 compute-0 sudo[254537]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [WRN] : Health check failed: 2 slow ops, oldest one blocked for 32 sec, mon.compute-0 has slow ops (SLOW_OPS)
Nov 29 07:38:12 compute-0 ceph-mon[75050]: Health check failed: 2 slow ops, oldest one blocked for 32 sec, mon.compute-0 has slow ops (SLOW_OPS)
Nov 29 07:38:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:12 compute-0 sudo[254689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywohtqbbsnqrqjvnrjhurawfnefxcfpa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764401892.2462528-1472-237027039660577/AnsiballZ_edpm_container_manage.py'
Nov 29 07:38:12 compute-0 sudo[254689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:12 compute-0 python3[254691]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 07:38:13 compute-0 podman[254728]: 2025-11-29 07:38:13.07507188 +0000 UTC m=+0.066197445 container create c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:38:13 compute-0 podman[254728]: 2025-11-29 07:38:13.031538126 +0000 UTC m=+0.022663721 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 07:38:13 compute-0 python3[254691]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 07:38:13 compute-0 sudo[254689]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:13 compute-0 ceph-mon[75050]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:13 compute-0 sudo[254916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpzuxyplrrkfyzdjgksllanoczoxcmje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401893.4173224-1480-251435819794019/AnsiballZ_stat.py'
Nov 29 07:38:13 compute-0 sudo[254916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:13 compute-0 python3.9[254918]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:13 compute-0 sudo[254916]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:14 compute-0 sshd-session[254258]: Connection closed by authenticating user root 143.14.121.41 port 35366 [preauth]
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:14 compute-0 sudo[255071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plgfqujjtndgezzlqmoybpjfyxkoabhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401894.16709-1489-91793632298263/AnsiballZ_file.py'
Nov 29 07:38:14 compute-0 sudo[255071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:14 compute-0 python3.9[255073]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:14 compute-0 sudo[255071]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:38:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
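[Editor's note] The pg_autoscaler lines above report, per pool, a usage ratio, a bias, and a pg target. Assuming 3 OSDs and the default mon_target_pg_per_osd=100 (a 300-PG budget; an assumption, not stated in the log), the logged targets reproduce exactly as usage × bias × 300:

```python
# Back-of-the-envelope check of the pg_autoscaler arithmetic logged above.
# Assumption: 3 OSDs * mon_target_pg_per_osd=100 -> a 300-PG budget per root.
POOLS = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    ".rgw.root":          (2.5436283128215145e-07, 1.0),
    "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
}
PG_BUDGET = 3 * 100  # osd count * mon_target_pg_per_osd (assumed)

for pool, (usage_ratio, bias) in POOLS.items():
    print(f"{pool}: pg target {usage_ratio * bias * PG_BUDGET}")
```

The computed values match the log (e.g. 7.185749983720779e-06 * 300 = 0.0021557249951162337 for '.mgr'). The target is then quantized, and pg_num is only actually changed when it differs from the current value by a large factor, which is why every pool here stays at its current 1/16/32 placement groups.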
Nov 29 07:38:15 compute-0 sudo[255223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziyeprejkngwslmbqwfycijbbobrxbtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401894.8033278-1489-9380774896358/AnsiballZ_copy.py'
Nov 29 07:38:15 compute-0 sudo[255223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:15 compute-0 python3.9[255225]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401894.8033278-1489-9380774896358/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 07:38:15 compute-0 sudo[255223]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:15 compute-0 ceph-mon[75050]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:15 compute-0 sudo[255299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfahryslnhfpsbxrgxkxxfqqwauqdkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401894.8033278-1489-9380774896358/AnsiballZ_systemd.py'
Nov 29 07:38:15 compute-0 sudo[255299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:16 compute-0 python3.9[255301]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 07:38:16 compute-0 systemd[1]: Reloading.
Nov 29 07:38:16 compute-0 systemd-sysv-generator[255332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 07:38:16 compute-0 systemd-rc-local-generator[255327]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:38:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:16 compute-0 sudo[255299]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:16 compute-0 sudo[255409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbwtlluzijiqpswzlakhsbxygddrsfvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401894.8033278-1489-9380774896358/AnsiballZ_systemd.py'
Nov 29 07:38:16 compute-0 sudo[255409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:17 compute-0 python3.9[255411]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 07:38:17 compute-0 systemd[1]: Reloading.
Nov 29 07:38:17 compute-0 systemd-rc-local-generator[255439]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 07:38:17 compute-0 systemd-sysv-generator[255444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
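[Editor's note] The two ansible-systemd invocations above (first daemon_reload=True, then state=restarted enabled=True for edpm_nova_compute.service) amount to three systemctl calls on the host; the restart is what produces the "Starting nova_compute container..." line that follows. A minimal sketch, assuming plain systemctl semantics (ansible's module adds error handling and change detection):

```python
# Rough host-level equivalent of the two ansible-systemd tasks logged above.
import subprocess


def systemctl(*args: str) -> None:
    subprocess.run(["systemctl", *args], check=True)


systemctl("daemon-reload")                         # daemon_reload=True
systemctl("enable", "edpm_nova_compute.service")   # enabled=True
systemctl("restart", "edpm_nova_compute.service")  # state=restarted
```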
Nov 29 07:38:17 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 07:38:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:17 compute-0 ceph-mon[75050]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:17 compute-0 sshd-session[255032]: Connection closed by authenticating user root 143.14.121.41 port 48286 [preauth]
Nov 29 07:38:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:17 compute-0 podman[255451]: 2025-11-29 07:38:17.652417489 +0000 UTC m=+0.108000615 container init c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute)
Nov 29 07:38:17 compute-0 podman[255451]: 2025-11-29 07:38:17.661765109 +0000 UTC m=+0.117348225 container start c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute)
Nov 29 07:38:17 compute-0 podman[255451]: nova_compute
Nov 29 07:38:17 compute-0 systemd[1]: Started nova_compute container.
Nov 29 07:38:17 compute-0 nova_compute[255466]: + sudo -E kolla_set_configs
Nov 29 07:38:17 compute-0 sudo[255409]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Validating config file
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying service configuration files
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Deleting /etc/ceph
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Creating directory /etc/ceph
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Writing out command to execute
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:17 compute-0 nova_compute[255466]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:38:17 compute-0 nova_compute[255466]: ++ cat /run_command
Nov 29 07:38:17 compute-0 nova_compute[255466]: + CMD=nova-compute
Nov 29 07:38:17 compute-0 nova_compute[255466]: + ARGS=
Nov 29 07:38:17 compute-0 nova_compute[255466]: + sudo kolla_copy_cacerts
Nov 29 07:38:17 compute-0 nova_compute[255466]: + [[ ! -n '' ]]
Nov 29 07:38:17 compute-0 nova_compute[255466]: + . kolla_extend_start
Nov 29 07:38:17 compute-0 nova_compute[255466]: Running command: 'nova-compute'
Nov 29 07:38:17 compute-0 nova_compute[255466]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 07:38:17 compute-0 nova_compute[255466]: + umask 0022
Nov 29 07:38:17 compute-0 nova_compute[255466]: + exec nova-compute
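[Editor's note] The container entrypoint trace above is the kolla pattern: kolla_set_configs reads /var/lib/kolla/config_files/config.json, and under COPY_ALWAYS deletes each destination, copies the source in, and fixes permissions; kolla_start then execs the command read from /run_command. A simplified sketch of that flow, assuming kolla's config.json schema of {"command": ..., "config_files": [{"source", "dest", "owner", "perm"}]} (the real tool also handles globs, optional sources, and ownership):

```python
# Simplified COPY_ALWAYS flow matching the INFO lines logged above.
import json
import os
import shutil


def set_configs(config_path="/var/lib/kolla/config_files/config.json"):
    with open(config_path) as f:
        spec = json.load(f)
    for entry in spec.get("config_files", []):
        src, dest = entry["source"], entry["dest"]
        if os.path.isdir(dest):
            shutil.rmtree(dest)            # "Deleting /etc/ceph"
        elif os.path.exists(dest):
            os.remove(dest)                # "Deleting /etc/nova/nova.conf"
        if os.path.isdir(src):
            shutil.copytree(src, dest)     # "Copying ... to ..."
        else:
            shutil.copy(src, dest)
        os.chmod(dest, int(entry.get("perm", "0600"), 8))  # "Setting permission"
    with open("/run_command", "w") as f:   # kolla_start: exec $(cat /run_command)
        f.write(spec["command"])
```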
Nov 29 07:38:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 2 slow ops, oldest one blocked for 32 sec, mon.compute-0 has slow ops)
Nov 29 07:38:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 07:38:18 compute-0 python3.9[255629]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:18 compute-0 ceph-mon[75050]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:18 compute-0 ceph-mon[75050]: Health check cleared: SLOW_OPS (was: 2 slow ops, oldest one blocked for 32 sec, mon.compute-0 has slow ops)
Nov 29 07:38:18 compute-0 ceph-mon[75050]: Cluster is now healthy
Nov 29 07:38:19 compute-0 python3.9[255780]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:19 compute-0 nova_compute[255466]: 2025-11-29 07:38:19.984 255470 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:38:19 compute-0 nova_compute[255466]: 2025-11-29 07:38:19.984 255470 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:38:19 compute-0 nova_compute[255466]: 2025-11-29 07:38:19.984 255470 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:38:19 compute-0 nova_compute[255466]: 2025-11-29 07:38:19.984 255470 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
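[Editor's note] The os_vif lines above show plugin classes (linux_bridge, noop, ovs) being discovered and loaded by name. os-vif does this through Python entry points; a stdlib sketch of that discovery pattern follows, where the group name "os_vif" is taken from the log but the loading details are an assumption:

```python
# Sketch of entry-point based plugin discovery, as in the os_vif lines above.
from importlib.metadata import entry_points


def load_plugins(group: str = "os_vif") -> dict:
    plugins = {}
    for ep in entry_points().get(group, []):  # Python 3.9 mapping-style API
        plugins[ep.name] = ep.load()          # import the plugin class
    return plugins


if __name__ == "__main__":
    loaded = load_plugins()
    print("Loaded VIF plugins:", ", ".join(sorted(loaded)) or "(none)")
```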
Nov 29 07:38:20 compute-0 nova_compute[255466]: 2025-11-29 07:38:20.122 255470 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:20 compute-0 nova_compute[255466]: 2025-11-29 07:38:20.152 255470 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:38:20 compute-0 nova_compute[255466]: 2025-11-29 07:38:20.152 255470 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
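[Editor's note] The three processutils lines above are a capability probe: grepping the iscsiadm binary for the string "node.session.scan" returns 0 if manual scan mode is supported and 1 if not; the rc=1 here means it is not (unsurprising, since /usr/sbin/iscsiadm in this container was replaced by the run-on-host shim earlier in the log). A minimal reproduction of the probe:

```python
# The probe logged above: grep exits 0 if the string occurs in the binary,
# 1 if it does not, so rc == 0 means manual iSCSI scan is supported.
import subprocess


def iscsiadm_supports_manual_scan(path: str = "/sbin/iscsiadm") -> bool:
    proc = subprocess.run(
        ["grep", "-F", "node.session.scan", path],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return proc.returncode == 0


print(iscsiadm_supports_manual_scan())  # the run above returned 1 -> False
```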
Nov 29 07:38:20 compute-0 python3.9[255932]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 07:38:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:20 compute-0 sshd-session[255502]: Connection closed by authenticating user root 143.14.121.41 port 48294 [preauth]
Nov 29 07:38:21 compute-0 sudo[256086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiaueplyihrrspuyntijfuwrbuyzpftv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401900.4695137-1549-136396829382644/AnsiballZ_podman_container.py'
Nov 29 07:38:21 compute-0 sudo[256086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:21 compute-0 python3.9[256088]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 07:38:21 compute-0 sudo[256086]: pam_unix(sudo:session): session closed for user root
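[Editor's note] The podman_container task above (name=nova_nvme_cleaner, state=absent, force_delete=True) is an idempotent "ensure removed". A rough host-level equivalent, assuming plain podman CLI semantics rather than the module's internals:

```python
# Rough equivalent of the state=absent task logged above: remove the
# container only if it exists; `podman rm -f` stops a running one first.
import subprocess


def ensure_container_absent(name: str) -> None:
    exists = subprocess.run(
        ["podman", "container", "exists", name]
    ).returncode == 0
    if exists:
        subprocess.run(["podman", "rm", "-f", name], check=True)


ensure_container_absent("nova_nvme_cleaner")
```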
Nov 29 07:38:21 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.635 255470 INFO nova.virt.driver [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.792 255470 INFO nova.compute.provider_config [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.813 255470 DEBUG oslo_concurrency.lockutils [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.813 255470 DEBUG oslo_concurrency.lockutils [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.813 255470 DEBUG oslo_concurrency.lockutils [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.814 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.814 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.814 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.814 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.814 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.814 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.815 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.815 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.815 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.815 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.815 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.816 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.816 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.816 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.816 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.816 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.817 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.817 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.817 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.817 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.817 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.817 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.818 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.818 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.818 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.818 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.818 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.819 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.819 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.819 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.819 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.819 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.820 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.820 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.820 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.820 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.820 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.821 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.821 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.821 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.822 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.822 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.822 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.822 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.822 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.823 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.823 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.823 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.823 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.823 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.824 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.824 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.824 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.824 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.824 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.825 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.825 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.825 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.825 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.825 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.825 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.826 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.826 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.826 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.826 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.826 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.827 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.827 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.827 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.827 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.827 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.827 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.828 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.828 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.828 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.828 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.828 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.829 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.829 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.829 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.829 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.829 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.830 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.830 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.830 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.830 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.830 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.831 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.831 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.831 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.831 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.831 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.831 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.832 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.832 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.832 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.832 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.833 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.833 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.833 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.833 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.834 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.834 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.834 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.834 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.835 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.835 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.835 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.835 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.836 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.836 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.836 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.836 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.836 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.837 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.837 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.837 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.837 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.837 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.837 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.838 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.839 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.840 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.840 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.840 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.840 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.840 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.840 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.840 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.841 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.842 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.842 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.842 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.842 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.842 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.842 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.843 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.844 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.844 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.844 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.844 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.844 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.844 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.844 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.845 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.845 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.845 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.845 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.845 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.845 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.845 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.846 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.847 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.847 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.847 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.847 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.847 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.847 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.847 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.848 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.848 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.848 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.848 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.848 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.848 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.848 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.849 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.850 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.850 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.850 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.850 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.850 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.850 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.850 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.851 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.852 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.852 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.852 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.852 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.852 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.852 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.852 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.853 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.854 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.854 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.854 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.854 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.854 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.854 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.854 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.855 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.856 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.856 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.856 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.856 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.856 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.856 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.856 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.857 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.858 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.858 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.858 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.858 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.858 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.858 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.858 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.859 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.860 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.860 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.860 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.860 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.860 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.860 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.861 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.861 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.861 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.861 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.861 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.861 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.861 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.862 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.862 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.862 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.862 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.862 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.862 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.862 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.863 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.863 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.863 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.863 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.863 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.863 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.863 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.864 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.864 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.864 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.864 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.864 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.864 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.865 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.865 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.865 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.865 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.865 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.865 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.866 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.866 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.866 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.866 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.866 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.866 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.866 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.867 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.867 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.867 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.867 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.867 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.867 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.867 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.868 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.868 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.868 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.868 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.868 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.868 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.868 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.869 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.869 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.869 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.869 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.869 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.869 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.869 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.870 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.870 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.870 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.870 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.870 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.870 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.871 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.871 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.871 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.871 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.871 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.871 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.871 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.872 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.872 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.872 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.872 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.872 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.872 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.872 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.873 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.873 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.873 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.873 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.873 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.873 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.873 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.874 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.875 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.875 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.875 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.875 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.875 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.875 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.875 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.876 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.876 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.876 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.876 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.876 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.876 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.876 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.877 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.877 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.877 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.877 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.877 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.877 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.878 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.878 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.878 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.878 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.878 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.878 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.878 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.879 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.879 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.879 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.879 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.880 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.880 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.880 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.880 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.880 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.881 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.881 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.881 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.881 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.881 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.882 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.882 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.882 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.882 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.882 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.883 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.883 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.883 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.883 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.883 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.884 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.884 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.884 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.884 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.884 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.885 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.885 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.885 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.885 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.886 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.886 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.886 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.886 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.886 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.887 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.887 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.887 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.887 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.887 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.888 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.888 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.888 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.888 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.888 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.888 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.889 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.889 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.889 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.889 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.889 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.890 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.890 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.890 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.890 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.890 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.891 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.891 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.891 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.891 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.891 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.892 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.892 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.892 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.892 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.892 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.893 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.893 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.893 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.893 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.894 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.894 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.894 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.894 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.894 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.895 255470 WARNING oslo_config.cfg [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 07:38:21 compute-0 nova_compute[255466]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 07:38:21 compute-0 nova_compute[255466]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 07:38:21 compute-0 nova_compute[255466]: and ``live_migration_inbound_addr`` respectively.
Nov 29 07:38:21 compute-0 nova_compute[255466]: ).  Its value may be silently ignored in the future.
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.895 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.895 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.896 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.896 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.896 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.896 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.896 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.897 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.897 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.897 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.897 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.897 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.898 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.898 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.898 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.898 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.899 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.899 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.899 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rbd_secret_uuid        = 14ff1f30-5059-58f1-9a23-69871bb275a1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.899 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.899 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.900 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.900 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.900 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.900 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.900 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.900 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.901 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.901 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.901 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.901 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.901 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.902 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.902 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.902 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.902 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.902 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.903 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.903 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.903 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.903 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.903 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.904 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.904 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.904 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.904 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.904 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.905 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.905 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.905 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.905 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.905 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.906 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.906 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.906 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.906 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.906 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.907 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.907 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.907 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.907 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.908 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.908 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.908 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.908 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.908 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.908 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.909 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.909 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.909 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.909 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.909 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.910 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.910 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.910 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.910 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.910 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.910 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.911 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.911 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.911 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.911 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.911 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.912 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.912 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.912 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.912 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.913 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.913 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.913 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.913 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.913 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.914 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.914 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.914 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.914 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.914 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.914 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.915 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.915 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.915 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.915 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.915 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.915 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.916 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.916 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.916 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.916 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.916 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.916 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.917 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.917 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.917 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.917 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.917 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.917 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.917 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.918 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.918 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.918 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.918 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.918 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.918 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.919 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.919 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.919 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.919 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.919 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.919 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.920 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.920 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.920 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.920 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.920 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.920 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.921 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.921 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.921 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.921 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.922 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.922 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.922 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.922 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.922 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.923 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.923 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.923 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.923 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.923 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.924 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.924 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.924 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.924 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.924 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.925 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.925 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.925 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.925 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.925 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.926 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.926 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.926 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.926 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.926 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.927 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.927 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.927 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.927 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.927 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.928 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.928 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.928 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.928 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.928 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.929 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.929 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.929 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.929 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.929 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.930 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.930 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.930 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.930 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.931 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.931 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.931 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.931 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.931 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.932 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.932 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.932 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.932 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.932 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.933 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.933 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.933 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.933 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.933 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.934 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.934 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.934 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.934 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.934 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.935 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.935 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.935 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.935 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.935 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.935 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.935 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.936 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.936 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.936 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.936 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.936 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.936 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.937 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.937 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.937 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.937 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.937 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.937 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.937 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.938 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.939 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.939 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.939 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.939 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.939 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.939 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.940 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.940 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.940 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.940 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.940 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.940 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.940 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.941 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.941 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.941 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.941 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.941 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.942 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.942 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.942 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.942 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.942 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.942 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.942 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.943 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.943 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.943 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.943 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.943 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.943 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.943 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.944 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.944 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.944 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.944 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.944 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.944 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.944 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.945 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.945 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.945 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.945 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.945 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.945 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.945 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.946 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.946 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.946 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.946 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.946 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.946 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.947 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.947 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.947 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.947 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.947 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.947 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.947 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.948 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.948 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.948 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.948 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.948 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.948 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.948 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.949 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.949 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.949 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.949 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.949 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.949 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.950 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.950 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.950 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.950 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.950 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.950 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.951 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.952 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.952 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.952 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.952 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.952 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.952 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.952 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.953 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.953 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.953 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.953 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.953 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.953 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.954 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.954 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.954 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.954 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.954 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.954 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.954 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.955 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.955 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.955 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.955 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.955 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.955 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.955 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.956 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.956 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.956 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.956 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.956 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.956 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.957 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.957 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.957 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.957 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.957 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.957 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.957 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.958 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.958 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.958 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.958 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.958 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.958 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.958 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.959 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.959 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.959 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.959 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.959 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.959 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.959 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.960 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.960 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.960 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.960 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.960 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.960 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.960 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.961 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.961 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.961 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.961 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.961 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.961 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.961 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.962 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.962 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.962 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.962 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.962 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.962 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.962 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.963 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.963 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.963 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.963 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.963 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.963 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.964 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.964 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.964 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.964 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.964 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.964 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.964 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.965 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.965 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.965 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.965 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.965 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.965 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.965 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.966 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.967 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.967 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.967 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.967 255470 DEBUG oslo_service.service [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.968 255470 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.988 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.989 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.989 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 07:38:21 compute-0 nova_compute[255466]: 2025-11-29 07:38:21.989 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 07:38:22 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 07:38:22 compute-0 sudo[256282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfxsltzgiunqjzescxlrbcoostupisid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401901.7123945-1557-143727128596427/AnsiballZ_systemd.py'
Nov 29 07:38:22 compute-0 sudo[256282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:22 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 07:38:22 compute-0 nova_compute[255466]: 2025-11-29 07:38:22.135 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f10a54180a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 07:38:22 compute-0 nova_compute[255466]: 2025-11-29 07:38:22.139 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f10a54180a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 07:38:22 compute-0 nova_compute[255466]: 2025-11-29 07:38:22.140 255470 INFO nova.virt.libvirt.driver [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Connection event '1' reason 'None'
Nov 29 07:38:22 compute-0 nova_compute[255466]: 2025-11-29 07:38:22.160 255470 WARNING nova.virt.libvirt.driver [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 07:38:22 compute-0 nova_compute[255466]: 2025-11-29 07:38:22.161 255470 DEBUG nova.virt.libvirt.volume.mount [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 07:38:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:22 compute-0 python3.9[256299]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 07:38:22 compute-0 sshd-session[256011]: Invalid user alex from 143.14.121.41 port 48308
Nov 29 07:38:22 compute-0 systemd[1]: Stopping nova_compute container...
Nov 29 07:38:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:22 compute-0 sshd-session[256011]: Connection closed by invalid user alex 143.14.121.41 port 48308 [preauth]
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.353 255470 INFO nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]: 
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <host>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <uuid>a4431209-b14d-4d8f-894a-1aed0bd2dae7</uuid>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <arch>x86_64</arch>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model>EPYC-Rome-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <vendor>AMD</vendor>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <microcode version='16777317'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <signature family='23' model='49' stepping='0'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='x2apic'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='tsc-deadline'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='osxsave'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='hypervisor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='tsc_adjust'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='spec-ctrl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='stibp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='arch-capabilities'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='cmp_legacy'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='topoext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='virt-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='lbrv'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='tsc-scale'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='vmcb-clean'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='pause-filter'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='pfthreshold'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='svme-addr-chk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='rdctl-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='mds-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature name='pschange-mc-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <pages unit='KiB' size='4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <pages unit='KiB' size='2048'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <pages unit='KiB' size='1048576'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <power_management>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <suspend_mem/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </power_management>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <iommu support='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <migration_features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <live/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <uri_transports>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <uri_transport>tcp</uri_transport>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <uri_transport>rdma</uri_transport>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </uri_transports>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </migration_features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <topology>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <cells num='1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <cell id='0'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           <memory unit='KiB'>7864328</memory>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           <pages unit='KiB' size='4'>1966082</pages>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           <distances>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <sibling id='0' value='10'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           </distances>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           <cpus num='8'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:           </cpus>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         </cell>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </cells>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </topology>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <cache>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </cache>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <secmodel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model>selinux</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <doi>0</doi>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </secmodel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <secmodel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model>dac</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <doi>0</doi>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </secmodel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </host>
Nov 29 07:38:23 compute-0 nova_compute[255466]: 
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <guest>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <os_type>hvm</os_type>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <arch name='i686'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <wordsize>32</wordsize>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <domain type='qemu'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <domain type='kvm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </arch>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <pae/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <nonpae/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <acpi default='on' toggle='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <apic default='on' toggle='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <cpuselection/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <deviceboot/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <externalSnapshot/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </guest>
Nov 29 07:38:23 compute-0 nova_compute[255466]: 
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <guest>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <os_type>hvm</os_type>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <arch name='x86_64'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <wordsize>64</wordsize>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <domain type='qemu'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <domain type='kvm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </arch>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <acpi default='on' toggle='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <apic default='on' toggle='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <cpuselection/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <deviceboot/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <externalSnapshot/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </guest>
Nov 29 07:38:23 compute-0 nova_compute[255466]: 
Nov 29 07:38:23 compute-0 nova_compute[255466]: </capabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]: 
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.365 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.393 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 07:38:23 compute-0 nova_compute[255466]: <domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <domain>kvm</domain>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <arch>i686</arch>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <vcpu max='240'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <iothreads supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <os supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='firmware'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <loader supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>rom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pflash</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='readonly'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>yes</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='secure'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </loader>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </os>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='maximumMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <vendor>AMD</vendor>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='succor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='custom' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-128'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-256'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-512'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <memoryBacking supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='sourceType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>anonymous</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>memfd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </memoryBacking>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <disk supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='diskDevice'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>disk</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cdrom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>floppy</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>lun</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ide</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>fdc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>sata</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </disk>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <graphics supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vnc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egl-headless</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </graphics>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <video supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='modelType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vga</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cirrus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>none</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>bochs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ramfb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </video>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hostdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='mode'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>subsystem</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='startupPolicy'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>mandatory</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>requisite</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>optional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='subsysType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pci</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='capsType'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='pciBackend'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hostdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <rng supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>random</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </rng>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <filesystem supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='driverType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>path</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>handle</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtiofs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </filesystem>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <tpm supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-tis</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-crb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emulator</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>external</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendVersion'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>2.0</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </tpm>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <redirdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </redirdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <channel supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </channel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <crypto supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </crypto>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <interface supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>passt</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </interface>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <panic supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>isa</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>hyperv</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </panic>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <console supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>null</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dev</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pipe</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stdio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>udp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tcp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu-vdagent</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </console>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <gic supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <genid supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backup supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <async-teardown supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <ps2 supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sev supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sgx supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hyperv supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='features'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>relaxed</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vapic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>spinlocks</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vpindex</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>runtime</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>synic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stimer</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reset</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vendor_id</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>frequencies</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reenlightenment</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tlbflush</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ipi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>avic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emsr_bitmap</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>xmm_input</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hyperv>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <launchSecurity supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='sectype'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tdx</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </launchSecurity>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </features>
Nov 29 07:38:23 compute-0 nova_compute[255466]: </domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.402 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 07:38:23 compute-0 nova_compute[255466]: <domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <domain>kvm</domain>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <arch>i686</arch>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <vcpu max='4096'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <iothreads supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <os supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='firmware'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <loader supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>rom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pflash</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='readonly'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>yes</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='secure'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </loader>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </os>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='maximumMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <vendor>AMD</vendor>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='succor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='custom' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-128'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-256'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-512'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <memoryBacking supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='sourceType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>anonymous</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>memfd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </memoryBacking>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <disk supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='diskDevice'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>disk</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cdrom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>floppy</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>lun</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>fdc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>sata</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </disk>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <graphics supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vnc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egl-headless</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </graphics>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <video supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='modelType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vga</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cirrus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>none</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>bochs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ramfb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </video>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hostdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='mode'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>subsystem</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='startupPolicy'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>mandatory</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>requisite</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>optional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='subsysType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pci</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='capsType'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='pciBackend'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hostdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <rng supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>random</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </rng>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <filesystem supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='driverType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>path</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>handle</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtiofs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </filesystem>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <tpm supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-tis</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-crb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emulator</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>external</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendVersion'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>2.0</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </tpm>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <redirdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </redirdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <channel supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </channel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <crypto supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </crypto>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <interface supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>passt</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </interface>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <panic supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>isa</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>hyperv</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </panic>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <console supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>null</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dev</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pipe</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stdio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>udp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tcp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu-vdagent</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </console>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <gic supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <genid supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backup supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <async-teardown supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <ps2 supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sev supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sgx supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hyperv supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='features'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>relaxed</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vapic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>spinlocks</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vpindex</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>runtime</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>synic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stimer</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reset</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vendor_id</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>frequencies</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reenlightenment</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tlbflush</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ipi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>avic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emsr_bitmap</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>xmm_input</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hyperv>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <launchSecurity supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='sectype'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tdx</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </launchSecurity>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </features>
Nov 29 07:38:23 compute-0 nova_compute[255466]: </domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.457 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.463 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 07:38:23 compute-0 nova_compute[255466]: <domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <domain>kvm</domain>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <arch>x86_64</arch>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <vcpu max='240'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <iothreads supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <os supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='firmware'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <loader supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>rom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pflash</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='readonly'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>yes</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='secure'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </loader>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </os>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='maximumMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <vendor>AMD</vendor>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='succor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='custom' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-128'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-256'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-512'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <memoryBacking supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='sourceType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>anonymous</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>memfd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </memoryBacking>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <disk supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='diskDevice'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>disk</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cdrom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>floppy</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>lun</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ide</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>fdc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>sata</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </disk>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <graphics supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vnc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egl-headless</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </graphics>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <video supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='modelType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vga</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cirrus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>none</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>bochs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ramfb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </video>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hostdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='mode'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>subsystem</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='startupPolicy'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>mandatory</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>requisite</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>optional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='subsysType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pci</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='capsType'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='pciBackend'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hostdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <rng supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>random</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </rng>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <filesystem supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='driverType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>path</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>handle</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtiofs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </filesystem>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <tpm supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-tis</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-crb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emulator</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>external</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendVersion'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>2.0</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </tpm>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <redirdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </redirdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <channel supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </channel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <crypto supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </crypto>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <interface supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>passt</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </interface>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <panic supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>isa</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>hyperv</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </panic>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <console supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>null</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dev</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pipe</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stdio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>udp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tcp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu-vdagent</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </console>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <gic supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <genid supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backup supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <async-teardown supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <ps2 supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sev supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sgx supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hyperv supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='features'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>relaxed</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vapic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>spinlocks</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vpindex</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>runtime</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>synic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stimer</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reset</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vendor_id</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>frequencies</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reenlightenment</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tlbflush</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ipi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>avic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emsr_bitmap</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>xmm_input</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hyperv>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <launchSecurity supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='sectype'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tdx</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </launchSecurity>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </features>
Nov 29 07:38:23 compute-0 nova_compute[255466]: </domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.535 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 07:38:23 compute-0 nova_compute[255466]: <domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <domain>kvm</domain>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <arch>x86_64</arch>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <vcpu max='4096'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <iothreads supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <os supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='firmware'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>efi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <loader supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>rom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pflash</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='readonly'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>yes</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='secure'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>yes</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>no</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </loader>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </os>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='maximumMigratable'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>on</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>off</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <vendor>AMD</vendor>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='succor'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <mode name='custom' supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Denverton-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='auto-ibrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amd-psfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='stibp-always-on'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='EPYC-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-128'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-256'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx10-512'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='prefetchiti'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Haswell-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512er'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512pf'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fma4'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tbm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xop'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='amx-tile'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-bf16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-fp16'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bitalg'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrc'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fzrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='la57'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='taa-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xfd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ifma'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cmpccxadd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fbsdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='fsrs'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ibrs-all'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mcdt-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pbrsb-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='psdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='serialize'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vaes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='hle'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='rtm'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512bw'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512cd'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512dq'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512f'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='avx512vl'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='invpcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pcid'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='pku'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='mpx'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='core-capability'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='split-lock-detect'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='cldemote'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='erms'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='gfni'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdir64b'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='movdiri'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='xsaves'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='athlon-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='core2duo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='coreduo-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='n270-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='ss'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <blockers model='phenom-v1'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnow'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <feature name='3dnowext'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </blockers>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </mode>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </cpu>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <memoryBacking supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <enum name='sourceType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>anonymous</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <value>memfd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </memoryBacking>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <disk supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='diskDevice'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>disk</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cdrom</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>floppy</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>lun</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>fdc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>sata</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </disk>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <graphics supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vnc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egl-headless</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </graphics>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <video supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='modelType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vga</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>cirrus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>none</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>bochs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ramfb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </video>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hostdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='mode'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>subsystem</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='startupPolicy'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>mandatory</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>requisite</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>optional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='subsysType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pci</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>scsi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='capsType'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='pciBackend'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hostdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <rng supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtio-non-transitional</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>random</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>egd</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </rng>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <filesystem supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='driverType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>path</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>handle</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>virtiofs</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </filesystem>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <tpm supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-tis</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tpm-crb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emulator</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>external</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendVersion'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>2.0</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </tpm>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <redirdev supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='bus'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>usb</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </redirdev>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <channel supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </channel>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <crypto supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendModel'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>builtin</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </crypto>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <interface supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='backendType'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>default</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>passt</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </interface>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <panic supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='model'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>isa</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>hyperv</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </panic>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <console supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='type'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>null</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vc</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pty</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dev</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>file</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>pipe</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stdio</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>udp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tcp</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>unix</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>qemu-vdagent</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>dbus</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </console>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </devices>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   <features>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <gic supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <genid supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <backup supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <async-teardown supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <ps2 supported='yes'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sev supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <sgx supported='no'/>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <hyperv supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='features'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>relaxed</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vapic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>spinlocks</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vpindex</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>runtime</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>synic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>stimer</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reset</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>vendor_id</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>frequencies</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>reenlightenment</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tlbflush</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>ipi</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>avic</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>emsr_bitmap</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>xmm_input</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </defaults>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </hyperv>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     <launchSecurity supported='yes'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       <enum name='sectype'>
Nov 29 07:38:23 compute-0 nova_compute[255466]:         <value>tdx</value>
Nov 29 07:38:23 compute-0 nova_compute[255466]:       </enum>
Nov 29 07:38:23 compute-0 nova_compute[255466]:     </launchSecurity>
Nov 29 07:38:23 compute-0 nova_compute[255466]:   </features>
Nov 29 07:38:23 compute-0 nova_compute[255466]: </domainCapabilities>
Nov 29 07:38:23 compute-0 nova_compute[255466]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
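
The domainCapabilities XML dumped above is what nova's _get_domain_capabilities fetches from libvirt: each <model usable='no'> entry is followed by a <blockers> element naming the CPU features this host lacks for that model. A minimal sketch of summarizing such a dump with Python's standard library, assuming the XML has been saved to domcaps.xml (a hypothetical path; live code would call libvirt-python's conn.getDomainCapabilities() instead):

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domainCapabilities document shown above.
    root = ET.parse('domcaps.xml').getroot()

    # Named CPU models live under cpu/mode[@name='custom'].
    for model in root.iterfind(".//mode[@name='custom']/model"):
        name = model.text
        if model.get('usable') == 'yes':
            print(f"{name}: usable")
        else:
            # The sibling <blockers model='...'> lists the missing features.
            blockers = root.find(f".//blockers[@model='{name}']")
            missing = [f.get('name') for f in blockers] if blockers is not None else []
            print(f"{name}: blocked by {', '.join(missing)}")
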
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.602 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.603 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.603 255470 INFO nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Secure Boot support detected
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.605 255470 INFO nova.virt.libvirt.driver [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.618 255470 DEBUG nova.virt.libvirt.driver [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.653 255470 INFO nova.virt.node [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Determined node identity ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from /var/lib/nova/compute_id
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.676 255470 WARNING nova.compute.manager [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Compute nodes ['ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.728 255470 INFO nova.compute.manager [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.774 255470 WARNING nova.compute.manager [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.774 255470 DEBUG oslo_concurrency.lockutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.775 255470 DEBUG oslo_concurrency.lockutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.775 255470 DEBUG oslo_concurrency.lockutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
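[editor's note] The acquire/release pair logged above is oslo.concurrency's named-lock pattern. A hedged sketch of that pattern follows; the decorated function body is illustrative, not the resource tracker's real code:

```python
# The "compute_resources" lock seen in the log, expressed with the
# oslo.concurrency synchronized decorator.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # The lock is held only while the in-memory compute-node cache is
    # mutated, then released on return -- "held 0.000s" in the log above
    # because the cache was empty on first start.
    pass
```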
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.775 255470 DEBUG nova.compute.resource_tracker [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:38:23 compute-0 nova_compute[255466]: 2025-11-29 07:38:23.776 255470 DEBUG oslo_concurrency.processutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:23 compute-0 sudo[256346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:23 compute-0 sudo[256346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:23 compute-0 sudo[256346]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:24 compute-0 sudo[256390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:38:24 compute-0 sudo[256390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:24 compute-0 sudo[256390]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:24 compute-0 sudo[256415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:24 compute-0 sudo[256415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:24 compute-0 sudo[256415]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:24 compute-0 sudo[256440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:38:24 compute-0 sudo[256440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:38:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146135965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.216 255470 DEBUG oslo_concurrency.processutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
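[editor's note] The resource audit shells out to the ceph CLI and parses JSON, as the "Running cmd" / "returned: 0" pair above shows. A self-contained sketch under that assumption (the helper name is hypothetical):

```python
# Reproduce the logged probe: ceph df --format=json --id openstack
# --conf /etc/ceph/ceph.conf, returning the parsed JSON document.
import json
import subprocess

def ceph_df(client: str = "openstack", conf: str = "/etc/ceph/ceph.conf") -> dict:
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        timeout=30,
    )
    return json.loads(out)
```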
Nov 29 07:38:24 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 07:38:24 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 07:38:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.605 255470 WARNING nova.virt.libvirt.driver [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.608 255470 DEBUG nova.compute.resource_tracker [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.608 255470 DEBUG oslo_concurrency.lockutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.609 255470 DEBUG oslo_concurrency.lockutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:24 compute-0 sshd-session[256339]: Invalid user sammy from 143.14.121.41 port 52384
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.632 255470 WARNING nova.compute.resource_tracker [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] No compute node record for compute-0.ctlplane.example.com:ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f could not be found.
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.661 255470 INFO nova.compute.resource_tracker [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.758 255470 DEBUG nova.compute.resource_tracker [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:38:24 compute-0 nova_compute[255466]: 2025-11-29 07:38:24.759 255470 DEBUG nova.compute.resource_tracker [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:38:24 compute-0 sshd-session[256339]: Connection closed by invalid user sammy 143.14.121.41 port 52384 [preauth]
Nov 29 07:38:25 compute-0 nova_compute[255466]: 2025-11-29 07:38:25.776 255470 INFO nova.scheduler.client.report [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] [req-c733c88c-1a48-47ef-ac80-302d299b4444] Created resource provider record via placement API for resource provider with UUID ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f and name compute-0.ctlplane.example.com.
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.115 255470 DEBUG oslo_concurrency.processutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:38:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2215895840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.612 255470 DEBUG oslo_concurrency.processutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.621 255470 DEBUG nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.621 255470 INFO nova.virt.libvirt.host [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] kernel doesn't support AMD SEV
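[editor's note] The SEV probe above simply reads a kvm_amd module parameter from sysfs; the file contains "N" here, so support is reported missing. A minimal sketch — the exact value parsing is an assumption:

```python
# Check the sysfs parameter named in the log line. On this host the file
# contains "N", so the function returns False ("kernel doesn't support
# AMD SEV").
SEV_PARAM = "/sys/module/kvm_amd/parameters/sev"

def kernel_supports_amd_sev(path: str = SEV_PARAM) -> bool:
    try:
        with open(path) as f:
            return f.read().strip() in ("Y", "1")
    except OSError:
        return False  # kvm_amd not loaded, or not an AMD host
```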
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.623 255470 DEBUG nova.compute.provider_tree [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.623 255470 DEBUG nova.virt.libvirt.driver [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.676 255470 DEBUG nova.scheduler.client.report [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Updated inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.677 255470 DEBUG nova.compute.provider_tree [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Updating resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.677 255470 DEBUG nova.compute.provider_tree [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
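[editor's note] Placement turns the inventory above into schedulable capacity as usable = (total - reserved) * allocation_ratio. A worked example using the exact values from the logged inventory (the loop itself is illustrative):

```python
# Values copied from the inventory reported to placement above.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, usable)
# MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 53.1
```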
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.774 255470 DEBUG nova.compute.provider_tree [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Updating resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.798 255470 DEBUG nova.compute.resource_tracker [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.798 255470 DEBUG oslo_concurrency.lockutils [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.799 255470 DEBUG nova.service [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.892 255470 DEBUG nova.service [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 29 07:38:26 compute-0 nova_compute[255466]: 2025-11-29 07:38:26.893 255470 DEBUG nova.servicegroup.drivers.db [None req-e6f317fd-007b-4645-be3c-aa7212868cbd - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 29 07:38:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:27 compute-0 sshd-session[256502]: Connection closed by authenticating user root 143.14.121.41 port 52394 [preauth]
Nov 29 07:38:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:30 compute-0 ceph-mon[75050]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:30 compute-0 sshd-session[256526]: Connection closed by authenticating user root 143.14.121.41 port 52410 [preauth]
Nov 29 07:38:31 compute-0 nova_compute[255466]: 2025-11-29 07:38:31.023 255470 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Nov 29 07:38:31 compute-0 nova_compute[255466]: 2025-11-29 07:38:31.030 255470 DEBUG oslo_concurrency.lockutils [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:38:31 compute-0 nova_compute[255466]: 2025-11-29 07:38:31.030 255470 DEBUG oslo_concurrency.lockutils [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:38:31 compute-0 nova_compute[255466]: 2025-11-29 07:38:31.031 255470 DEBUG oslo_concurrency.lockutils [None req-8173ce0e-2650-4dda-936a-2ab9314dbdf0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:38:31 compute-0 sudo[256440]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:31 compute-0 virtqemud[256259]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 07:38:31 compute-0 virtqemud[256259]: hostname: compute-0
Nov 29 07:38:31 compute-0 virtqemud[256259]: End of file while reading data: Input/output error
Nov 29 07:38:31 compute-0 systemd[1]: libpod-c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809.scope: Deactivated successfully.
Nov 29 07:38:31 compute-0 podman[256318]: 2025-11-29 07:38:31.473400984 +0000 UTC m=+9.049282962 container died c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:38:31 compute-0 systemd[1]: libpod-c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809.scope: Consumed 4.517s CPU time.
Nov 29 07:38:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:38:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:38:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:38:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:38:31 compute-0 ceph-mon[75050]: pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3146135965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:31 compute-0 ceph-mon[75050]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:31 compute-0 ceph-mon[75050]: pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2215895840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:31 compute-0 ceph-mon[75050]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:31 compute-0 ceph-mon[75050]: pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:31 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
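[editor's note] The mon_command dispatch lines above show the JSON command format the monitor accepts. A hedged sketch of issuing one such command through the librados Python binding (package `rados`); the client name and its `df` capability are assumptions:

```python
# Send the same {"prefix": "df", "format": "json"} command seen in the
# audit log, via librados instead of the ceph CLI.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "df", "format": "json"}), b""
)
stats = json.loads(outbuf)["stats"]  # cluster-wide totals
print(ret, stats["total_bytes"])
cluster.shutdown()
```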
Nov 29 07:38:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:38:32 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4dd59034-161d-4bac-9b33-69367efd83c5 does not exist
Nov 29 07:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809-userdata-shm.mount: Deactivated successfully.
Nov 29 07:38:32 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev add6bd0f-2165-4ebe-a3b8-c5a45549fb98 does not exist
Nov 29 07:38:32 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 6ff560c0-868e-4fa8-817d-3e1889e06d75 does not exist
Nov 29 07:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67-merged.mount: Deactivated successfully.
Nov 29 07:38:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:38:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:38:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:38:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:38:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:38:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:32 compute-0 sudo[256565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:32 compute-0 sudo[256565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:32 compute-0 sudo[256565]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:32 compute-0 sudo[256590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:38:32 compute-0 sudo[256590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:32 compute-0 sudo[256590]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:32 compute-0 sudo[256615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:32 compute-0 sudo[256615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:32 compute-0 sudo[256615]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:32 compute-0 sudo[256640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:38:32 compute-0 sudo[256640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:38:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:38:35 compute-0 sshd-session[256549]: Connection closed by authenticating user root 143.14.121.41 port 52424 [preauth]
Nov 29 07:38:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:38:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:38:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:38:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:38:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:38:36 compute-0 ceph-mon[75050]: pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:36 compute-0 podman[256318]: 2025-11-29 07:38:36.324449767 +0000 UTC m=+13.900331735 container cleanup c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 29 07:38:36 compute-0 podman[256318]: nova_compute
Nov 29 07:38:36 compute-0 podman[256689]: nova_compute
Nov 29 07:38:36 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 07:38:36 compute-0 systemd[1]: Stopped nova_compute container.
Nov 29 07:38:36 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 07:38:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b240e9e861c4101e042d2e4e8719ec006fe754e1814911523c67492f39bb67/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:36 compute-0 podman[256702]: 2025-11-29 07:38:36.841248355 +0000 UTC m=+0.405909830 container init c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:38:36 compute-0 podman[256702]: 2025-11-29 07:38:36.853005516 +0000 UTC m=+0.417666981 container start c15969223c00c2b30a6778fcaf267330fb4765e47f851bc1febe18461e097809 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:38:36 compute-0 nova_compute[256729]: + sudo -E kolla_set_configs
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Validating config file
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying service configuration files
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /etc/ceph
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Creating directory /etc/ceph
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Writing out command to execute
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:36 compute-0 nova_compute[256729]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
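[editor's note] The kolla_set_configs output above is the COPY_ALWAYS strategy: for each entry in config.json, delete the destination, copy the source over it, and reset permissions. A simplified sketch; the field names are an assumption about the schema, not kolla's exact code:

```python
# Mirror the Deleting / Copying / Setting permission sequence logged above.
import json
import os
import shutil

def copy_always(config_path: str = "/var/lib/kolla/config_files/config.json"):
    with open(config_path) as f:
        cfg = json.load(f)
    for item in cfg.get("config_files", []):
        src, dest = item["source"], item["dest"]
        if os.path.exists(dest):
            os.remove(dest)                       # "Deleting <dest>"
        shutil.copy(src, dest)                    # "Copying <src> to <dest>"
        if "perm" in item:
            os.chmod(dest, int(item["perm"], 8))  # "Setting permission for <dest>"
```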
Nov 29 07:38:36 compute-0 nova_compute[256729]: ++ cat /run_command
Nov 29 07:38:36 compute-0 nova_compute[256729]: + CMD=nova-compute
Nov 29 07:38:36 compute-0 nova_compute[256729]: + ARGS=
Nov 29 07:38:36 compute-0 nova_compute[256729]: + sudo kolla_copy_cacerts
Nov 29 07:38:37 compute-0 nova_compute[256729]: + [[ ! -n '' ]]
Nov 29 07:38:37 compute-0 nova_compute[256729]: + . kolla_extend_start
Nov 29 07:38:37 compute-0 nova_compute[256729]: Running command: 'nova-compute'
Nov 29 07:38:37 compute-0 nova_compute[256729]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 07:38:37 compute-0 nova_compute[256729]: + umask 0022
Nov 29 07:38:37 compute-0 nova_compute[256729]: + exec nova-compute
Nov 29 07:38:37 compute-0 podman[256702]: nova_compute
Nov 29 07:38:37 compute-0 systemd[1]: Started nova_compute container.
Nov 29 07:38:37 compute-0 sudo[256282]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:37 compute-0 podman[256744]: 2025-11-29 07:38:37.119116196 +0000 UTC m=+0.034481393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:38:37 compute-0 podman[256744]: 2025-11-29 07:38:37.438277115 +0000 UTC m=+0.353642262 container create 65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:38:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:37 compute-0 ceph-mon[75050]: pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:37 compute-0 ceph-mon[75050]: pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:37 compute-0 systemd[1]: Started libpod-conmon-65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904.scope.
Nov 29 07:38:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:37 compute-0 podman[256871]: 2025-11-29 07:38:37.732245648 +0000 UTC m=+0.091142974 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:38:37 compute-0 sudo[256968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofyaixktdorpkpwrldfwztwjvapsdbos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764401917.3433132-1566-80024020519655/AnsiballZ_podman_container.py'
Nov 29 07:38:37 compute-0 sudo[256968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 07:38:37 compute-0 podman[256744]: 2025-11-29 07:38:37.997697712 +0000 UTC m=+0.913062849 container init 65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:38:37 compute-0 podman[256899]: 2025-11-29 07:38:37.99839251 +0000 UTC m=+0.299137597 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:38:38 compute-0 podman[256744]: 2025-11-29 07:38:38.008071297 +0000 UTC m=+0.923436414 container start 65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:38:38 compute-0 nervous_chatelet[256908]: 167 167
Nov 29 07:38:38 compute-0 systemd[1]: libpod-65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904.scope: Deactivated successfully.
Nov 29 07:38:38 compute-0 python3.9[256973]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 07:38:38 compute-0 podman[256744]: 2025-11-29 07:38:38.041436551 +0000 UTC m=+0.956801668 container attach 65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:38:38 compute-0 podman[256744]: 2025-11-29 07:38:38.04293271 +0000 UTC m=+0.958297867 container died 65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:38:38 compute-0 sshd-session[256676]: Invalid user postgres from 143.14.121.41 port 55580
Nov 29 07:38:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:38 compute-0 sshd-session[256676]: Connection closed by invalid user postgres 143.14.121.41 port 55580 [preauth]
Nov 29 07:38:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c568952d3fc626748a1ce068a8e4afe37815f29dac5f24a6bcf3724d4228b703-merged.mount: Deactivated successfully.
Nov 29 07:38:38 compute-0 nova_compute[256729]: 2025-11-29 07:38:38.981 256736 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:38:38 compute-0 nova_compute[256729]: 2025-11-29 07:38:38.981 256736 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:38:38 compute-0 nova_compute[256729]: 2025-11-29 07:38:38.982 256736 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 07:38:38 compute-0 nova_compute[256729]: 2025-11-29 07:38:38.982 256736 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 07:38:39 compute-0 ceph-mon[75050]: pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.110 256736 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.137 256736 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.138 256736 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
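[editor's note] The failed grep above is a capability probe, not an error: the string "node.session.scan" compiled into the iscsiadm binary indicates manual-scan support, and exit status 1 (as logged) means it is absent. A sketch of the same check:

```python
# grep -F node.session.scan /sbin/iscsiadm; returncode 0 => supported,
# 1 => not supported (the outcome in the log above).
import subprocess

def iscsiadm_supports_manual_scan(binary: str = "/sbin/iscsiadm") -> bool:
    res = subprocess.run(
        ["grep", "-F", "node.session.scan", binary],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return res.returncode == 0
```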
Nov 29 07:38:39 compute-0 podman[256744]: 2025-11-29 07:38:39.194223685 +0000 UTC m=+2.109588842 container remove 65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:38:39 compute-0 systemd[1]: libpod-conmon-65a37ebd6cd2dffea0bb09321eb66f40d434b06f7946922f0dbeaf3adc438904.scope: Deactivated successfully.
Nov 29 07:38:39 compute-0 podman[256863]: 2025-11-29 07:38:39.291939406 +0000 UTC m=+1.662112550 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 07:38:39 compute-0 systemd[1]: Started libpod-conmon-9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58.scope.
Nov 29 07:38:39 compute-0 podman[257039]: 2025-11-29 07:38:39.406249742 +0000 UTC m=+0.047296752 container create d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:38:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbb5f22fb7a94280fda5519475e8f27010513e59a3a6a32ed3d3d04ea91bafc/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbb5f22fb7a94280fda5519475e8f27010513e59a3a6a32ed3d3d04ea91bafc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbb5f22fb7a94280fda5519475e8f27010513e59a3a6a32ed3d3d04ea91bafc/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 systemd[1]: Started libpod-conmon-d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb.scope.
Nov 29 07:38:39 compute-0 podman[257021]: 2025-11-29 07:38:39.468544246 +0000 UTC m=+0.165168508 container init 9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init)
Nov 29 07:38:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:39 compute-0 podman[257021]: 2025-11-29 07:38:39.474687393 +0000 UTC m=+0.171311645 container start 9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfd059a7f02f9eb61033a48f8ea4852aeb7436ecfd95359e3e35c69c1f4c5bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfd059a7f02f9eb61033a48f8ea4852aeb7436ecfd95359e3e35c69c1f4c5bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfd059a7f02f9eb61033a48f8ea4852aeb7436ecfd95359e3e35c69c1f4c5bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfd059a7f02f9eb61033a48f8ea4852aeb7436ecfd95359e3e35c69c1f4c5bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfd059a7f02f9eb61033a48f8ea4852aeb7436ecfd95359e3e35c69c1f4c5bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:39 compute-0 podman[257039]: 2025-11-29 07:38:39.3890144 +0000 UTC m=+0.030061420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:38:39 compute-0 python3.9[256973]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 07:38:39 compute-0 nova_compute_init[257066]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 07:38:39 compute-0 systemd[1]: libpod-9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58.scope: Deactivated successfully.
Nov 29 07:38:39 compute-0 podman[257039]: 2025-11-29 07:38:39.553760937 +0000 UTC m=+0.194807977 container init d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:38:39 compute-0 podman[257039]: 2025-11-29 07:38:39.564170423 +0000 UTC m=+0.205217433 container start d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:38:39 compute-0 podman[257067]: 2025-11-29 07:38:39.572758423 +0000 UTC m=+0.052082204 container died 9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute_init)
Nov 29 07:38:39 compute-0 podman[257039]: 2025-11-29 07:38:39.615782864 +0000 UTC m=+0.256829874 container attach d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.678 256736 INFO nova.virt.driver [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 07:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58-userdata-shm.mount: Deactivated successfully.
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.781 256736 INFO nova.compute.provider_config [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.794 256736 DEBUG oslo_concurrency.lockutils [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.795 256736 DEBUG oslo_concurrency.lockutils [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.795 256736 DEBUG oslo_concurrency.lockutils [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.796 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.796 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.796 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.796 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.796 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.797 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.797 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fbb5f22fb7a94280fda5519475e8f27010513e59a3a6a32ed3d3d04ea91bafc-merged.mount: Deactivated successfully.
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.797 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.797 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.797 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.798 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.798 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.798 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.798 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.799 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.799 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.799 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.799 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.799 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.800 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.800 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.800 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.800 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.800 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.800 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.801 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.801 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.801 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.801 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.801 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.801 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.802 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.802 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.802 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.802 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.802 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.802 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.803 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.803 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.803 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.803 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.803 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.803 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.804 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.804 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.804 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.804 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.804 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.804 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.805 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.805 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.805 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.805 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.805 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.805 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.806 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.806 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.806 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.806 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.806 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.806 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.807 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.807 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.807 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.807 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.807 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.807 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.807 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.808 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.808 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.808 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.808 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.808 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.808 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.808 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.809 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.809 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.809 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.809 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.809 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.809 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.809 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.810 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.810 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.810 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.810 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.810 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.810 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.811 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.811 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.811 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.811 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.811 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.811 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.811 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.812 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.812 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.812 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.812 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.812 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.812 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.812 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.813 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.813 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.813 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.813 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.813 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.813 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.813 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.814 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.814 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.814 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.814 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.814 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.814 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.814 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.815 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.815 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.815 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.815 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.815 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.815 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.815 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.816 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.817 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.817 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.817 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.817 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.817 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.817 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.817 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.818 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.818 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.818 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.818 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.818 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.818 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.818 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.819 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.819 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.819 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.819 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.819 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.819 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.819 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.820 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.820 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.820 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.820 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.820 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.820 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.821 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.821 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.821 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.821 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.821 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.821 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.821 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.822 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.822 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.822 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.822 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.822 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.822 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.822 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.823 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.823 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.823 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.823 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.823 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.823 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.823 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.824 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.824 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.824 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.824 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.824 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 podman[257076]: 2025-11-29 07:38:39.824157817 +0000 UTC m=+0.281132996 container cleanup 9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute_init)
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.824 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.825 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.825 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.825 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.825 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.825 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.826 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.826 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.826 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.826 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.826 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.826 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.827 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.827 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.827 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.827 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.827 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.827 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.827 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.828 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.828 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.828 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.828 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.828 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.828 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.828 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.829 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.829 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.829 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.829 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.829 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.829 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.830 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.830 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.830 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.830 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.831 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 systemd[1]: libpod-conmon-9d12469c89efdfdc666ec174c176d4bb33ba378d5869ad0c5c6d9b408e5fac58.scope: Deactivated successfully.
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.831 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.831 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.831 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.832 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.832 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.832 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.832 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.832 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.832 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.833 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.833 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.833 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.833 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.833 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.833 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.834 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.834 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.834 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.834 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.834 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.834 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.834 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.835 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.835 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.835 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.835 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.835 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.835 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.835 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.836 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.836 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.836 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.836 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.836 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.836 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.836 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.837 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.837 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.837 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.837 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.837 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.837 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.837 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.838 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.838 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.838 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.838 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.838 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.838 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.838 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.839 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.839 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.839 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.839 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.839 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.839 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.839 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.840 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.840 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.840 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.840 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.840 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.840 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.841 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.841 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.841 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.841 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.841 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.841 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.842 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.843 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.843 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.843 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.843 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.843 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.843 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.844 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.844 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.844 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.844 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.844 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.844 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.844 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.845 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.845 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.845 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.845 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.845 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.845 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.846 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.847 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.847 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.847 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.847 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.847 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.847 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.848 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.848 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.848 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.848 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.848 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.848 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.848 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.849 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.849 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.849 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.849 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.849 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.849 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.849 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.850 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.850 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.850 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.850 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.850 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.850 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.850 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.851 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.851 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.851 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.851 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.851 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.852 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 sudo[256968]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.852 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.852 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.852 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.852 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.852 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.852 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.853 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.853 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.853 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.853 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.853 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.853 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.853 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.854 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.854 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.854 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.854 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.854 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.854 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.854 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.855 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.855 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.855 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.855 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.855 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.855 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.855 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.856 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.856 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.856 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.856 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.856 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.857 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.857 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.857 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.857 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.857 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.857 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.857 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.858 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.858 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.858 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.858 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.858 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.858 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.858 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.859 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.859 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.859 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.859 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.859 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.859 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.859 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.860 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.860 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.860 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.860 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.860 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.860 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.861 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.861 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.861 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.861 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.861 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.861 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.861 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.862 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.862 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.862 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.862 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.862 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.862 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.862 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.863 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.863 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.863 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.863 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.863 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.863 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.863 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.864 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.864 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.864 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.864 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.864 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.864 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.864 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.865 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.865 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.865 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.865 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.865 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.865 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.865 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.866 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.866 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.866 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.866 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.866 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.866 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.866 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.867 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.867 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.867 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.867 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.867 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.867 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.867 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.868 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.868 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.868 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.868 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.868 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.868 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.868 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.869 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.869 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.869 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.869 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.869 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.869 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.870 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.870 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.870 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.870 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.870 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.870 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.870 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.871 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.871 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.871 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.871 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.871 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.871 256736 WARNING oslo_config.cfg [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 07:38:39 compute-0 nova_compute[256729]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 07:38:39 compute-0 nova_compute[256729]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 07:38:39 compute-0 nova_compute[256729]: and ``live_migration_inbound_addr`` respectively.
Nov 29 07:38:39 compute-0 nova_compute[256729]: ).  Its value may be silently ignored in the future.
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.872 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.872 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.872 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.872 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.872 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.873 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.873 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.873 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.873 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.873 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.873 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.874 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.874 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.874 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.874 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.874 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.874 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.874 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.875 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rbd_secret_uuid        = 14ff1f30-5059-58f1-9a23-69871bb275a1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.875 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.875 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.875 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.875 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.875 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.876 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.876 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.876 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.876 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.876 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.876 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.877 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.877 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.877 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.877 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.877 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.877 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.878 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.878 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.878 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.878 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.879 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.879 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.879 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.879 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.879 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.880 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.880 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.880 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.880 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.880 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.880 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.881 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.881 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.881 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.881 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.881 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.881 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.882 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.882 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.882 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.882 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.882 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.882 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.883 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.883 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.883 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.883 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.883 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.883 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.884 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.885 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.885 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.885 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.885 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.885 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.885 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.885 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.886 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.886 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.886 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.886 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.886 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.886 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.886 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.887 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.887 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.887 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.887 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.887 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.887 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.888 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.888 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.888 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.888 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.888 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.888 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.889 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.889 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.889 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.889 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.889 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.889 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.889 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.890 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.890 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.890 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.890 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.890 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.890 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.891 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.891 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.891 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.891 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.891 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.891 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.891 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.892 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.892 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.892 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.892 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.892 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.892 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.892 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.893 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.893 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.893 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.893 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.893 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.893 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.894 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.894 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.894 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.894 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.894 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.895 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.895 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.895 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.895 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.895 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.895 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.895 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.896 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.896 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.896 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.896 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.896 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.896 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.897 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.897 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.897 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.897 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.897 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.897 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.898 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.899 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.899 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.899 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.899 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.899 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.899 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.899 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.900 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.900 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.900 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.900 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.900 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.900 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.901 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.901 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.901 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.901 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.901 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.901 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.901 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.902 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.903 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.903 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.903 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.903 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.903 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.903 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.904 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.904 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.904 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.904 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.904 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.904 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.905 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.906 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.906 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.906 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.906 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.906 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.906 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.906 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.907 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.907 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.907 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.907 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.907 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.907 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.907 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.908 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.909 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.909 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.909 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.909 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.909 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.909 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.909 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.910 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.910 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.910 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.910 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.910 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.910 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.911 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.911 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.911 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.911 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.911 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.911 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.911 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.912 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.912 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.912 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.912 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.912 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.912 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.912 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.913 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.913 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.913 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.913 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.913 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.913 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.914 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.914 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.914 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.914 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.914 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.914 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.915 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.915 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.915 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.915 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.915 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.915 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.915 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.916 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.917 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.917 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.917 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.917 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.917 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.917 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.917 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.918 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.918 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.918 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.918 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.918 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.918 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.918 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.919 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.919 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.919 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.919 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.919 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.919 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.919 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.920 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.920 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.920 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.920 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.920 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.920 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.920 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.921 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.921 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.921 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.921 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.921 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.921 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.921 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.922 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.922 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.922 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.922 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.922 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.922 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.922 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.923 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.923 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.923 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.923 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.923 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.923 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.923 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.924 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.924 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.924 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.924 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.924 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.924 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.924 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.925 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.926 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.926 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.926 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.926 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.926 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.926 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.926 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.927 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.928 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.928 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.928 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.928 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.928 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.928 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.928 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.929 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.929 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.929 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.929 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.929 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.929 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.929 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.930 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.930 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.930 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.930 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.930 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.930 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.930 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.931 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.931 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.931 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.931 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.931 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.931 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.931 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.932 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.932 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.932 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.932 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.932 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.932 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.932 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.933 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.934 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.935 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.935 256736 DEBUG oslo_service.service [None req-9c634df4-ac30-44d9-a0f9-675a1dd3d641 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.936 256736 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.949 256736 INFO nova.virt.node [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Determined node identity ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from /var/lib/nova/compute_id
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.950 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.950 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.951 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.951 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.964 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f9528f3eac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.965 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f9528f3eac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.966 256736 INFO nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Connection event '1' reason 'None'
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.971 256736 INFO nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 07:38:39 compute-0 nova_compute[256729]: 
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <host>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <uuid>a4431209-b14d-4d8f-894a-1aed0bd2dae7</uuid>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <cpu>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <arch>x86_64</arch>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model>EPYC-Rome-v4</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <vendor>AMD</vendor>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <microcode version='16777317'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <signature family='23' model='49' stepping='0'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='x2apic'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='tsc-deadline'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='osxsave'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='hypervisor'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='tsc_adjust'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='spec-ctrl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='stibp'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='arch-capabilities'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='ssbd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='cmp_legacy'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='topoext'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='virt-ssbd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='lbrv'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='tsc-scale'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='vmcb-clean'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='pause-filter'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='pfthreshold'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='svme-addr-chk'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='rdctl-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='mds-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature name='pschange-mc-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <pages unit='KiB' size='4'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <pages unit='KiB' size='2048'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <pages unit='KiB' size='1048576'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </cpu>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <power_management>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <suspend_mem/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </power_management>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <iommu support='no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <migration_features>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <live/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <uri_transports>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <uri_transport>tcp</uri_transport>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <uri_transport>rdma</uri_transport>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </uri_transports>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </migration_features>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <topology>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <cells num='1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <cell id='0'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           <memory unit='KiB'>7864328</memory>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           <pages unit='KiB' size='4'>1966082</pages>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           <distances>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <sibling id='0' value='10'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           </distances>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           <cpus num='8'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:           </cpus>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         </cell>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </cells>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </topology>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <cache>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </cache>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <secmodel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model>selinux</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <doi>0</doi>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </secmodel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <secmodel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model>dac</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <doi>0</doi>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </secmodel>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   </host>
Nov 29 07:38:39 compute-0 nova_compute[256729]: 
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <guest>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <os_type>hvm</os_type>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <arch name='i686'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <wordsize>32</wordsize>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <domain type='qemu'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <domain type='kvm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </arch>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <features>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <pae/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <nonpae/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <acpi default='on' toggle='yes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <apic default='on' toggle='no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <cpuselection/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <deviceboot/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <externalSnapshot/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </features>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   </guest>
Nov 29 07:38:39 compute-0 nova_compute[256729]: 
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <guest>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <os_type>hvm</os_type>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <arch name='x86_64'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <wordsize>64</wordsize>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <domain type='qemu'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <domain type='kvm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </arch>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <features>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <acpi default='on' toggle='yes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <apic default='on' toggle='no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <cpuselection/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <deviceboot/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <disksnapshot default='on' toggle='no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <externalSnapshot/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </features>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   </guest>
Nov 29 07:38:39 compute-0 nova_compute[256729]: 
Nov 29 07:38:39 compute-0 nova_compute[256729]: </capabilities>
Nov 29 07:38:39 compute-0 nova_compute[256729]: 
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.978 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.980 256736 DEBUG nova.virt.libvirt.volume.mount [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 07:38:39 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.982 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 07:38:39 compute-0 nova_compute[256729]: <domainCapabilities>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <domain>kvm</domain>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <arch>i686</arch>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <vcpu max='4096'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <iothreads supported='yes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <os supported='yes'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <enum name='firmware'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <loader supported='yes'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>rom</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>pflash</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <enum name='readonly'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>yes</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <enum name='secure'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </loader>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   </os>
Nov 29 07:38:39 compute-0 nova_compute[256729]:   <cpu>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <enum name='maximumMigratable'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <vendor>AMD</vendor>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='succor'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:39 compute-0 nova_compute[256729]:     <mode name='custom' supported='yes'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cooperlake'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Denverton'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Denverton-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Denverton-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Denverton-v3'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-v3'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='EPYC-v4'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx10'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx10-128'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx10-256'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx10-512'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell-v3'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Haswell-v4'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='IvyBridge'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:39 compute-0 nova_compute[256729]:       <blockers model='KnightsMill'>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:39 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <memoryBacking supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <enum name='sourceType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>anonymous</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>memfd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </memoryBacking>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <disk supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='diskDevice'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>disk</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cdrom</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>floppy</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>lun</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>fdc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>sata</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <graphics supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vnc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egl-headless</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </graphics>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <video supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='modelType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vga</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cirrus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>none</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>bochs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ramfb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </video>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hostdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='mode'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>subsystem</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='startupPolicy'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>mandatory</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>requisite</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>optional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='subsysType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pci</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='capsType'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='pciBackend'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hostdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <rng supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>random</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <filesystem supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='driverType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>path</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>handle</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtiofs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </filesystem>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <tpm supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-tis</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-crb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emulator</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>external</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendVersion'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>2.0</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </tpm>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <redirdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </redirdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <channel supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </channel>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <crypto supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </crypto>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <interface supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>passt</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <panic supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>isa</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>hyperv</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </panic>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <console supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>null</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dev</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pipe</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stdio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>udp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tcp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu-vdagent</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </console>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <features>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <gic supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <genid supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backup supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <async-teardown supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <ps2 supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sev supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sgx supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hyperv supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='features'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>relaxed</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vapic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>spinlocks</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vpindex</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>runtime</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>synic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stimer</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reset</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vendor_id</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>frequencies</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reenlightenment</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tlbflush</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ipi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>avic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emsr_bitmap</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>xmm_input</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hyperv>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <launchSecurity supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='sectype'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tdx</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </launchSecurity>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </features>
Nov 29 07:38:40 compute-0 nova_compute[256729]: </domainCapabilities>
Nov 29 07:38:40 compute-0 nova_compute[256729]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:39.988 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 07:38:40 compute-0 nova_compute[256729]: <domainCapabilities>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <domain>kvm</domain>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <arch>i686</arch>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <vcpu max='240'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <iothreads supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <os supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <enum name='firmware'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <loader supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>rom</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pflash</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='readonly'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>yes</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='secure'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </loader>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </os>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <cpu>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='maximumMigratable'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <vendor>AMD</vendor>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='succor'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='custom' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-128'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-256'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-512'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='KnightsMill'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <memoryBacking supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <enum name='sourceType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>anonymous</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>memfd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </memoryBacking>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <disk supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='diskDevice'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>disk</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cdrom</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>floppy</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>lun</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ide</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>fdc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>sata</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <graphics supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vnc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egl-headless</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </graphics>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <video supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='modelType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vga</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cirrus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>none</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>bochs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ramfb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </video>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hostdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='mode'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>subsystem</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='startupPolicy'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>mandatory</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>requisite</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>optional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='subsysType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pci</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='capsType'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='pciBackend'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hostdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <rng supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>random</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <filesystem supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='driverType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>path</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>handle</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtiofs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </filesystem>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <tpm supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-tis</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-crb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emulator</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>external</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendVersion'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>2.0</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </tpm>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <redirdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </redirdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <channel supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </channel>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <crypto supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </crypto>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <interface supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>passt</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <panic supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>isa</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>hyperv</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </panic>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <console supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>null</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dev</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pipe</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stdio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>udp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tcp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu-vdagent</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </console>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <features>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <gic supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <genid supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backup supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <async-teardown supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <ps2 supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sev supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sgx supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hyperv supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='features'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>relaxed</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vapic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>spinlocks</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vpindex</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>runtime</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>synic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stimer</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reset</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vendor_id</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>frequencies</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reenlightenment</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tlbflush</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ipi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>avic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emsr_bitmap</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>xmm_input</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hyperv>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <launchSecurity supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='sectype'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tdx</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </launchSecurity>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </features>
Nov 29 07:38:40 compute-0 nova_compute[256729]: </domainCapabilities>
Nov 29 07:38:40 compute-0 nova_compute[256729]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.022 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.026 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 07:38:40 compute-0 nova_compute[256729]: <domainCapabilities>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <domain>kvm</domain>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <arch>x86_64</arch>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <vcpu max='4096'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <iothreads supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <os supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <enum name='firmware'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>efi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <loader supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>rom</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pflash</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='readonly'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>yes</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='secure'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>yes</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </loader>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </os>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <cpu>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='maximumMigratable'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <vendor>AMD</vendor>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='succor'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='custom' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-128'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-256'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-512'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='KnightsMill'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <memoryBacking supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <enum name='sourceType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>anonymous</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>memfd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </memoryBacking>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <disk supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='diskDevice'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>disk</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cdrom</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>floppy</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>lun</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>fdc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>sata</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <graphics supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vnc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egl-headless</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </graphics>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <video supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='modelType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vga</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cirrus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>none</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>bochs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ramfb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </video>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hostdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='mode'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>subsystem</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='startupPolicy'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>mandatory</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>requisite</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>optional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='subsysType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pci</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='capsType'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='pciBackend'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hostdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <rng supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>random</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <filesystem supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='driverType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>path</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>handle</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtiofs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </filesystem>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <tpm supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-tis</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-crb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emulator</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>external</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendVersion'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>2.0</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </tpm>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <redirdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </redirdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <channel supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </channel>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <crypto supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </crypto>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <interface supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>passt</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <panic supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>isa</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>hyperv</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </panic>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <console supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>null</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dev</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pipe</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stdio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>udp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tcp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu-vdagent</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </console>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <features>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <gic supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <genid supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backup supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <async-teardown supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <ps2 supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sev supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sgx supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hyperv supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='features'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>relaxed</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vapic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>spinlocks</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vpindex</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>runtime</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>synic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stimer</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reset</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vendor_id</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>frequencies</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reenlightenment</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tlbflush</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ipi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>avic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emsr_bitmap</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>xmm_input</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hyperv>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <launchSecurity supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='sectype'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tdx</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </launchSecurity>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </features>
Nov 29 07:38:40 compute-0 nova_compute[256729]: </domainCapabilities>
Nov 29 07:38:40 compute-0 nova_compute[256729]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
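The domainCapabilities document that nova_compute dumps above enumerates each named CPU model with a usable flag and, for models the host cannot provide, a <blockers> list naming the missing host features. Below is a minimal standalone sketch of how such a document can be inspected. The element layout follows the XML in the log, but the helper name summarize_cpu_models and the idea of feeding it XML saved from `virsh domcapabilities` are illustrative assumptions, not Nova's actual code (Nova's own handling lives in nova/virt/libvirt/host.py, as the debug line notes).

# Hypothetical helper, not part of Nova: summarize which named CPU models in a
# libvirt <domainCapabilities> document are usable on this host, and which
# missing host features block the rest. The XML layout matches the dump above:
#   <cpu><mode name='custom'>
#     <model usable='yes|no' ...>NAME</model>
#     <blockers model='NAME'><feature name='...'/></blockers>
#   </mode></cpu>
import sys
import xml.etree.ElementTree as ET

def summarize_cpu_models(domcaps_xml: str) -> dict:
    root = ET.fromstring(domcaps_xml)
    summary = {"usable": [], "blocked": {}}
    mode = root.find("./cpu/mode[@name='custom']")
    if mode is None:
        return summary
    # Collect the <blockers> elements first, keyed by model name.
    blockers = {
        b.get("model"): [f.get("name") for f in b.findall("feature")]
        for b in mode.findall("blockers")
    }
    for model in mode.findall("model"):
        name = model.text
        if model.get("usable") == "yes":
            summary["usable"].append(name)
        else:
            summary["blocked"][name] = blockers.get(name, [])
    return summary

if __name__ == "__main__":
    # Feed it domainCapabilities XML on stdin, e.g. saved from
    # `virsh domcapabilities` on the compute host.
    result = summarize_cpu_models(sys.stdin.read())
    print("usable:", ", ".join(result["usable"]))
    for name, feats in result["blocked"].items():
        print(f"blocked: {name}: missing {', '.join(feats) or '(unspecified)'}")

Run against the dump above, a sketch like this would report, for example, Westmere as usable and Snowridge as blocked by missing host features such as cldemote and erms, matching the <blockers> entries logged by nova_compute.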
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.126 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 07:38:40 compute-0 nova_compute[256729]: <domainCapabilities>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <domain>kvm</domain>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <arch>x86_64</arch>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <vcpu max='240'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <iothreads supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <os supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <enum name='firmware'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <loader supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>rom</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pflash</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='readonly'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>yes</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='secure'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>no</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </loader>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </os>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <cpu>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='host-passthrough' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='hostPassthroughMigratable'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='maximum' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='maximumMigratable'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>on</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>off</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='host-model' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <vendor>AMD</vendor>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='x2apic'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='hypervisor'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='stibp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='overflow-recov'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='succor'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='lbrv'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='tsc-scale'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='flushbyasid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='pause-filter'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='pfthreshold'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <feature policy='disable' name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <mode name='custom' supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Broadwell-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Cooperlake-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Denverton-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Dhyana-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='auto-ibrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Milan-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amd-psfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='no-nested-data-bp'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='null-sel-clr-base'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='stibp-always-on'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-Rome-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='EPYC-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='GraniteRapids-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-128'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-256'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx10-512'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='prefetchiti'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Haswell-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v6'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Icelake-Server-v7'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='IvyBridge-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='KnightsMill'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='KnightsMill-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4fmaps'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-4vnniw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512er'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512pf'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G4-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Opteron_G5-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fma4'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tbm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xop'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SapphireRapids-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='amx-tile'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-bf16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-fp16'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512-vpopcntdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bitalg'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vbmi2'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrc'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fzrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='la57'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='taa-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='tsx-ldtrk'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xfd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='SierraForest-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ifma'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-ne-convert'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx-vnni-int8'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='bus-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cmpccxadd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fbsdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='fsrs'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ibrs-all'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mcdt-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pbrsb-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='psdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='sbdr-ssdp-no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='serialize'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vaes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='vpclmulqdq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Client-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='hle'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='rtm'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Skylake-Server-v5'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512bw'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512cd'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512dq'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512f'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='avx512vl'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='invpcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pcid'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='pku'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='mpx'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v2'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v3'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='core-capability'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='split-lock-detect'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='Snowridge-v4'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='cldemote'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='erms'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='gfni'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdir64b'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='movdiri'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='xsaves'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='athlon-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='core2duo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='coreduo-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='n270-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='ss'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <blockers model='phenom-v1'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnow'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <feature name='3dnowext'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </blockers>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </mode>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <memoryBacking supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <enum name='sourceType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>anonymous</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <value>memfd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </memoryBacking>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <disk supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='diskDevice'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>disk</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cdrom</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>floppy</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>lun</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ide</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>fdc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>sata</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <graphics supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vnc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egl-headless</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </graphics>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <video supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='modelType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vga</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>cirrus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>none</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>bochs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ramfb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </video>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hostdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='mode'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>subsystem</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='startupPolicy'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>mandatory</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>requisite</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>optional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='subsysType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pci</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>scsi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='capsType'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='pciBackend'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hostdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <rng supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtio-non-transitional</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>random</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>egd</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <filesystem supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='driverType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>path</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>handle</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>virtiofs</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </filesystem>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <tpm supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-tis</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tpm-crb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emulator</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>external</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendVersion'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>2.0</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </tpm>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <redirdev supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='bus'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>usb</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </redirdev>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <channel supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </channel>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <crypto supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendModel'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>builtin</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </crypto>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <interface supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='backendType'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>default</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>passt</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <panic supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='model'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>isa</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>hyperv</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </panic>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <console supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='type'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>null</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vc</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pty</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dev</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>file</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>pipe</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stdio</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>udp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tcp</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>unix</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>qemu-vdagent</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>dbus</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </console>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   <features>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <gic supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <vmcoreinfo supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <genid supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backingStoreInput supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <backup supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <async-teardown supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <ps2 supported='yes'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sev supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <sgx supported='no'/>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <hyperv supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='features'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>relaxed</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vapic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>spinlocks</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vpindex</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>runtime</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>synic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>stimer</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reset</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>vendor_id</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>frequencies</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>reenlightenment</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tlbflush</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>ipi</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>avic</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>emsr_bitmap</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>xmm_input</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <spinlocks>4095</spinlocks>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <stimer_direct>on</stimer_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </defaults>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </hyperv>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     <launchSecurity supported='yes'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       <enum name='sectype'>
Nov 29 07:38:40 compute-0 nova_compute[256729]:         <value>tdx</value>
Nov 29 07:38:40 compute-0 nova_compute[256729]:       </enum>
Nov 29 07:38:40 compute-0 nova_compute[256729]:     </launchSecurity>
Nov 29 07:38:40 compute-0 nova_compute[256729]:   </features>
Nov 29 07:38:40 compute-0 nova_compute[256729]: </domainCapabilities>
Nov 29 07:38:40 compute-0 nova_compute[256729]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
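The domainCapabilities document dumped above ends here; Nova caches it and consults the <model usable='...'> entries and their <blockers> lists when deciding which guest CPU models this host can run. As a minimal sketch (not Nova's actual code path), the usability data can be pulled out of a saved copy of the XML with Python's standard ElementTree; the filename domcaps.xml is an assumption:

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()
    # Named CPU models live under <cpu><mode name='custom'>; each unusable
    # model has a sibling <blockers model='...'> naming the features the
    # host cannot provide (avx512f, hle, pku and so on in the log above).
    for mode in root.iter('mode'):
        if mode.get('name') != 'custom':
            continue
        for model in mode.findall('model'):
            if model.get('usable') == 'yes':
                print(f"{model.text}: usable")
            else:
                blockers = mode.find(f"blockers[@model='{model.text}']")
                missing = [] if blockers is None else [
                    f.get('name') for f in blockers.findall('feature')]
                print(f"{model.text}: blocked by "
                      f"{', '.join(missing) or 'unlisted features'}")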
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.199 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.199 256736 INFO nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Secure Boot support detected
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.202 256736 INFO nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.215 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.236 256736 INFO nova.virt.node [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Determined node identity ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from /var/lib/nova/compute_id
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.257 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Verified node ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.281 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 07:38:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:40 compute-0 sshd-session[225540]: Connection closed by 192.168.122.30 port 58076
Nov 29 07:38:40 compute-0 sshd-session[225537]: pam_unix(sshd:session): session closed for user zuul
Nov 29 07:38:40 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 07:38:40 compute-0 systemd[1]: session-49.scope: Consumed 2min 36.260s CPU time.
Nov 29 07:38:40 compute-0 systemd-logind[807]: Session 49 logged out. Waiting for processes to exit.
Nov 29 07:38:40 compute-0 systemd-logind[807]: Removed session 49.
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.362 256736 DEBUG oslo_concurrency.lockutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.362 256736 DEBUG oslo_concurrency.lockutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.363 256736 DEBUG oslo_concurrency.lockutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
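The three lockutils lines above show the resource tracker serializing on the process-local "compute_resources" semaphore around clean_compute_node_cache. A minimal sketch of the same pattern with the real oslo_concurrency decorator follows; the function body is a placeholder, not Nova's implementation:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs while the 'compute_resources' semaphore is held; lockutils
        # itself emits the acquire/release DEBUG lines seen above.
        pass

    clean_compute_node_cache()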
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.363 256736 DEBUG nova.compute.resource_tracker [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.363 256736 DEBUG oslo_concurrency.processutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:40 compute-0 musing_bohr[257060]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:38:40 compute-0 musing_bohr[257060]: --> relative data size: 1.0
Nov 29 07:38:40 compute-0 musing_bohr[257060]: --> All data devices are unavailable
Nov 29 07:38:40 compute-0 systemd[1]: libpod-d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb.scope: Deactivated successfully.
Nov 29 07:38:40 compute-0 systemd[1]: libpod-d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb.scope: Consumed 1.050s CPU time.
Nov 29 07:38:40 compute-0 podman[257039]: 2025-11-29 07:38:40.686123827 +0000 UTC m=+1.327170837 container died d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dfd059a7f02f9eb61033a48f8ea4852aeb7436ecfd95359e3e35c69c1f4c5bc-merged.mount: Deactivated successfully.
Nov 29 07:38:40 compute-0 podman[257039]: 2025-11-29 07:38:40.75967117 +0000 UTC m=+1.400718190 container remove d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:38:40 compute-0 systemd[1]: libpod-conmon-d874ea2f281be2f186d88290f2d940156d30af526ff6d85ed91b4d092a654dfb.scope: Deactivated successfully.
Nov 29 07:38:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:38:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1148805391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:40 compute-0 sudo[256640]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.794 256736 DEBUG oslo_concurrency.processutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
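The "Running cmd"/"returned: 0" pair above brackets the ceph df call the resource tracker makes, via oslo_concurrency.processutils, to size Ceph-backed storage. A minimal sketch of the same call, assuming the ceph CLI, the openstack keyring and /etc/ceph/ceph.conf from the log are all present; the JSON handling is illustrative rather than Nova's exact code:

    import json

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises on a non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # ceph df reports cluster-wide totals under the 'stats' key.
    print(stats['stats']['total_avail_bytes'])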
Nov 29 07:38:40 compute-0 sudo[257213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:40 compute-0 sudo[257213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:40 compute-0 sudo[257213]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:40 compute-0 sudo[257238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:38:40 compute-0 sudo[257238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:40 compute-0 sudo[257238]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.957 256736 WARNING nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.959 256736 DEBUG nova.compute.resource_tracker [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5194MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
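Each entry in the pci_devices list above is a flat dict keyed by dev_id, address, vendor_id, product_id, numa_node, label and dev_type. A small sketch of filtering such a list by vendor (1af4 is the Red Hat/virtio vendor ID), using two entries copied, trimmed, from the log line:

    pci_devices = [
        {'address': '0000:00:02.0', 'vendor_id': '1af4', 'product_id': '1050'},
        {'address': '0000:00:01.1', 'vendor_id': '8086', 'product_id': '7010'},
    ]
    virtio = [d['address'] for d in pci_devices if d['vendor_id'] == '1af4']
    print(virtio)  # ['0000:00:02.0']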
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.959 256736 DEBUG oslo_concurrency.lockutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:40 compute-0 nova_compute[256729]: 2025-11-29 07:38:40.959 256736 DEBUG oslo_concurrency.lockutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:40 compute-0 sudo[257263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:40 compute-0 sudo[257263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:40 compute-0 sudo[257263]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:41 compute-0 sudo[257288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:38:41 compute-0 sudo[257288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.082 256736 DEBUG nova.compute.resource_tracker [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.082 256736 DEBUG nova.compute.resource_tracker [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.139 256736 DEBUG nova.scheduler.client.report [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.163 256736 DEBUG nova.scheduler.client.report [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.163 256736 DEBUG nova.compute.provider_tree [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.179 256736 DEBUG nova.scheduler.client.report [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.205 256736 DEBUG nova.scheduler.client.report [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.222 256736 DEBUG oslo_concurrency.processutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:38:41 compute-0 ceph-mon[75050]: pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1148805391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:41 compute-0 podman[257358]: 2025-11-29 07:38:41.442601218 +0000 UTC m=+0.068445992 container create 2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:38:41 compute-0 systemd[1]: Started libpod-conmon-2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12.scope.
Nov 29 07:38:41 compute-0 podman[257358]: 2025-11-29 07:38:41.397165655 +0000 UTC m=+0.023010459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:38:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:41 compute-0 podman[257358]: 2025-11-29 07:38:41.527521642 +0000 UTC m=+0.153366436 container init 2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kalam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:38:41 compute-0 podman[257358]: 2025-11-29 07:38:41.533858214 +0000 UTC m=+0.159702988 container start 2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:38:41 compute-0 adoring_kalam[257390]: 167 167
Nov 29 07:38:41 compute-0 systemd[1]: libpod-2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12.scope: Deactivated successfully.
Nov 29 07:38:41 compute-0 conmon[257390]: conmon 2ff4edc9b6342a25629a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12.scope/container/memory.events
Nov 29 07:38:41 compute-0 podman[257358]: 2025-11-29 07:38:41.538796461 +0000 UTC m=+0.164641235 container attach 2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kalam, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:38:41 compute-0 podman[257358]: 2025-11-29 07:38:41.540230207 +0000 UTC m=+0.166074981 container died 2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kalam, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-42ea6f3da61aa4c6ec8c90f33b5e681b1ed60a4d441f30e1249169f2bf7858ed-merged.mount: Deactivated successfully.
Nov 29 07:38:41 compute-0 podman[257358]: 2025-11-29 07:38:41.578609639 +0000 UTC m=+0.204454423 container remove 2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:38:41 compute-0 systemd[1]: libpod-conmon-2ff4edc9b6342a25629a86cbf0ec57768bd9a188941081b4bd0b294e63a7bc12.scope: Deactivated successfully.
Nov 29 07:38:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:38:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773199480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.673 256736 DEBUG oslo_concurrency.processutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.680 256736 DEBUG nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.681 256736 INFO nova.virt.libvirt.host [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] kernel doesn't support AMD SEV
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.682 256736 DEBUG nova.compute.provider_tree [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.683 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.710 256736 DEBUG nova.scheduler.client.report [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.732 256736 DEBUG nova.compute.resource_tracker [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.732 256736 DEBUG oslo_concurrency.lockutils [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.732 256736 DEBUG nova.service [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 29 07:38:41 compute-0 podman[257416]: 2025-11-29 07:38:41.753452094 +0000 UTC m=+0.049796325 container create 48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.767 256736 DEBUG nova.service [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 29 07:38:41 compute-0 nova_compute[256729]: 2025-11-29 07:38:41.768 256736 DEBUG nova.servicegroup.drivers.db [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 29 07:38:41 compute-0 systemd[1]: Started libpod-conmon-48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7.scope.
Nov 29 07:38:41 compute-0 podman[257416]: 2025-11-29 07:38:41.724927354 +0000 UTC m=+0.021271605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:38:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7067a22cd0ceaf78e3813516c9f34776a4890333a385a41189f382f22593b8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7067a22cd0ceaf78e3813516c9f34776a4890333a385a41189f382f22593b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7067a22cd0ceaf78e3813516c9f34776a4890333a385a41189f382f22593b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7067a22cd0ceaf78e3813516c9f34776a4890333a385a41189f382f22593b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:41 compute-0 podman[257416]: 2025-11-29 07:38:41.848375033 +0000 UTC m=+0.144719284 container init 48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 29 07:38:41 compute-0 podman[257416]: 2025-11-29 07:38:41.854767427 +0000 UTC m=+0.151111658 container start 48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:38:41 compute-0 podman[257416]: 2025-11-29 07:38:41.86112427 +0000 UTC m=+0.157468521 container attach 48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 07:38:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/773199480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:38:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:42 compute-0 modest_margulis[257433]: {
Nov 29 07:38:42 compute-0 modest_margulis[257433]:     "0": [
Nov 29 07:38:42 compute-0 modest_margulis[257433]:         {
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "devices": [
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "/dev/loop3"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             ],
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_name": "ceph_lv0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_size": "21470642176",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "name": "ceph_lv0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "tags": {
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cluster_name": "ceph",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.crush_device_class": "",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.encrypted": "0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osd_id": "0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.type": "block",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.vdo": "0"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             },
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "type": "block",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "vg_name": "ceph_vg0"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:         }
Nov 29 07:38:42 compute-0 modest_margulis[257433]:     ],
Nov 29 07:38:42 compute-0 modest_margulis[257433]:     "1": [
Nov 29 07:38:42 compute-0 modest_margulis[257433]:         {
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "devices": [
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "/dev/loop4"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             ],
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_name": "ceph_lv1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_size": "21470642176",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "name": "ceph_lv1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "tags": {
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cluster_name": "ceph",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.crush_device_class": "",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.encrypted": "0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osd_id": "1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.type": "block",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.vdo": "0"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             },
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "type": "block",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "vg_name": "ceph_vg1"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:         }
Nov 29 07:38:42 compute-0 modest_margulis[257433]:     ],
Nov 29 07:38:42 compute-0 modest_margulis[257433]:     "2": [
Nov 29 07:38:42 compute-0 modest_margulis[257433]:         {
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "devices": [
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "/dev/loop5"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             ],
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_name": "ceph_lv2",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_size": "21470642176",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "name": "ceph_lv2",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "tags": {
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.cluster_name": "ceph",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.crush_device_class": "",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.encrypted": "0",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osd_id": "2",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.type": "block",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:                 "ceph.vdo": "0"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             },
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "type": "block",
Nov 29 07:38:42 compute-0 modest_margulis[257433]:             "vg_name": "ceph_vg2"
Nov 29 07:38:42 compute-0 modest_margulis[257433]:         }
Nov 29 07:38:42 compute-0 modest_margulis[257433]:     ]
Nov 29 07:38:42 compute-0 modest_margulis[257433]: }
Nov 29 07:38:42 compute-0 systemd[1]: libpod-48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7.scope: Deactivated successfully.
Nov 29 07:38:42 compute-0 podman[257416]: 2025-11-29 07:38:42.730650504 +0000 UTC m=+1.026994735 container died 48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7067a22cd0ceaf78e3813516c9f34776a4890333a385a41189f382f22593b8e-merged.mount: Deactivated successfully.
Nov 29 07:38:42 compute-0 podman[257416]: 2025-11-29 07:38:42.790840704 +0000 UTC m=+1.087184935 container remove 48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:38:42 compute-0 systemd[1]: libpod-conmon-48abe503264c612d9cc2af1de5d186b0b8f47095c8a52f6094bb5c4c53f2b0a7.scope: Deactivated successfully.
Nov 29 07:38:42 compute-0 sudo[257288]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:42 compute-0 sudo[257453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:42 compute-0 sudo[257453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:42 compute-0 sudo[257453]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:42 compute-0 sudo[257478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:38:42 compute-0 sudo[257478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:42 compute-0 sudo[257478]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:43 compute-0 sudo[257503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:43 compute-0 sudo[257503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:43 compute-0 sudo[257503]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:43 compute-0 sudo[257528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:38:43 compute-0 sudo[257528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:43 compute-0 podman[257593]: 2025-11-29 07:38:43.457419064 +0000 UTC m=+0.034187895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:38:43 compute-0 sshd-session[256999]: Invalid user mysql from 143.14.121.41 port 55596
Nov 29 07:38:44 compute-0 ceph-mon[75050]: pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:44 compute-0 podman[257593]: 2025-11-29 07:38:44.033676532 +0000 UTC m=+0.610445393 container create e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:38:44 compute-0 systemd[1]: Started libpod-conmon-e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f.scope.
Nov 29 07:38:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:44 compute-0 podman[257593]: 2025-11-29 07:38:44.346599271 +0000 UTC m=+0.923368112 container init e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:38:44 compute-0 podman[257593]: 2025-11-29 07:38:44.354419141 +0000 UTC m=+0.931187992 container start e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:38:44 compute-0 jovial_zhukovsky[257609]: 167 167
Nov 29 07:38:44 compute-0 systemd[1]: libpod-e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f.scope: Deactivated successfully.
Nov 29 07:38:44 compute-0 podman[257593]: 2025-11-29 07:38:44.421668573 +0000 UTC m=+0.998437444 container attach e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:38:44 compute-0 podman[257593]: 2025-11-29 07:38:44.42275108 +0000 UTC m=+0.999519941 container died e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:38:44 compute-0 sshd-session[256999]: Connection closed by invalid user mysql 143.14.121.41 port 55596 [preauth]
Nov 29 07:38:45 compute-0 sshd-session[257612]: Invalid user mc from 143.14.121.41 port 45436
Nov 29 07:38:45 compute-0 ceph-mon[75050]: pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cab42ae08c68d4860eabe80e008f4098ee0a3391179a52da6971678c3b7499e-merged.mount: Deactivated successfully.
Nov 29 07:38:45 compute-0 podman[257593]: 2025-11-29 07:38:45.989519689 +0000 UTC m=+2.566288520 container remove e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:38:46 compute-0 systemd[1]: libpod-conmon-e756ec82b19dc83a6bd372bbb222c20591bfe4cce61c234fb5862be0f6d4844f.scope: Deactivated successfully.
Nov 29 07:38:46 compute-0 sshd-session[257612]: Connection closed by invalid user mc 143.14.121.41 port 45436 [preauth]
Nov 29 07:38:46 compute-0 podman[257635]: 2025-11-29 07:38:46.159857288 +0000 UTC m=+0.047292030 container create dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:38:46 compute-0 systemd[1]: Started libpod-conmon-dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f.scope.
Nov 29 07:38:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3167b6f11220005f7ae2685bdfe33a6fbf7d567f7f553fad72e0329675acc9fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3167b6f11220005f7ae2685bdfe33a6fbf7d567f7f553fad72e0329675acc9fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3167b6f11220005f7ae2685bdfe33a6fbf7d567f7f553fad72e0329675acc9fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3167b6f11220005f7ae2685bdfe33a6fbf7d567f7f553fad72e0329675acc9fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:38:46 compute-0 podman[257635]: 2025-11-29 07:38:46.22869856 +0000 UTC m=+0.116133312 container init dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:38:46 compute-0 podman[257635]: 2025-11-29 07:38:46.140069632 +0000 UTC m=+0.027504394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:38:46 compute-0 podman[257635]: 2025-11-29 07:38:46.236905341 +0000 UTC m=+0.124340083 container start dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:38:46 compute-0 podman[257635]: 2025-11-29 07:38:46.239669451 +0000 UTC m=+0.127104193 container attach dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:38:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:46 compute-0 ceph-mon[75050]: pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:47 compute-0 pensive_hermann[257652]: {
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "osd_id": 2,
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "type": "bluestore"
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:     },
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "osd_id": 1,
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "type": "bluestore"
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:     },
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "osd_id": 0,
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:         "type": "bluestore"
Nov 29 07:38:47 compute-0 pensive_hermann[257652]:     }
Nov 29 07:38:47 compute-0 pensive_hermann[257652]: }
Nov 29 07:38:47 compute-0 systemd[1]: libpod-dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f.scope: Deactivated successfully.
Nov 29 07:38:47 compute-0 podman[257635]: 2025-11-29 07:38:47.177397631 +0000 UTC m=+1.064832373 container died dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3167b6f11220005f7ae2685bdfe33a6fbf7d567f7f553fad72e0329675acc9fa-merged.mount: Deactivated successfully.
Nov 29 07:38:47 compute-0 podman[257635]: 2025-11-29 07:38:47.23595804 +0000 UTC m=+1.123392782 container remove dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hermann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:38:47 compute-0 systemd[1]: libpod-conmon-dd189a59b0f0f78a8ed93f220be23dd035bd00b203756d884f9e180d19d4654f.scope: Deactivated successfully.
Nov 29 07:38:47 compute-0 sudo[257528]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:38:47 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:38:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:38:47 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:38:47 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7b2577b4-3207-43f1-9ff5-a8da3bd524ae does not exist
Nov 29 07:38:47 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b54c64b0-0d91-4253-a120-ca09b0345f57 does not exist
Nov 29 07:38:47 compute-0 sudo[257698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:38:47 compute-0 sudo[257698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:47 compute-0 sudo[257698]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:47 compute-0 sudo[257723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:38:47 compute-0 sudo[257723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:38:47 compute-0 sudo[257723]: pam_unix(sudo:session): session closed for user root
Nov 29 07:38:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:48 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:38:48 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:38:48 compute-0 sshd-session[257748]: Invalid user tester from 143.14.121.41 port 45442
Nov 29 07:38:49 compute-0 sshd-session[257748]: Connection closed by invalid user tester 143.14.121.41 port 45442 [preauth]
Nov 29 07:38:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:50 compute-0 ceph-mon[75050]: pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:52 compute-0 sshd-session[257750]: Invalid user temp from 143.14.121.41 port 45460
Nov 29 07:38:53 compute-0 sshd-session[257750]: Connection closed by invalid user temp 143.14.121.41 port 45460 [preauth]
Nov 29 07:38:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:53 compute-0 ceph-mon[75050]: pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:55 compute-0 ceph-mon[75050]: pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:56 compute-0 sshd-session[257752]: Connection closed by authenticating user root 143.14.121.41 port 49436 [preauth]
Nov 29 07:38:57 compute-0 ceph-mon[75050]: pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:58 compute-0 ceph-mon[75050]: pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:38:59 compute-0 ceph-mon[75050]: pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:38:59 compute-0 sshd-session[257754]: Connection closed by authenticating user root 143.14.121.41 port 49450 [preauth]
Nov 29 07:38:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:38:59.754 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:38:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:38:59.755 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:38:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:38:59.755 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:39:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:01 compute-0 ceph-mon[75050]: pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:02 compute-0 sshd-session[257756]: Connection closed by authenticating user root 143.14.121.41 port 49466 [preauth]
Nov 29 07:39:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:03 compute-0 ceph-mon[75050]: pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:05 compute-0 ceph-mon[75050]: pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:39:05
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'volumes']
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:39:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:39:07 compute-0 ceph-mon[75050]: pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:07 compute-0 sshd-session[257758]: Connection closed by authenticating user root 143.14.121.41 port 46284 [preauth]
Nov 29 07:39:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:08 compute-0 podman[257762]: 2025-11-29 07:39:08.699861169 +0000 UTC m=+0.064426850 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 07:39:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:39:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3554555341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:08 compute-0 podman[257761]: 2025-11-29 07:39:08.703642776 +0000 UTC m=+0.068207967 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:39:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:39:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3554555341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:39:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3449822433' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:39:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3449822433' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:39:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2199408846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:39:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2199408846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3554555341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3554555341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3449822433' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3449822433' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2199408846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2199408846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:39:09 compute-0 podman[257799]: 2025-11-29 07:39:09.768017578 +0000 UTC m=+0.135405677 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:39:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:10 compute-0 nova_compute[256729]: 2025-11-29 07:39:10.770 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:10 compute-0 nova_compute[256729]: 2025-11-29 07:39:10.804 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:10 compute-0 sshd-session[257760]: Connection closed by authenticating user root 143.14.121.41 port 46286 [preauth]
Nov 29 07:39:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:14 compute-0 ceph-mon[75050]: pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:39:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:39:16 compute-0 sshd-session[257826]: Connection closed by authenticating user root 143.14.121.41 port 46288 [preauth]
Nov 29 07:39:16 compute-0 ceph-mon[75050]: pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:16 compute-0 ceph-mon[75050]: pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:17 compute-0 ceph-mon[75050]: pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:19 compute-0 ceph-mon[75050]: pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:20 compute-0 sshd-session[257828]: Connection closed by authenticating user root 143.14.121.41 port 33136 [preauth]
Nov 29 07:39:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:21 compute-0 ceph-mon[75050]: pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:23 compute-0 sshd-session[257830]: Connection closed by authenticating user root 143.14.121.41 port 33138 [preauth]
Nov 29 07:39:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:24 compute-0 ceph-mon[75050]: pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:25 compute-0 ceph-mon[75050]: pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:25 compute-0 sshd-session[257832]: Connection closed by authenticating user root 143.14.121.41 port 51016 [preauth]
Nov 29 07:39:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:27 compute-0 ceph-mon[75050]: pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:28 compute-0 sshd-session[257834]: Connection closed by authenticating user root 143.14.121.41 port 51018 [preauth]
Nov 29 07:39:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:29 compute-0 ceph-mon[75050]: pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:30 compute-0 sshd-session[257836]: Connection closed by authenticating user root 143.14.121.41 port 51030 [preauth]
Nov 29 07:39:31 compute-0 ceph-mon[75050]: pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:33 compute-0 ceph-mon[75050]: pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:34 compute-0 sshd-session[257838]: Invalid user deploy from 143.14.121.41 port 51034
Nov 29 07:39:35 compute-0 ceph-mon[75050]: pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:35 compute-0 sshd-session[257838]: Connection closed by invalid user deploy 143.14.121.41 port 51034 [preauth]
Nov 29 07:39:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:39:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:39:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:37 compute-0 ceph-mon[75050]: pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:37 compute-0 sshd-session[257840]: Connection closed by authenticating user root 143.14.121.41 port 52286 [preauth]
Nov 29 07:39:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.151 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.152 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.153 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.153 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:39:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.174 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.175 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.175 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.176 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.176 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.177 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.177 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.177 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.178 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.204 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.204 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.204 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.205 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.205 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:39:39 compute-0 podman[257865]: 2025-11-29 07:39:39.708737133 +0000 UTC m=+0.067962344 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:39:39 compute-0 podman[257864]: 2025-11-29 07:39:39.724686686 +0000 UTC m=+0.085652851 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 07:39:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:39:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168370988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:39 compute-0 nova_compute[256729]: 2025-11-29 07:39:39.882 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:39:39 compute-0 ceph-mon[75050]: pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.070 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.072 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5209MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.072 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.072 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.148 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.149 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.181 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:39:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:39:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973930334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.647 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.653 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.671 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.672 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:39:40 compute-0 nova_compute[256729]: 2025-11-29 07:39:40.673 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:39:40 compute-0 podman[257923]: 2025-11-29 07:39:40.737947683 +0000 UTC m=+0.106280718 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 07:39:41 compute-0 sshd-session[257842]: Connection closed by authenticating user root 143.14.121.41 port 52290 [preauth]
Nov 29 07:39:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2168370988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:41 compute-0 ceph-mon[75050]: pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3973930334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:39:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:42 compute-0 ceph-mon[75050]: pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:45 compute-0 ceph-mon[75050]: pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:45 compute-0 sshd-session[257950]: Connection closed by authenticating user root 143.14.121.41 port 52300 [preauth]
Nov 29 07:39:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:47 compute-0 sudo[257954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:47 compute-0 sudo[257954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:47 compute-0 sudo[257954]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:47 compute-0 sudo[257979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:39:47 compute-0 sudo[257979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:47 compute-0 sudo[257979]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:47 compute-0 sudo[258004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:47 compute-0 sudo[258004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:47 compute-0 sudo[258004]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:47 compute-0 sudo[258029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:39:47 compute-0 sudo[258029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:48 compute-0 sudo[258029]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:39:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:39:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:39:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:39:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:39:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:48 compute-0 ceph-mon[75050]: pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:39:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev aa11c7f7-067a-46f5-b44c-1e5ffbba4c1e does not exist
Nov 29 07:39:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 216bad8e-0c57-4cca-9b2b-424b78492690 does not exist
Nov 29 07:39:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c4bd8c3d-3a5d-4652-81be-ff5efd64432a does not exist
Nov 29 07:39:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:39:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:39:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:39:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:39:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:39:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:39:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:49 compute-0 sudo[258086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:49 compute-0 sudo[258086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:49 compute-0 sudo[258086]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:49 compute-0 sudo[258111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:39:49 compute-0 sudo[258111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:49 compute-0 sudo[258111]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:49 compute-0 sudo[258136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:49 compute-0 sudo[258136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:49 compute-0 sudo[258136]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:49 compute-0 sudo[258161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:39:49 compute-0 sudo[258161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:39:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:39:50 compute-0 ceph-mon[75050]: pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:39:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:39:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:39:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:39:50 compute-0 podman[258223]: 2025-11-29 07:39:50.384988339 +0000 UTC m=+0.037249008 container create 4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:39:50 compute-0 systemd[1]: Started libpod-conmon-4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b.scope.
Nov 29 07:39:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:50 compute-0 podman[258223]: 2025-11-29 07:39:50.36886314 +0000 UTC m=+0.021123819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:50 compute-0 podman[258223]: 2025-11-29 07:39:50.467833737 +0000 UTC m=+0.120094426 container init 4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 07:39:50 compute-0 podman[258223]: 2025-11-29 07:39:50.474257454 +0000 UTC m=+0.126518123 container start 4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bartik, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:39:50 compute-0 dreamy_bartik[258239]: 167 167
Nov 29 07:39:50 compute-0 systemd[1]: libpod-4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b.scope: Deactivated successfully.
Nov 29 07:39:50 compute-0 conmon[258239]: conmon 4a1c4524c45c06009fbf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b.scope/container/memory.events
Nov 29 07:39:50 compute-0 podman[258223]: 2025-11-29 07:39:50.481105381 +0000 UTC m=+0.133366070 container attach 4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:39:50 compute-0 podman[258223]: 2025-11-29 07:39:50.481581583 +0000 UTC m=+0.133842252 container died 4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 07:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-931d7cd3f6ed262683d8421ecbc15f35d550aa5a5a9e264ef753b7d0df6ab651-merged.mount: Deactivated successfully.
Nov 29 07:39:50 compute-0 podman[258223]: 2025-11-29 07:39:50.532249667 +0000 UTC m=+0.184510346 container remove 4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:39:50 compute-0 systemd[1]: libpod-conmon-4a1c4524c45c06009fbf6b5f54d93f8d36c92ff269fb08043eaa5baf721e6c1b.scope: Deactivated successfully.
Nov 29 07:39:50 compute-0 podman[258263]: 2025-11-29 07:39:50.690910092 +0000 UTC m=+0.045036529 container create 6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:39:50 compute-0 systemd[1]: Started libpod-conmon-6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f.scope.
Nov 29 07:39:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997b2a3d61bcb7955f61fb88b52b2c5e061ddb3846ed7181974a87d8b1f20b48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997b2a3d61bcb7955f61fb88b52b2c5e061ddb3846ed7181974a87d8b1f20b48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997b2a3d61bcb7955f61fb88b52b2c5e061ddb3846ed7181974a87d8b1f20b48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997b2a3d61bcb7955f61fb88b52b2c5e061ddb3846ed7181974a87d8b1f20b48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997b2a3d61bcb7955f61fb88b52b2c5e061ddb3846ed7181974a87d8b1f20b48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:39:50 compute-0 podman[258263]: 2025-11-29 07:39:50.672059433 +0000 UTC m=+0.026185900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:50 compute-0 podman[258263]: 2025-11-29 07:39:50.765932768 +0000 UTC m=+0.120059225 container init 6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:39:50 compute-0 podman[258263]: 2025-11-29 07:39:50.77569746 +0000 UTC m=+0.129823897 container start 6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:39:50 compute-0 podman[258263]: 2025-11-29 07:39:50.778601286 +0000 UTC m=+0.132727743 container attach 6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:39:50 compute-0 sshd-session[257952]: Connection closed by authenticating user root 143.14.121.41 port 50900 [preauth]
Nov 29 07:39:51 compute-0 ceph-mon[75050]: pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:51 compute-0 reverent_feynman[258280]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:39:51 compute-0 reverent_feynman[258280]: --> relative data size: 1.0
Nov 29 07:39:51 compute-0 reverent_feynman[258280]: --> All data devices are unavailable
Nov 29 07:39:51 compute-0 systemd[1]: libpod-6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f.scope: Deactivated successfully.
Nov 29 07:39:51 compute-0 systemd[1]: libpod-6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f.scope: Consumed 1.086s CPU time.
Nov 29 07:39:51 compute-0 podman[258263]: 2025-11-29 07:39:51.928752022 +0000 UTC m=+1.282878469 container died 6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:39:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-997b2a3d61bcb7955f61fb88b52b2c5e061ddb3846ed7181974a87d8b1f20b48-merged.mount: Deactivated successfully.
Nov 29 07:39:52 compute-0 sshd-session[258285]: Connection closed by authenticating user root 143.14.121.41 port 50912 [preauth]
Nov 29 07:39:53 compute-0 podman[258263]: 2025-11-29 07:39:53.556193725 +0000 UTC m=+2.910320162 container remove 6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:39:53 compute-0 systemd[1]: libpod-conmon-6c89aa228e5a749814bc7224aaf5b497548de175acf8a8e548e479805905166f.scope: Deactivated successfully.
Nov 29 07:39:53 compute-0 sudo[258161]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:53 compute-0 sudo[258326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:53 compute-0 sudo[258326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:53 compute-0 sudo[258326]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:53 compute-0 sudo[258351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:39:53 compute-0 sudo[258351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:53 compute-0 sudo[258351]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:53 compute-0 sudo[258376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:39:53 compute-0 sudo[258376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:53 compute-0 sudo[258376]: pam_unix(sudo:session): session closed for user root
Nov 29 07:39:53 compute-0 sudo[258401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:39:53 compute-0 sudo[258401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:39:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:39:54 compute-0 podman[258465]: 2025-11-29 07:39:54.12248511 +0000 UTC m=+0.021720865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:39:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:54 compute-0 ceph-mon[75050]: pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:55 compute-0 podman[258465]: 2025-11-29 07:39:55.373843529 +0000 UTC m=+1.273079264 container create 8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:39:55 compute-0 systemd[1]: Started libpod-conmon-8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646.scope.
Nov 29 07:39:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:39:55 compute-0 ceph-mon[75050]: pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:55 compute-0 podman[258465]: 2025-11-29 07:39:55.739366179 +0000 UTC m=+1.638601944 container init 8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:39:55 compute-0 podman[258465]: 2025-11-29 07:39:55.747981622 +0000 UTC m=+1.647217367 container start 8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:39:55 compute-0 systemd[1]: libpod-8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646.scope: Deactivated successfully.
Nov 29 07:39:55 compute-0 charming_mccarthy[258482]: 167 167
Nov 29 07:39:55 compute-0 conmon[258482]: conmon 8c89fc01f2f5b78b5e38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646.scope/container/memory.events
Nov 29 07:39:55 compute-0 podman[258465]: 2025-11-29 07:39:55.886319589 +0000 UTC m=+1.785555344 container attach 8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:39:55 compute-0 podman[258465]: 2025-11-29 07:39:55.886990207 +0000 UTC m=+1.786225942 container died 8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:39:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:57 compute-0 sshd-session[258324]: Connection closed by authenticating user root 143.14.121.41 port 56468 [preauth]
Nov 29 07:39:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:39:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:39:59.755 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:39:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:39:59.756 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:39:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:39:59.757 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:40:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:00 compute-0 ceph-mds[102316]: mds.beacon.cephfs.compute-0.bdhrqf missed beacon ack from the monitors
Nov 29 07:40:00 compute-0 sshd-session[258499]: Connection closed by authenticating user root 143.14.121.41 port 56474 [preauth]
Nov 29 07:40:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd09013ae56ab442e32a99e2c9ccfab6fd5d1ae40958ded2ec7fca9ba2d601d6-merged.mount: Deactivated successfully.
Nov 29 07:40:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:04 compute-0 ceph-mon[75050]: pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:04 compute-0 sshd-session[258501]: Connection closed by authenticating user root 143.14.121.41 port 56480 [preauth]
Nov 29 07:40:05 compute-0 podman[258465]: 2025-11-29 07:40:05.482640681 +0000 UTC m=+11.381876456 container remove 8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:40:05
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', '.mgr']
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:05 compute-0 systemd[1]: libpod-conmon-8c89fc01f2f5b78b5e381607148786376a55f4c31e4979761a560b1e7e18c646.scope: Deactivated successfully.
Nov 29 07:40:05 compute-0 podman[258512]: 2025-11-29 07:40:05.720476969 +0000 UTC m=+0.039577017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:06 compute-0 sshd-session[258503]: Invalid user odoo from 143.14.121.41 port 38466
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:40:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:40:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:06 compute-0 sshd-session[258503]: Connection closed by invalid user odoo 143.14.121.41 port 38466 [preauth]
Nov 29 07:40:08 compute-0 podman[258512]: 2025-11-29 07:40:08.272713394 +0000 UTC m=+2.591813382 container create 8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_beaver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:40:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:40:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3925295884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:40:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:40:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3925295884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:40:09 compute-0 systemd[1]: Started libpod-conmon-8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41.scope.
Nov 29 07:40:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffbb91922c997553c15de0f60fbdf9fb716e7ca06f22d276e6abc4c6b12c496/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffbb91922c997553c15de0f60fbdf9fb716e7ca06f22d276e6abc4c6b12c496/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffbb91922c997553c15de0f60fbdf9fb716e7ca06f22d276e6abc4c6b12c496/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffbb91922c997553c15de0f60fbdf9fb716e7ca06f22d276e6abc4c6b12c496/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:10 compute-0 ceph-mon[75050]: pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:10 compute-0 ceph-mon[75050]: pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:10 compute-0 ceph-mon[75050]: pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:11 compute-0 podman[258512]: 2025-11-29 07:40:11.024481742 +0000 UTC m=+5.343581780 container init 8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:40:11 compute-0 podman[258512]: 2025-11-29 07:40:11.032911101 +0000 UTC m=+5.352011089 container start 8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:40:11 compute-0 sshd-session[258526]: Invalid user server from 143.14.121.41 port 38476
Nov 29 07:40:11 compute-0 magical_beaver[258530]: {
Nov 29 07:40:11 compute-0 magical_beaver[258530]:     "0": [
Nov 29 07:40:11 compute-0 magical_beaver[258530]:         {
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "devices": [
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "/dev/loop3"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             ],
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_name": "ceph_lv0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_size": "21470642176",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "name": "ceph_lv0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "tags": {
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cluster_name": "ceph",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.crush_device_class": "",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.encrypted": "0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osd_id": "0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.type": "block",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.vdo": "0"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             },
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "type": "block",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "vg_name": "ceph_vg0"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:         }
Nov 29 07:40:11 compute-0 magical_beaver[258530]:     ],
Nov 29 07:40:11 compute-0 magical_beaver[258530]:     "1": [
Nov 29 07:40:11 compute-0 magical_beaver[258530]:         {
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "devices": [
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "/dev/loop4"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             ],
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_name": "ceph_lv1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_size": "21470642176",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "name": "ceph_lv1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "tags": {
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cluster_name": "ceph",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.crush_device_class": "",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.encrypted": "0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osd_id": "1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.type": "block",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.vdo": "0"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             },
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "type": "block",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "vg_name": "ceph_vg1"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:         }
Nov 29 07:40:11 compute-0 magical_beaver[258530]:     ],
Nov 29 07:40:11 compute-0 magical_beaver[258530]:     "2": [
Nov 29 07:40:11 compute-0 magical_beaver[258530]:         {
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "devices": [
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "/dev/loop5"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             ],
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_name": "ceph_lv2",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_size": "21470642176",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "name": "ceph_lv2",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "tags": {
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.cluster_name": "ceph",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.crush_device_class": "",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.encrypted": "0",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osd_id": "2",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.type": "block",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:                 "ceph.vdo": "0"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             },
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "type": "block",
Nov 29 07:40:11 compute-0 magical_beaver[258530]:             "vg_name": "ceph_vg2"
Nov 29 07:40:11 compute-0 magical_beaver[258530]:         }
Nov 29 07:40:11 compute-0 magical_beaver[258530]:     ]
Nov 29 07:40:11 compute-0 magical_beaver[258530]: }
Nov 29 07:40:11 compute-0 systemd[1]: libpod-8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41.scope: Deactivated successfully.
Nov 29 07:40:11 compute-0 sshd-session[258526]: Connection closed by invalid user server 143.14.121.41 port 38476 [preauth]
Nov 29 07:40:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:14 compute-0 sshd-session[258596]: Connection closed by authenticating user root 143.14.121.41 port 38336 [preauth]
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:40:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:40:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 07:40:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4081125442' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:40:16 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:40:16 compute-0 ceph-mgr[75345]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:40:16 compute-0 ceph-mgr[75345]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:40:16 compute-0 podman[258512]: 2025-11-29 07:40:16.313506048 +0000 UTC m=+10.632606036 container attach 8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:40:16 compute-0 podman[258512]: 2025-11-29 07:40:16.315750186 +0000 UTC m=+10.634850174 container died 8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_beaver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:40:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:17 compute-0 sshd-session[258598]: Connection closed by authenticating user root 143.14.121.41 port 38350 [preauth]
Nov 29 07:40:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:19 compute-0 sshd-session[258600]: Connection closed by authenticating user root 143.14.121.41 port 38360 [preauth]
Nov 29 07:40:19 compute-0 ceph-mon[75050]: pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:19 compute-0 ceph-mon[75050]: pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:19 compute-0 ceph-mon[75050]: pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3925295884' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:40:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3925295884' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:40:19 compute-0 ceph-mon[75050]: pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:19 compute-0 ceph-mon[75050]: pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:19 compute-0 ceph-mon[75050]: pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:20 compute-0 podman[258531]: 2025-11-29 07:40:20.232427614 +0000 UTC m=+10.228914359 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 07:40:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:22 compute-0 sshd-session[258602]: Connection closed by authenticating user root 143.14.121.41 port 38366 [preauth]
Nov 29 07:40:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:24 compute-0 podman[258533]: 2025-11-29 07:40:24.446007891 +0000 UTC m=+14.442215529 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:40:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fffbb91922c997553c15de0f60fbdf9fb716e7ca06f22d276e6abc4c6b12c496-merged.mount: Deactivated successfully.
Nov 29 07:40:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4081125442' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:40:25 compute-0 ceph-mon[75050]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:40:25 compute-0 ceph-mon[75050]: pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:25 compute-0 ceph-mon[75050]: pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:25 compute-0 ceph-mon[75050]: pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:25 compute-0 sshd-session[258605]: Invalid user kafka from 143.14.121.41 port 59908
Nov 29 07:40:25 compute-0 sshd-session[258605]: Connection closed by invalid user kafka 143.14.121.41 port 59908 [preauth]
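[annotation] The sshd-session lines from 143.14.121.41 (root above, kafka here, and git, dolphinscheduler, docker, test further down) are a routine credential-scanning pattern: each probe tries a guessed username and disconnects preauth. A minimal sketch for tallying such probes per source address from a saved journal export; the file path and unit name are placeholders, not taken from this system:

    # Count '[preauth]' disconnects per source IP in a saved journal export.
    # /tmp/journal.txt is a hypothetical dump, e.g. `journalctl > /tmp/journal.txt`.
    import re
    from collections import Counter

    pattern = re.compile(r"Connection closed by .*?(\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]")
    hits = Counter()
    with open("/tmp/journal.txt") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                hits[m.group(1)] += 1
    print(hits.most_common(5))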
Nov 29 07:40:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:27 compute-0 ceph-mon[75050]: pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:27 compute-0 ceph-mon[75050]: pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:27 compute-0 podman[258512]: 2025-11-29 07:40:27.671098385 +0000 UTC m=+21.990198373 container remove 8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_beaver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:40:27 compute-0 sudo[258401]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:27 compute-0 podman[258570]: 2025-11-29 07:40:27.714904271 +0000 UTC m=+16.165093656 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:40:27 compute-0 systemd[1]: libpod-conmon-8cd61bf96d9ddf3c06663e381aa875d2102d726b19676859b90cc19cb0d4ad41.scope: Deactivated successfully.
Nov 29 07:40:27 compute-0 sudo[258626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:27 compute-0 sudo[258626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:27 compute-0 sudo[258626]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:27 compute-0 sudo[258651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:40:27 compute-0 sudo[258651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:27 compute-0 sudo[258651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:27 compute-0 sudo[258676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:27 compute-0 sudo[258676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:27 compute-0 sudo[258676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:28 compute-0 sudo[258701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:40:28 compute-0 sudo[258701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
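[annotation] The sudo COMMAND above shows how cephadm gathers device inventory: its bundled copy under /var/lib/ceph is run with --image and --timeout, and it in turn launches a one-shot ceph container (xenodochial_bell below, then magical_ishizaka) to execute `ceph-volume raw list --format json`. A sketch of the same invocation from Python, with every argument copied from the logged command rather than invented (root via sudo is required, as in the log):

    # Re-run the exact inventory command captured in the sudo log line above.
    import subprocess

    cmd = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "14ff1f30-5059-58f1-9a23-69871bb275a1",
        "--", "raw", "list", "--format", "json",
    ]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)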
Nov 29 07:40:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:28 compute-0 podman[258769]: 2025-11-29 07:40:28.480520764 +0000 UTC m=+0.033674354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:29 compute-0 podman[258769]: 2025-11-29 07:40:29.017110851 +0000 UTC m=+0.570264391 container create 4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:40:29 compute-0 sshd-session[258610]: Invalid user git from 143.14.121.41 port 59910
Nov 29 07:40:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:29 compute-0 sshd-session[258610]: Connection closed by invalid user git 143.14.121.41 port 59910 [preauth]
Nov 29 07:40:29 compute-0 systemd[1]: Started libpod-conmon-4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08.scope.
Nov 29 07:40:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:30 compute-0 ceph-mon[75050]: pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:30 compute-0 podman[258769]: 2025-11-29 07:40:30.061273508 +0000 UTC m=+1.614427098 container init 4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:40:30 compute-0 podman[258769]: 2025-11-29 07:40:30.07677699 +0000 UTC m=+1.629930490 container start 4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:40:30 compute-0 xenodochial_bell[258785]: 167 167
Nov 29 07:40:30 compute-0 systemd[1]: libpod-4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08.scope: Deactivated successfully.
Nov 29 07:40:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:30 compute-0 podman[258769]: 2025-11-29 07:40:30.569543568 +0000 UTC m=+2.122697108 container attach 4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:40:30 compute-0 podman[258769]: 2025-11-29 07:40:30.570140953 +0000 UTC m=+2.123294493 container died 4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:40:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:32 compute-0 sshd-session[258788]: Invalid user dolphinscheduler from 143.14.121.41 port 59914
Nov 29 07:40:33 compute-0 sshd-session[258788]: Connection closed by invalid user dolphinscheduler 143.14.121.41 port 59914 [preauth]
Nov 29 07:40:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:35 compute-0 ceph-mon[75050]: pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:35 compute-0 ceph-mon[75050]: pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-339fcb135c1b86d7cf94e5cc853badbb3e541cd15a320fe3052713c7f99eff2c-merged.mount: Deactivated successfully.
Nov 29 07:40:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:40:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:40:36 compute-0 sshd-session[258804]: Invalid user docker from 143.14.121.41 port 53686
Nov 29 07:40:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:36 compute-0 sshd-session[258804]: Connection closed by invalid user docker 143.14.121.41 port 53686 [preauth]
Nov 29 07:40:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 07:40:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/184253275' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:40:37 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.14359 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:40:37 compute-0 ceph-mgr[75345]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:40:37 compute-0 ceph-mgr[75345]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 07:40:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:39 compute-0 sshd-session[258807]: Invalid user test from 143.14.121.41 port 53688
Nov 29 07:40:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:40 compute-0 ceph-mon[75050]: pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:40 compute-0 ceph-mon[75050]: pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:40 compute-0 sshd-session[258807]: Connection closed by invalid user test 143.14.121.41 port 53688 [preauth]
Nov 29 07:40:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.664 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.666 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 podman[258769]: 2025-11-29 07:40:40.693212457 +0000 UTC m=+12.246366007 container remove 4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.697 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.697 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.698 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.715 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.715 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.716 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.716 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.717 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.717 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.717 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.718 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.718 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.753 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.753 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.754 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.754 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:40:40 compute-0 nova_compute[256729]: 2025-11-29 07:40:40.755 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:40:40 compute-0 systemd[1]: libpod-conmon-4e32fa5d349a8a10a63eb21a62c4e8600584455b30ac3544a408df5b1d33fe08.scope: Deactivated successfully.
Nov 29 07:40:41 compute-0 podman[258818]: 2025-11-29 07:40:40.937373168 +0000 UTC m=+0.047039701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:40:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:40:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264313220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.493 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.738s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
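[annotation] The paired "Running cmd (subprocess)" / "CMD ... returned: 0" DEBUG lines come from oslo.concurrency's processutils, which nova-compute uses to shell out to `ceph df` during the resource audit; the matching mon audit dispatches for client.openstack appear alongside. A minimal sketch of that call pattern, reusing the client id and conf path from the log; the JSON field names are the usual `ceph df --format=json` layout, stated here as an assumption:

    # Sketch of the subprocess pattern behind the DEBUG lines above.
    # processutils.execute returns (stdout, stderr) and raises on non-zero exit.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])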
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.754 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.757 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.758 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.759 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.847 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.848 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:40:41 compute-0 nova_compute[256729]: 2025-11-29 07:40:41.867 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:40:42 compute-0 podman[258818]: 2025-11-29 07:40:42.23620141 +0000 UTC m=+1.345867933 container create 864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:40:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:40:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2509937163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:42 compute-0 nova_compute[256729]: 2025-11-29 07:40:42.462 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:40:42 compute-0 nova_compute[256729]: 2025-11-29 07:40:42.469 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:40:42 compute-0 nova_compute[256729]: 2025-11-29 07:40:42.485 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:40:42 compute-0 nova_compute[256729]: 2025-11-29 07:40:42.487 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:40:42 compute-0 nova_compute[256729]: 2025-11-29 07:40:42.487 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
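[annotation] The inventory dict reported at 07:40:42.485 is what placement schedules against: usable capacity per resource class is computed as (total - reserved) * allocation_ratio. Plugging in the logged values as a check:

    # Placement capacity check using the inventory reported above.
    inv = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = int((v["total"] - v["reserved"]) * v["allocation_ratio"])
        print(rc, cap)  # VCPU 32, MEMORY_MB 7168, DISK_GB 53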
Nov 29 07:40:42 compute-0 systemd[1]: Started libpod-conmon-864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf.scope.
Nov 29 07:40:42 compute-0 ceph-mon[75050]: pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/184253275' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 07:40:42 compute-0 ceph-mon[75050]: from='client.14359 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 07:40:42 compute-0 ceph-mon[75050]: pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:42 compute-0 ceph-mon[75050]: pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e02e8f5c500d9aae6182500d5918d936fa837fe926f8aa99ba6accaa43ce68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e02e8f5c500d9aae6182500d5918d936fa837fe926f8aa99ba6accaa43ce68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e02e8f5c500d9aae6182500d5918d936fa837fe926f8aa99ba6accaa43ce68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e02e8f5c500d9aae6182500d5918d936fa837fe926f8aa99ba6accaa43ce68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
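[annotation] The four kernel lines above are informational: bind-mounting paths of an xfs filesystem into the ceph container triggers a reminder that, without the xfs bigtime feature, on-disk timestamps saturate at 0x7fffffff seconds since the epoch. Decoding that limit:

    # Decode the 0x7fffffff timestamp limit from the kernel messages above.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00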
Nov 29 07:40:42 compute-0 podman[258818]: 2025-11-29 07:40:42.783766309 +0000 UTC m=+1.893432802 container init 864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:40:42 compute-0 podman[258818]: 2025-11-29 07:40:42.800655727 +0000 UTC m=+1.910322220 container start 864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:40:42 compute-0 podman[258818]: 2025-11-29 07:40:42.951457028 +0000 UTC m=+2.061123531 container attach 864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]: {
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "osd_id": 2,
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "type": "bluestore"
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:     },
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "osd_id": 1,
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "type": "bluestore"
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:     },
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "osd_id": 0,
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:         "type": "bluestore"
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]:     }
Nov 29 07:40:43 compute-0 magical_ishizaka[258879]: }
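[annotation] The container output above (the magical_ishizaka lines) is the complete `ceph-volume raw list --format json` result: three bluestore OSDs (0, 1, 2) on LVM devices, all carrying the cluster fsid 14ff1f30-5059-58f1-9a23-69871bb275a1. A small sketch for turning that blob into an osd_id -> device map; the capture file path is a placeholder:

    # Parse a saved copy of the `raw list` JSON shown above into osd_id -> device.
    # /tmp/raw_list.json is a hypothetical capture of the container's stdout.
    import json

    with open("/tmp/raw_list.json") as fh:
        osds = json.load(fh)
    devices = {entry["osd_id"]: entry["device"] for entry in osds.values()}
    print(devices)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2', 1: ..., 0: ...}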
Nov 29 07:40:43 compute-0 systemd[1]: libpod-864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf.scope: Deactivated successfully.
Nov 29 07:40:43 compute-0 podman[258818]: 2025-11-29 07:40:43.789694675 +0000 UTC m=+2.899361168 container died 864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:40:43 compute-0 systemd[1]: libpod-864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf.scope: Consumed 1.007s CPU time.
Nov 29 07:40:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3264313220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:43 compute-0 ceph-mon[75050]: pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2509937163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:40:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:44 compute-0 sshd-session[258809]: Connection closed by authenticating user root 143.14.121.41 port 53690 [preauth]
Nov 29 07:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-36e02e8f5c500d9aae6182500d5918d936fa837fe926f8aa99ba6accaa43ce68-merged.mount: Deactivated successfully.
Nov 29 07:40:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:48 compute-0 sshd-session[258924]: Connection closed by authenticating user root 143.14.121.41 port 56336 [preauth]
Nov 29 07:40:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:51 compute-0 sshd-session[258926]: Connection closed by authenticating user root 143.14.121.41 port 56350 [preauth]
Nov 29 07:40:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:52 compute-0 ceph-mon[75050]: pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:54 compute-0 sshd-session[258939]: Connection closed by authenticating user root 143.14.121.41 port 55248 [preauth]
Nov 29 07:40:55 compute-0 podman[258818]: 2025-11-29 07:40:55.645393667 +0000 UTC m=+14.755060150 container remove 864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:40:55 compute-0 sudo[258701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:55 compute-0 systemd[1]: libpod-conmon-864e92846562970663467cfb5ef623fd7c99d736a4ecf964ce5503c4dff98ccf.scope: Deactivated successfully.
Nov 29 07:40:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:40:55 compute-0 podman[258941]: 2025-11-29 07:40:55.723094532 +0000 UTC m=+1.091804144 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:40:55 compute-0 podman[258928]: 2025-11-29 07:40:55.723253596 +0000 UTC m=+5.093549756 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:40:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:40:56 compute-0 ceph-mon[75050]: pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 ceph-mon[75050]: pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 ceph-mon[75050]: pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 ceph-mon[75050]: pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 ceph-mon[75050]: pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:40:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:40:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:40:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 803aacc8-c69a-4187-9054-6b473c1d3384 does not exist
Nov 29 07:40:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 14596cd0-43fe-467e-95fa-a75e5a2fe04f does not exist
Nov 29 07:40:56 compute-0 sudo[258969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:40:56 compute-0 sudo[258969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:56 compute-0 sudo[258969]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:56 compute-0 sudo[258994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:40:56 compute-0 sudo[258994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:40:56 compute-0 sudo[258994]: pam_unix(sudo:session): session closed for user root
Nov 29 07:40:56 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 29 07:40:56 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:56.792048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:40:56 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 29 07:40:56 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402056792195, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1266, "num_deletes": 251, "total_data_size": 1901651, "memory_usage": 1929200, "flush_reason": "Manual Compaction"}
Nov 29 07:40:56 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402057178518, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1873058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15352, "largest_seqno": 16617, "table_properties": {"data_size": 1866836, "index_size": 3425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13671, "raw_average_key_size": 20, "raw_value_size": 1854160, "raw_average_value_size": 2755, "num_data_blocks": 156, "num_entries": 673, "num_filter_entries": 673, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401853, "oldest_key_time": 1764401853, "file_creation_time": 1764402056, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 386556 microseconds, and 8711 cpu microseconds.
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.178613) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1873058 bytes OK
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.178640) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.340331) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.340408) EVENT_LOG_v1 {"time_micros": 1764402057340392, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.340451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1895730, prev total WAL file size 1923790, number of live WAL files 2.
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.341707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1829KB)], [35(8190KB)]
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402057341813, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10260007, "oldest_snapshot_seqno": -1}
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4248 keys, 8470257 bytes, temperature: kUnknown
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402057638102, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8470257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8439200, "index_size": 19363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 104768, "raw_average_key_size": 24, "raw_value_size": 8359548, "raw_average_value_size": 1967, "num_data_blocks": 818, "num_entries": 4248, "num_filter_entries": 4248, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:40:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:40:57 compute-0 ceph-mon[75050]: pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.638479) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8470257 bytes
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.703863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 34.6 rd, 28.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(10.0) write-amplify(4.5) OK, records in: 4762, records dropped: 514 output_compression: NoCompression
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.703919) EVENT_LOG_v1 {"time_micros": 1764402057703895, "job": 16, "event": "compaction_finished", "compaction_time_micros": 296503, "compaction_time_cpu_micros": 40119, "output_level": 6, "num_output_files": 1, "total_output_size": 8470257, "num_input_records": 4762, "num_output_records": 4248, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
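JOB 16's summary numbers cross-check cleanly: 4762 input records minus 514 dropped leaves exactly the 4248 keys in table #38, and the amplification factors follow from the logged file sizes (the KB figures are rounded, so the ratios land within rounding of the printed values):

    # Reproduce write-amplify(4.5) and read-write-amplify(10.0) for JOB 16.
    l0_bytes  = 1829 * 1024   # input file 37 (L0), KB-rounded
    l6_bytes  = 8190 * 1024   # input file 35 (L6), KB-rounded
    out_bytes = 8470257       # output file 38
    print(out_bytes / l0_bytes)                          # ~4.52 -> write-amplify(4.5)
    print((l0_bytes + l6_bytes + out_bytes) / l0_bytes)  # ~10.0 -> read-write-amplify(10.0)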
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402057705421, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402057707485, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.341471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.707646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.707653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.707655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.707656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:57 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:40:57.707658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:40:57 compute-0 sshd-session[258967]: Connection closed by authenticating user root 143.14.121.41 port 55250 [preauth]
Nov 29 07:40:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:40:58 compute-0 podman[259019]: 2025-11-29 07:40:58.802267071 +0000 UTC m=+0.163476460 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:40:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:40:59.756 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:40:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:40:59.757 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:40:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:40:59.757 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:40:59 compute-0 ceph-mon[75050]: pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:01 compute-0 ceph-mon[75050]: pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:02 compute-0 sshd-session[259045]: Connection closed by authenticating user root 143.14.121.41 port 55264 [preauth]
Nov 29 07:41:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:04 compute-0 ceph-mon[75050]: pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:04 compute-0 sshd-session[259047]: Connection closed by authenticating user root 143.14.121.41 port 55630 [preauth]
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:41:05
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta']
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:05 compute-0 ceph-mon[75050]: pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:41:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:41:07 compute-0 ceph-mon[75050]: pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:08 compute-0 sshd-session[259049]: Connection closed by authenticating user root 143.14.121.41 port 55642 [preauth]
Nov 29 07:41:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:10 compute-0 ceph-mon[75050]: pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:11 compute-0 ceph-mon[75050]: pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:12 compute-0 ceph-mon[75050]: pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:13 compute-0 sshd-session[259051]: Connection closed by authenticating user root 143.14.121.41 port 55658 [preauth]
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:41:14 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
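Each pg_autoscaler line above follows the same arithmetic: pg target = capacity ratio x bias x a cluster-wide PG budget of 300. The 300 is an inference from the numbers themselves (e.g. mon_target_pg_per_osd=100 across 3 OSDs), not a value read from this log:

    # Reproduce the autoscaler's pg targets from the logged ratios and biases.
    PG_BUDGET = 300  # assumed: e.g. mon_target_pg_per_osd=100 * 3 OSDs
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # Matches the logged targets (up to float rounding):
    # .mgr               -> 0.0021557249951162337 (quantized to 1)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (quantized to 16)
    # default.rgw.log    -> 0.0006486252197694863 (quantized to 32)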
Nov 29 07:41:15 compute-0 ceph-mon[75050]: pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:16 compute-0 sshd-session[259053]: Connection closed by authenticating user root 143.14.121.41 port 39466 [preauth]
Nov 29 07:41:16 compute-0 ceph-mon[75050]: pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:41:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 3649 writes, 16K keys, 3649 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 3649 writes, 3649 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 863 writes, 4228 keys, 863 commit groups, 1.0 writes per commit group, ingest: 5.93 MB, 0.01 MB/s
                                           Interval WAL: 864 writes, 864 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.3      3.82              0.09         8    0.478       0      0       0.0       0.0
                                             L6      1/0    8.08 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.6     25.2     20.5      2.54              0.20         7    0.364     29K   3789       0.0       0.0
                                            Sum      1/0    8.08 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     10.1     11.4      6.37              0.29        15    0.424     29K   3789       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8      6.2      6.2      4.68              0.11         6    0.779     14K   1999       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0     25.2     20.5      2.54              0.20         7    0.364     29K   3789       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.2      3.82              0.09         7    0.546       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.04 MB/s write, 0.06 GB read, 0.04 MB/s read, 6.4 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 4.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdb5ecb1f0#2 capacity: 308.00 MB usage: 3.86 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00013 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(246,3.60 MB,1.16787%) FilterBlock(16,90.86 KB,0.0288084%) IndexBlock(16,176.95 KB,0.0561058%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
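Three details in the dump cross-check against the rest of this log: the 308.00 MB block-cache capacity is exactly the monitor's kv_alloc of 322961408 bytes from the recurring _set_new_cache_sizes lines; the block-cache "occupancy" of 18446744073709551615 is UINT64_MAX, which reads as a sentinel or wrapped unsigned counter (table_size is 0) rather than a real entry count; and the 0.01 MB/s interval ingest is just the 5.93 MB of interval writes spread over the 600 s interval:

    # Cross-checks on the stats dump above.
    print(322961408 / 2**20)                  # 308.0 -> block-cache capacity in MB
    print(18446744073709551615 == 2**64 - 1)  # True: occupancy is UINT64_MAX
    print(5.93 / 600)                         # ~0.0099 -> "0.01 MB/s" interval ingest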
Nov 29 07:41:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:19 compute-0 sshd-session[259055]: Invalid user prueba from 143.14.121.41 port 39472
Nov 29 07:41:20 compute-0 sshd-session[259055]: Connection closed by invalid user prueba 143.14.121.41 port 39472 [preauth]
Nov 29 07:41:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:21 compute-0 ceph-mon[75050]: pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:22 compute-0 sshd-session[259057]: Invalid user user1 from 143.14.121.41 port 39484
Nov 29 07:41:23 compute-0 sshd-session[259057]: Connection closed by invalid user user1 143.14.121.41 port 39484 [preauth]
Nov 29 07:41:23 compute-0 ceph-mon[75050]: pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:23 compute-0 ceph-mon[75050]: pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:26 compute-0 sshd-session[259059]: Invalid user support from 143.14.121.41 port 35148
Nov 29 07:41:26 compute-0 podman[259063]: 2025-11-29 07:41:26.622002899 +0000 UTC m=+0.084312178 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:41:26 compute-0 podman[259062]: 2025-11-29 07:41:26.623790286 +0000 UTC m=+0.093272872 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:41:27 compute-0 sshd-session[259059]: Connection closed by invalid user support 143.14.121.41 port 35148 [preauth]
Nov 29 07:41:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:27 compute-0 ceph-mon[75050]: pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:41:29.727 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:41:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:41:29.728 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:41:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:41:29.729 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:41:29 compute-0 podman[259102]: 2025-11-29 07:41:29.739998175 +0000 UTC m=+0.109290795 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 07:41:30 compute-0 ceph-mon[75050]: pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:30 compute-0 ceph-mon[75050]: pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:31 compute-0 sshd-session[259100]: Connection closed by authenticating user root 143.14.121.41 port 35154 [preauth]
Nov 29 07:41:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:32 compute-0 ceph-mon[75050]: pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:33 compute-0 sshd-session[259128]: Connection closed by authenticating user root 143.14.121.41 port 49468 [preauth]
Nov 29 07:41:34 compute-0 ceph-mon[75050]: pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:35 compute-0 ceph-mon[75050]: pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:41:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:41:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:36 compute-0 sshd-session[259130]: Connection closed by authenticating user root 143.14.121.41 port 49474 [preauth]
Nov 29 07:41:36 compute-0 ceph-mon[75050]: pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:38 compute-0 ceph-mon[75050]: pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:40 compute-0 sshd-session[259132]: Connection closed by authenticating user root 143.14.121.41 port 49484 [preauth]
Nov 29 07:41:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:41 compute-0 ceph-mon[75050]: pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.489 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.489 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.490 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.490 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.504 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.504 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.504 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.505 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.505 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.506 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.506 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.506 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.506 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.531 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.532 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.532 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.532 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:41:42 compute-0 nova_compute[256729]: 2025-11-29 07:41:42.533 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:41:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:41:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116328368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.108 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
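As the processutils lines show, nova's resource audit shells out to ceph df here. A sketch of the derivation, assuming the common `ceph df` JSON schema (stats.total_avail_bytes; the field names are an assumption for this release): 64411926528 available bytes, the same figure seen in the pg_autoscaler lines, divided by 2^30 gives 59.98828125, exactly the free_disk in the resource view below.

    # Sketch: derive free_disk the way the logged `ceph df` call implies.
    # JSON field names are assumed from the common `ceph df` schema.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    avail = json.loads(out)["stats"]["total_avail_bytes"]  # 64411926528 here
    print(f"free_disk={avail / 2**30}GB")  # 59.98828125 -> resource view line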
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.266 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.267 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5193MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.267 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.267 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.339 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.340 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.356 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:41:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:41:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335617371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.771 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.779 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.796 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.799 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:41:43 compute-0 nova_compute[256729]: 2025-11-29 07:41:43.800 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
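The inventory nova reconfirmed above maps to schedulable capacity through Placement's standard rule, (total - reserved) x allocation_ratio; the figures below are computed from the logged inventory, not taken from another log line:

    # Schedulable capacity implied by the inventory reported above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~53.1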
Nov 29 07:41:43 compute-0 sshd-session[259134]: Connection closed by authenticating user root 143.14.121.41 port 49498 [preauth]
Nov 29 07:41:44 compute-0 ceph-mon[75050]: pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4116328368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/335617371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:41:45 compute-0 ceph-mon[75050]: pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:47 compute-0 sshd-session[259180]: Connection closed by authenticating user root 143.14.121.41 port 54880 [preauth]
Nov 29 07:41:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:49 compute-0 ceph-mon[75050]: pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:50 compute-0 ceph-mon[75050]: pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:51 compute-0 sshd-session[259182]: Invalid user upload from 143.14.121.41 port 54888
Nov 29 07:41:51 compute-0 sshd-session[259182]: Connection closed by invalid user upload 143.14.121.41 port 54888 [preauth]
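Interleaved with all of this, sshd has been logging a steady credential-spraying run from 143.14.121.41: repeated root attempts plus guessed accounts (prueba, user1, support, upload), each disconnecting at preauth. A minimal sketch for tallying the attempts from an exported journal (the journal.log path is hypothetical):

    # Tally per-username preauth attempts from 143.14.121.41 in a saved journal.
    import re
    from collections import Counter

    pat = re.compile(r"user (\S+) (?:from )?143\.14\.121\.41")
    hits = Counter()
    with open("journal.log") as f:   # hypothetical export path
        for line in f:
            m = pat.search(line)
            if m:
                hits[m.group(1)] += 1
    print(hits.most_common())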
Nov 29 07:41:51 compute-0 ceph-mon[75050]: pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:53 compute-0 ceph-mon[75050]: pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:54 compute-0 sshd-session[259184]: Connection closed by authenticating user root 143.14.121.41 port 54904 [preauth]
Nov 29 07:41:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:55 compute-0 ceph-mon[75050]: pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:56 compute-0 sudo[259188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:56 compute-0 sudo[259188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:56 compute-0 sudo[259188]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:56 compute-0 sudo[259223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:41:56 compute-0 sudo[259223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:56 compute-0 sudo[259223]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:56 compute-0 podman[259213]: 2025-11-29 07:41:56.787067886 +0000 UTC m=+0.056003794 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:41:56 compute-0 podman[259212]: 2025-11-29 07:41:56.798703397 +0000 UTC m=+0.081509475 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:41:56 compute-0 sudo[259273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:41:56 compute-0 sudo[259273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:56 compute-0 sudo[259273]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:56 compute-0 sudo[259298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:41:56 compute-0 sudo[259298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:41:57 compute-0 ceph-mon[75050]: pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:57 compute-0 sudo[259298]: pam_unix(sudo:session): session closed for user root
Nov 29 07:41:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:41:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:41:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:41:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:41:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:41:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:41:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
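The handle_command/audit pairs above are the monitor's command path: the mgr (cephadm module) submits JSON commands such as {"prefix": "config generate-minimal-conf"}, and the audit channel records each dispatch. The same command can be issued from python-rados; a sketch, assuming a readable ceph.conf and a client.admin keyring on the host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same JSON shape the mon lines above show being dispatched.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())
    cluster.shutdown()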
Nov 29 07:41:59 compute-0 sshd-session[259186]: Connection closed by authenticating user root 143.14.121.41 port 60620 [preauth]
Nov 29 07:41:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:41:59.758 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:41:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:41:59.759 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:41:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:41:59.759 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
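The three DEBUG lines above are one pass of oslo.concurrency's lock lifecycle: acquire, a hold of ~0s, release, all around ProcessMonitor._check_child_processes. In application code the pattern is a decorator or context manager; a sketch with an illustrative function body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # The real ProcessMonitor inspects and respawns child processes here
        # (body elided). Every call produces an acquire/release pair like the
        # DEBUG lines above.
        pass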
Nov 29 07:42:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:00 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:00 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:42:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:42:00 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 36f3bce8-925f-486e-8964-5b731b415b6e does not exist
Nov 29 07:42:00 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b9f1bd80-a081-4627-819f-3c8718b9eba7 does not exist
Nov 29 07:42:00 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev eae617b8-8ea7-4402-8e2e-8e1fd52b550b does not exist
Nov 29 07:42:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:42:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:42:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:42:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:42:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:42:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:00 compute-0 sudo[259356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:00 compute-0 sudo[259356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:00 compute-0 sudo[259356]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:00 compute-0 sudo[259387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:00 compute-0 sudo[259387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:00 compute-0 sudo[259387]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:00 compute-0 podman[259380]: 2025-11-29 07:42:00.693725784 +0000 UTC m=+0.078516658 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:42:00 compute-0 sudo[259432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:00 compute-0 sudo[259432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:00 compute-0 sudo[259432]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:00 compute-0 sudo[259458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:42:00 compute-0 sudo[259458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
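This sudo command is the cephadm OSD-creation step: the mgr runs the copied cephadm binary, which wraps "ceph-volume lvm batch" over the three prepared LVs inside a one-shot container; the trailing "--config-json -" suggests the bootstrap config and keyring arrive on stdin rather than on the command line. A subprocess sketch of that invocation pattern (fsid, script path, image, and devices are taken verbatim from the log; the payload keys are illustrative and the secrets are elided):

    import json
    import subprocess

    fsid = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    cephadm = f"/var/lib/ceph/{fsid}/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"
    payload = json.dumps({"config": "...", "keyring": "..."})  # secrets elided

    subprocess.run(
        ["sudo", "/bin/python3", cephadm,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
         "--yes", "--no-systemd"],
        input=payload.encode(), check=True)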
Nov 29 07:42:01 compute-0 podman[259524]: 2025-11-29 07:42:01.196368208 +0000 UTC m=+0.029016323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:01 compute-0 podman[259524]: 2025-11-29 07:42:01.948741749 +0000 UTC m=+0.781389864 container create 825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:42:01 compute-0 ceph-mon[75050]: pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:01 compute-0 ceph-mon[75050]: pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:42:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:42:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:42:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:42:01 compute-0 systemd[1]: Started libpod-conmon-825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628.scope.
Nov 29 07:42:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:02 compute-0 podman[259524]: 2025-11-29 07:42:02.163339253 +0000 UTC m=+0.995987398 container init 825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:42:02 compute-0 podman[259524]: 2025-11-29 07:42:02.174730019 +0000 UTC m=+1.007378164 container start 825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:42:02 compute-0 cool_cohen[259540]: 167 167
Nov 29 07:42:02 compute-0 systemd[1]: libpod-825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628.scope: Deactivated successfully.
Nov 29 07:42:02 compute-0 podman[259524]: 2025-11-29 07:42:02.285791539 +0000 UTC m=+1.118439754 container attach 825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:42:02 compute-0 podman[259524]: 2025-11-29 07:42:02.286444726 +0000 UTC m=+1.119092881 container died 825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:42:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:03 compute-0 ceph-mon[75050]: pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cddca33ed456a3216262c21f3bda124bc9b21e6b5d4def00f020543caa158755-merged.mount: Deactivated successfully.
Nov 29 07:42:03 compute-0 podman[259524]: 2025-11-29 07:42:03.844147352 +0000 UTC m=+2.676795477 container remove 825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_cohen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:42:03 compute-0 systemd[1]: libpod-conmon-825c05d2f05d57e7034603d83164f5a6994046a869677081b22d9e53ccdd2628.scope: Deactivated successfully.
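The cool_cohen container above runs the full one-shot lifecycle in under three seconds: image pull, create, init, start, attach, died, remove, bracketed by its libpod/conmon systemd scopes. The same transitions can be followed live from the event stream; a sketch (assumes "podman events --format json" emits one JSON object per line; key casing can vary between podman versions):

    import json
    import subprocess

    proc = subprocess.Popen(["podman", "events", "--format", "json"],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            # Statuses seen above: create, init, start, attach, died, remove.
            print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))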
Nov 29 07:42:04 compute-0 podman[259563]: 2025-11-29 07:42:03.987100149 +0000 UTC m=+0.021683634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:04 compute-0 sshd-session[259354]: Connection closed by authenticating user root 143.14.121.41 port 60622 [preauth]
Nov 29 07:42:04 compute-0 podman[259563]: 2025-11-29 07:42:04.220802 +0000 UTC m=+0.255385465 container create 768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:42:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:04 compute-0 systemd[1]: Started libpod-conmon-768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376.scope.
Nov 29 07:42:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021358ae621622aa4c586203ee6d06db363bbc0ce58fdae4d5a5222f978f8fdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021358ae621622aa4c586203ee6d06db363bbc0ce58fdae4d5a5222f978f8fdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021358ae621622aa4c586203ee6d06db363bbc0ce58fdae4d5a5222f978f8fdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021358ae621622aa4c586203ee6d06db363bbc0ce58fdae4d5a5222f978f8fdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021358ae621622aa4c586203ee6d06db363bbc0ce58fdae4d5a5222f978f8fdc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
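The recurring "supports timestamps until 2038 (0x7fffffff)" kernel lines mark xfs filesystems formatted without the bigtime feature: inode timestamps stop at the largest 32-bit signed time_t. Converting the constant shows the exact cutoff:

    # 0x7fffffff is the largest 32-bit signed time_t -- the limit named in
    # the xfs remount messages above.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00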
Nov 29 07:42:04 compute-0 podman[259563]: 2025-11-29 07:42:04.464638493 +0000 UTC m=+0.499221998 container init 768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:42:04 compute-0 podman[259563]: 2025-11-29 07:42:04.472006634 +0000 UTC m=+0.506590099 container start 768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:42:04 compute-0 podman[259563]: 2025-11-29 07:42:04.500198175 +0000 UTC m=+0.534781660 container attach 768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:42:05 compute-0 reverent_northcutt[259580]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:42:05 compute-0 reverent_northcutt[259580]: --> relative data size: 1.0
Nov 29 07:42:05 compute-0 reverent_northcutt[259580]: --> All data devices are unavailable
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:42:05
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'images', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
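The balancer pass above ran in upmap mode with a 5% max-misplaced budget across the listed pools and prepared 0 of 10 possible changes, i.e. the PG distribution already needed no optimization. The module's state can be read back from the mgr; a sketch (assumes the ceph CLI on PATH and JSON output support):

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "balancer", "status", "-f", "json"])
    status = json.loads(out)
    print(status.get("mode"), status.get("active"), status.get("optimize_result"))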
Nov 29 07:42:05 compute-0 systemd[1]: libpod-768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376.scope: Deactivated successfully.
Nov 29 07:42:05 compute-0 podman[259563]: 2025-11-29 07:42:05.546932698 +0000 UTC m=+1.581516173 container died 768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:42:05 compute-0 systemd[1]: libpod-768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376.scope: Consumed 1.032s CPU time.
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:42:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:42:08 compute-0 sshd-session[259577]: Connection closed by authenticating user root 143.14.121.41 port 37268 [preauth]
Nov 29 07:42:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:42:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3439160095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:42:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:42:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3439160095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:42:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:09 compute-0 ceph-mon[75050]: pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:11 compute-0 sshd-session[259621]: Connection closed by authenticating user root 143.14.121.41 port 37278 [preauth]
Nov 29 07:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-021358ae621622aa4c586203ee6d06db363bbc0ce58fdae4d5a5222f978f8fdc-merged.mount: Deactivated successfully.
Nov 29 07:42:11 compute-0 ceph-mon[75050]: pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:11 compute-0 ceph-mon[75050]: pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3439160095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:42:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3439160095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:42:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:12 compute-0 podman[259563]: 2025-11-29 07:42:12.688904624 +0000 UTC m=+8.723488099 container remove 768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:42:12 compute-0 sudo[259458]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:12 compute-0 systemd[1]: libpod-conmon-768e57df574fdb690a9d81c1c7c58c0c03c98de26314f73a8be4e01d6eb9e376.scope: Deactivated successfully.
Nov 29 07:42:12 compute-0 sudo[259626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:12 compute-0 sudo[259626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:12 compute-0 sudo[259626]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:12 compute-0 sudo[259651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:12 compute-0 sudo[259651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:12 compute-0 sudo[259651]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:12 compute-0 sudo[259676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:12 compute-0 sudo[259676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:12 compute-0 sudo[259676]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:13 compute-0 ceph-mon[75050]: pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:13 compute-0 ceph-mon[75050]: pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:13 compute-0 sudo[259701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:42:13 compute-0 sudo[259701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:13 compute-0 podman[259766]: 2025-11-29 07:42:13.365531531 +0000 UTC m=+0.023825189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:13 compute-0 podman[259766]: 2025-11-29 07:42:13.705547828 +0000 UTC m=+0.363841386 container create 573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:42:13 compute-0 systemd[1]: Started libpod-conmon-573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc.scope.
Nov 29 07:42:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:14 compute-0 podman[259766]: 2025-11-29 07:42:14.151746979 +0000 UTC m=+0.810040557 container init 573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:42:14 compute-0 podman[259766]: 2025-11-29 07:42:14.161216875 +0000 UTC m=+0.819510463 container start 573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:42:14 compute-0 boring_neumann[259782]: 167 167
Nov 29 07:42:14 compute-0 systemd[1]: libpod-573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc.scope: Deactivated successfully.
Nov 29 07:42:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:14 compute-0 sshd-session[259624]: Invalid user admin from 143.14.121.41 port 37292
Nov 29 07:42:15 compute-0 sshd-session[259624]: Connection closed by invalid user admin 143.14.121.41 port 37292 [preauth]
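The sshd-session lines from 143.14.121.41 scattered through this window are a slow brute-force probe: repeated pre-auth disconnects as root, then as the invalid user admin, on fresh source ports every few seconds. A sketch that tallies such attempts per source address (the regex follows the sshd wording in this journal):

    import re
    from collections import Counter

    PREAUTH = re.compile(
        r"Connection closed by (?:authenticating|invalid) user "
        r"\S+ (\d+\.\d+\.\d+\.\d+) port \d+ \[preauth\]")

    def count_sources(lines):
        hits = Counter()
        for line in lines:
            m = PREAUTH.search(line)
            if m:
                hits[m.group(1)] += 1
        return hits  # here: Counter({'143.14.121.41': 7})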
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:42:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
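Each effective_target_ratio/Pool pair above is one pg_autoscaler evaluation, and the arithmetic is recoverable from the logged numbers: pg target = (fraction of raw space used) x bias x PG budget, where the budget here works out to 300 and the result is then quantized to the pool's permitted pg_num (hence "quantized to 1" or "to 32"). A check that reproduces two of the logged values (the 3-OSD, default-100-PGs-per-OSD budget is inferred from this cluster, not stated in the log):

    # pg_autoscaler arithmetic from the lines above.
    osds, target_per_osd = 3, 100   # inferred: 3 OSDs, default mon_target_pg_per_osd
    budget = osds * target_per_osd  # 300 PGs to spread across pools

    def pg_target(used_fraction: float, bias: float) -> float:
        return used_fraction * bias * budget

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557  ('.mgr', quantized to 1)
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047 ('cephfs.cephfs.meta', -> 16)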
Nov 29 07:42:15 compute-0 podman[259766]: 2025-11-29 07:42:15.242011462 +0000 UTC m=+1.900305030 container attach 573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:42:15 compute-0 podman[259766]: 2025-11-29 07:42:15.243867909 +0000 UTC m=+1.902161507 container died 573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:42:15 compute-0 ceph-mon[75050]: pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9626666203dea0bf45fb920c7d48de9dcf22d7f19768c2c835e34e4721a7c3cf-merged.mount: Deactivated successfully.
Nov 29 07:42:18 compute-0 sshd-session[259798]: Connection closed by authenticating user root 143.14.121.41 port 48510 [preauth]
Nov 29 07:42:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:18 compute-0 ceph-mon[75050]: pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:20 compute-0 podman[259766]: 2025-11-29 07:42:20.745729995 +0000 UTC m=+7.404023593 container remove 573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:42:20 compute-0 systemd[1]: libpod-conmon-573daeaf993a90e44031f1608fcdb628ed00519816465ed8ff0ce4a33f7c49cc.scope: Deactivated successfully.
Nov 29 07:42:21 compute-0 podman[259810]: 2025-11-29 07:42:20.975540045 +0000 UTC m=+0.034308391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:21 compute-0 ceph-mon[75050]: pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:21 compute-0 podman[259810]: 2025-11-29 07:42:21.430815972 +0000 UTC m=+0.489584258 container create 98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:42:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:22 compute-0 systemd[1]: Started libpod-conmon-98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6.scope.
Nov 29 07:42:22 compute-0 ceph-mon[75050]: pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b93612500a33daf6b24bf9f51d88744379116326759bd0079dad3980be52bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b93612500a33daf6b24bf9f51d88744379116326759bd0079dad3980be52bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b93612500a33daf6b24bf9f51d88744379116326759bd0079dad3980be52bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b93612500a33daf6b24bf9f51d88744379116326759bd0079dad3980be52bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:22 compute-0 sshd-session[259801]: Connection closed by authenticating user root 143.14.121.41 port 48512 [preauth]
Nov 29 07:42:22 compute-0 podman[259810]: 2025-11-29 07:42:22.764102427 +0000 UTC m=+1.822870733 container init 98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:42:22 compute-0 podman[259810]: 2025-11-29 07:42:22.77419418 +0000 UTC m=+1.832962446 container start 98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:42:22 compute-0 podman[259810]: 2025-11-29 07:42:22.916118029 +0000 UTC m=+1.974886325 container attach 98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:42:23 compute-0 practical_villani[259827]: {
Nov 29 07:42:23 compute-0 practical_villani[259827]:     "0": [
Nov 29 07:42:23 compute-0 practical_villani[259827]:         {
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "devices": [
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "/dev/loop3"
Nov 29 07:42:23 compute-0 practical_villani[259827]:             ],
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_name": "ceph_lv0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_size": "21470642176",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "name": "ceph_lv0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "tags": {
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cluster_name": "ceph",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.crush_device_class": "",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.encrypted": "0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osd_id": "0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.type": "block",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.vdo": "0"
Nov 29 07:42:23 compute-0 practical_villani[259827]:             },
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "type": "block",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "vg_name": "ceph_vg0"
Nov 29 07:42:23 compute-0 practical_villani[259827]:         }
Nov 29 07:42:23 compute-0 practical_villani[259827]:     ],
Nov 29 07:42:23 compute-0 practical_villani[259827]:     "1": [
Nov 29 07:42:23 compute-0 practical_villani[259827]:         {
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "devices": [
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "/dev/loop4"
Nov 29 07:42:23 compute-0 practical_villani[259827]:             ],
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_name": "ceph_lv1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_size": "21470642176",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "name": "ceph_lv1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "tags": {
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cluster_name": "ceph",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.crush_device_class": "",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.encrypted": "0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osd_id": "1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.type": "block",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.vdo": "0"
Nov 29 07:42:23 compute-0 practical_villani[259827]:             },
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "type": "block",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "vg_name": "ceph_vg1"
Nov 29 07:42:23 compute-0 practical_villani[259827]:         }
Nov 29 07:42:23 compute-0 practical_villani[259827]:     ],
Nov 29 07:42:23 compute-0 practical_villani[259827]:     "2": [
Nov 29 07:42:23 compute-0 practical_villani[259827]:         {
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "devices": [
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "/dev/loop5"
Nov 29 07:42:23 compute-0 practical_villani[259827]:             ],
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_name": "ceph_lv2",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_size": "21470642176",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "name": "ceph_lv2",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "tags": {
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.cluster_name": "ceph",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.crush_device_class": "",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.encrypted": "0",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osd_id": "2",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.type": "block",
Nov 29 07:42:23 compute-0 practical_villani[259827]:                 "ceph.vdo": "0"
Nov 29 07:42:23 compute-0 practical_villani[259827]:             },
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "type": "block",
Nov 29 07:42:23 compute-0 practical_villani[259827]:             "vg_name": "ceph_vg2"
Nov 29 07:42:23 compute-0 practical_villani[259827]:         }
Nov 29 07:42:23 compute-0 practical_villani[259827]:     ]
Nov 29 07:42:23 compute-0 practical_villani[259827]: }
Nov 29 07:42:23 compute-0 systemd[1]: libpod-98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6.scope: Deactivated successfully.
Nov 29 07:42:23 compute-0 conmon[259827]: conmon 98e76b9800aa96c9fd19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6.scope/container/memory.events
Nov 29 07:42:23 compute-0 podman[259810]: 2025-11-29 07:42:23.643218125 +0000 UTC m=+2.701986461 container died 98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:42:23 compute-0 ceph-mon[75050]: pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-85b93612500a33daf6b24bf9f51d88744379116326759bd0079dad3980be52bc-merged.mount: Deactivated successfully.
Nov 29 07:42:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:24 compute-0 podman[259810]: 2025-11-29 07:42:24.746092434 +0000 UTC m=+3.804860720 container remove 98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:42:24 compute-0 sudo[259701]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-0 systemd[1]: libpod-conmon-98e76b9800aa96c9fd1954636130a7132bc48561f81fc6b515c4b053560dbff6.scope: Deactivated successfully.
Nov 29 07:42:24 compute-0 sudo[259850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:24 compute-0 sudo[259850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-0 sudo[259850]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:24 compute-0 sudo[259875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:42:24 compute-0 sudo[259875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:24 compute-0 sudo[259875]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:25 compute-0 sudo[259900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:25 compute-0 sudo[259900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:25 compute-0 sudo[259900]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:25 compute-0 sudo[259925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:42:25 compute-0 sudo[259925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:25 compute-0 podman[259989]: 2025-11-29 07:42:25.421689735 +0000 UTC m=+0.024044595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:25 compute-0 sshd-session[259832]: Connection closed by authenticating user root 143.14.121.41 port 37670 [preauth]
Nov 29 07:42:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:27 compute-0 ceph-mon[75050]: pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:27 compute-0 podman[259989]: 2025-11-29 07:42:27.229187836 +0000 UTC m=+1.831542736 container create 7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rosalind, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:42:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:29 compute-0 systemd[1]: Started libpod-conmon-7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc.scope.
Nov 29 07:42:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:29 compute-0 sshd-session[260003]: Connection closed by authenticating user root 143.14.121.41 port 37680 [preauth]
Nov 29 07:42:29 compute-0 podman[259989]: 2025-11-29 07:42:29.791351848 +0000 UTC m=+4.393706758 container init 7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rosalind, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:42:29 compute-0 podman[259989]: 2025-11-29 07:42:29.801032539 +0000 UTC m=+4.403387409 container start 7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rosalind, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:42:29 compute-0 flamboyant_rosalind[260030]: 167 167
Nov 29 07:42:29 compute-0 systemd[1]: libpod-7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc.scope: Deactivated successfully.
Nov 29 07:42:29 compute-0 ceph-mon[75050]: pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:29 compute-0 ceph-mon[75050]: pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:29 compute-0 podman[259989]: 2025-11-29 07:42:29.828518702 +0000 UTC m=+4.430873562 container attach 7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rosalind, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:42:29 compute-0 podman[259989]: 2025-11-29 07:42:29.829013995 +0000 UTC m=+4.431368855 container died 7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:42:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-51cb6a8780acbbe2df1cba8641b627564bf747f71215627d08ecb48f8bf1aa26-merged.mount: Deactivated successfully.
Nov 29 07:42:31 compute-0 ceph-mon[75050]: pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:31 compute-0 podman[259989]: 2025-11-29 07:42:31.91387326 +0000 UTC m=+6.516228120 container remove 7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:42:31 compute-0 podman[260006]: 2025-11-29 07:42:31.926293462 +0000 UTC m=+4.640993012 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:42:31 compute-0 podman[260005]: 2025-11-29 07:42:31.929767893 +0000 UTC m=+4.644997476 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:42:31 compute-0 systemd[1]: libpod-conmon-7e92695e101b6f22aedff5aedd6ca5d44a7a4c4b7f5a79b6abf9b5cb7ee23fdc.scope: Deactivated successfully.
Nov 29 07:42:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:32 compute-0 podman[260063]: 2025-11-29 07:42:32.041596643 +0000 UTC m=+1.219877686 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:42:32 compute-0 podman[260094]: 2025-11-29 07:42:32.122275584 +0000 UTC m=+0.086910675 container create da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:42:32 compute-0 podman[260094]: 2025-11-29 07:42:32.058584323 +0000 UTC m=+0.023219434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:42:32 compute-0 systemd[1]: Started libpod-conmon-da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b.scope.
Nov 29 07:42:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3e54f758fd9d7b3851bca28a4f8fdb1aa7837a0bc176508a25c204e3a6983e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3e54f758fd9d7b3851bca28a4f8fdb1aa7837a0bc176508a25c204e3a6983e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3e54f758fd9d7b3851bca28a4f8fdb1aa7837a0bc176508a25c204e3a6983e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3e54f758fd9d7b3851bca28a4f8fdb1aa7837a0bc176508a25c204e3a6983e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:42:32 compute-0 podman[260094]: 2025-11-29 07:42:32.353707796 +0000 UTC m=+0.318342957 container init da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:42:32 compute-0 podman[260094]: 2025-11-29 07:42:32.36273374 +0000 UTC m=+0.327368831 container start da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:42:32 compute-0 podman[260094]: 2025-11-29 07:42:32.395460549 +0000 UTC m=+0.360095650 container attach da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 07:42:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:32 compute-0 ceph-mon[75050]: pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:33 compute-0 silly_mayer[260114]: {
Nov 29 07:42:33 compute-0 silly_mayer[260114]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "osd_id": 2,
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "type": "bluestore"
Nov 29 07:42:33 compute-0 silly_mayer[260114]:     },
Nov 29 07:42:33 compute-0 silly_mayer[260114]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "osd_id": 1,
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "type": "bluestore"
Nov 29 07:42:33 compute-0 silly_mayer[260114]:     },
Nov 29 07:42:33 compute-0 silly_mayer[260114]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "osd_id": 0,
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:42:33 compute-0 silly_mayer[260114]:         "type": "bluestore"
Nov 29 07:42:33 compute-0 silly_mayer[260114]:     }
Nov 29 07:42:33 compute-0 silly_mayer[260114]: }
Nov 29 07:42:33 compute-0 systemd[1]: libpod-da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b.scope: Deactivated successfully.
Nov 29 07:42:33 compute-0 podman[260094]: 2025-11-29 07:42:33.371374146 +0000 UTC m=+1.336009237 container died da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 07:42:33 compute-0 systemd[1]: libpod-da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b.scope: Consumed 1.015s CPU time.
Nov 29 07:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f3e54f758fd9d7b3851bca28a4f8fdb1aa7837a0bc176508a25c204e3a6983e-merged.mount: Deactivated successfully.
Nov 29 07:42:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:34 compute-0 sshd-session[260061]: Connection closed by authenticating user root 143.14.121.41 port 37690 [preauth]
Nov 29 07:42:34 compute-0 podman[260094]: 2025-11-29 07:42:34.831813989 +0000 UTC m=+2.796449100 container remove da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:42:34 compute-0 systemd[1]: libpod-conmon-da1db356e532ed777c89ef05cfd535d7a1f0a94509d73da715935f752f94f87b.scope: Deactivated successfully.
Nov 29 07:42:34 compute-0 sudo[259925]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:42:35 compute-0 ceph-mon[75050]: pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:42:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:42:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b4bd7e4f-3a57-401a-8676-ee897132b371 does not exist
Nov 29 07:42:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c19b65af-a95f-4646-81ad-8b06a29b0df4 does not exist
Nov 29 07:42:35 compute-0 sudo[260160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:42:35 compute-0 sudo[260160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:35 compute-0 sudo[260160]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:36 compute-0 sudo[260185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:42:36 compute-0 sudo[260185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:42:36 compute-0 sudo[260185]: pam_unix(sudo:session): session closed for user root
Nov 29 07:42:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:42:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:42:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:38 compute-0 sshd-session[260158]: Connection closed by authenticating user root 143.14.121.41 port 60438 [preauth]
Nov 29 07:42:38 compute-0 ceph-mon[75050]: pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:40 compute-0 sshd-session[260210]: Invalid user peertube from 143.14.121.41 port 60446
Nov 29 07:42:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:40 compute-0 sshd-session[260210]: Connection closed by invalid user peertube 143.14.121.41 port 60446 [preauth]
Nov 29 07:42:40 compute-0 ceph-mon[75050]: pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:41 compute-0 ceph-mon[75050]: pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:43 compute-0 nova_compute[256729]: 2025-11-29 07:42:43.454 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:43 compute-0 nova_compute[256729]: 2025-11-29 07:42:43.456 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:43 compute-0 ceph-mon[75050]: pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:44 compute-0 sshd-session[260212]: Invalid user osboxes from 143.14.121.41 port 60454
Nov 29 07:42:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:44 compute-0 sshd-session[260212]: Connection closed by invalid user osboxes 143.14.121.41 port 60454 [preauth]
Nov 29 07:42:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:46 compute-0 sshd-session[260214]: Connection closed by authenticating user root 143.14.121.41 port 44136 [preauth]
Nov 29 07:42:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:47 compute-0 ceph-mon[75050]: pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:48 compute-0 nova_compute[256729]: 2025-11-29 07:42:48.599 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:48 compute-0 nova_compute[256729]: 2025-11-29 07:42:48.599 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:42:48 compute-0 nova_compute[256729]: 2025-11-29 07:42:48.600 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:42:49 compute-0 sshd-session[260216]: Connection closed by authenticating user root 143.14.121.41 port 44144 [preauth]
Nov 29 07:42:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:52 compute-0 sshd-session[260218]: Connection closed by authenticating user root 143.14.121.41 port 44156 [preauth]
Nov 29 07:42:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:54 compute-0 ceph-mon[75050]: pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:55 compute-0 sshd-session[260220]: Connection closed by authenticating user root 143.14.121.41 port 57768 [preauth]
Nov 29 07:42:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.576 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.576 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.577 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.577 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.578 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.578 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.578 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.578 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:42:57 compute-0 nova_compute[256729]: 2025-11-29 07:42:57.578 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:42:57 compute-0 sshd-session[260222]: Connection closed by authenticating user root 143.14.121.41 port 57784 [preauth]
Nov 29 07:42:58 compute-0 ceph-mon[75050]: pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:58 compute-0 ceph-mon[75050]: pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:58 compute-0 ceph-mon[75050]: pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:58 compute-0 ceph-mon[75050]: pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:42:58 compute-0 nova_compute[256729]: 2025-11-29 07:42:58.973 256736 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.21 sec
Nov 29 07:42:58 compute-0 nova_compute[256729]: 2025-11-29 07:42:58.978 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:58 compute-0 nova_compute[256729]: 2025-11-29 07:42:58.978 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:58 compute-0 nova_compute[256729]: 2025-11-29 07:42:58.978 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:42:58 compute-0 nova_compute[256729]: 2025-11-29 07:42:58.979 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:42:58 compute-0 nova_compute[256729]: 2025-11-29 07:42:58.979 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:42:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:42:59.760 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:42:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:42:59.761 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:42:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:42:59.761 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:43:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835419294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:00 compute-0 nova_compute[256729]: 2025-11-29 07:43:00.775 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.796s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.006 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.008 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.008 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.009 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.115 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.116 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.133 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:43:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:43:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232352979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.580 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.586 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.742 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.744 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:43:01 compute-0 nova_compute[256729]: 2025-11-29 07:43:01.745 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:01 compute-0 sshd-session[260224]: Connection closed by authenticating user root 143.14.121.41 port 57800 [preauth]
Nov 29 07:43:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:02 compute-0 podman[260274]: 2025-11-29 07:43:02.714700445 +0000 UTC m=+0.069058658 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:43:02 compute-0 podman[260273]: 2025-11-29 07:43:02.731486054 +0000 UTC m=+0.091997964 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:43:02 compute-0 podman[260272]: 2025-11-29 07:43:02.731883294 +0000 UTC m=+0.104654069 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 07:43:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:04 compute-0 sshd-session[260270]: Connection closed by authenticating user root 143.14.121.41 port 39244 [preauth]
Nov 29 07:43:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:43:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5680 writes, 23K keys, 5680 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5680 writes, 924 syncs, 6.15 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:43:05
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log']
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:06 compute-0 ceph-mon[75050]: pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:06 compute-0 ceph-mon[75050]: pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:43:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:43:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:07 compute-0 ceph-mon[75050]: pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2835419294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4232352979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:07 compute-0 ceph-mon[75050]: pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:07 compute-0 ceph-mon[75050]: pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:07 compute-0 ceph-mon[75050]: pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:08 compute-0 sshd-session[260335]: Connection closed by authenticating user root 143.14.121.41 port 39258 [preauth]
Nov 29 07:43:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:43:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543057507' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:43:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:43:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543057507' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:43:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:43:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 6716 writes, 27K keys, 6716 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6716 writes, 1207 syncs, 5.56 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:43:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:11 compute-0 sshd-session[260337]: Invalid user mysql from 143.14.121.41 port 39272
Nov 29 07:43:11 compute-0 ceph-mon[75050]: pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1543057507' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:43:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1543057507' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:43:11 compute-0 sshd-session[260337]: Connection closed by invalid user mysql 143.14.121.41 port 39272 [preauth]
Nov 29 07:43:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:12 compute-0 ceph-mon[75050]: pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:14 compute-0 ceph-mon[75050]: pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:43:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:43:15 compute-0 sshd-session[260339]: Invalid user ubuntu from 143.14.121.41 port 40612
Nov 29 07:43:15 compute-0 sshd-session[260339]: Connection closed by invalid user ubuntu 143.14.121.41 port 40612 [preauth]
Nov 29 07:43:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:43:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.5 total, 600.0 interval
                                           Cumulative writes: 5656 writes, 23K keys, 5656 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5656 writes, 887 syncs, 6.38 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:43:16 compute-0 ceph-mon[75050]: pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:17 compute-0 sshd-session[260341]: Connection closed by authenticating user root 143.14.121.41 port 40628 [preauth]
Nov 29 07:43:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:19 compute-0 ceph-mon[75050]: pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:21 compute-0 sshd-session[260343]: Connection closed by authenticating user root 143.14.121.41 port 40650 [preauth]
Nov 29 07:43:21 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Check health
Nov 29 07:43:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:23 compute-0 ceph-mon[75050]: pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:24 compute-0 sshd-session[260345]: Connection closed by authenticating user root 143.14.121.41 port 40666 [preauth]
Nov 29 07:43:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:27 compute-0 ceph-mon[75050]: pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:27 compute-0 ceph-mon[75050]: pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:27 compute-0 ceph-mon[75050]: pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:28 compute-0 sshd-session[260347]: Connection closed by authenticating user root 143.14.121.41 port 35164 [preauth]
Nov 29 07:43:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:31 compute-0 sshd-session[260349]: Connection closed by authenticating user root 143.14.121.41 port 35170 [preauth]
Nov 29 07:43:32 compute-0 ceph-mon[75050]: pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:33 compute-0 podman[260354]: 2025-11-29 07:43:33.72760944 +0000 UTC m=+0.089424222 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 07:43:33 compute-0 podman[260355]: 2025-11-29 07:43:33.729828501 +0000 UTC m=+0.081814735 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:43:33 compute-0 podman[260353]: 2025-11-29 07:43:33.756912291 +0000 UTC m=+0.121598612 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:43:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:34 compute-0 ceph-mon[75050]: pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:34 compute-0 ceph-mon[75050]: pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:34 compute-0 sshd-session[260351]: Connection closed by authenticating user root 143.14.121.41 port 35904 [preauth]
Nov 29 07:43:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:43:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:43:36 compute-0 sudo[260420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:36 compute-0 sudo[260420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:36 compute-0 sudo[260420]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:36 compute-0 sudo[260445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:43:36 compute-0 sudo[260445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:36 compute-0 sudo[260445]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:36 compute-0 sudo[260470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:36 compute-0 sudo[260470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:36 compute-0 sudo[260470]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:36 compute-0 sudo[260495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:43:36 compute-0 sudo[260495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:37 compute-0 sudo[260495]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:37 compute-0 sudo[260550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:37 compute-0 sudo[260550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:37 compute-0 sudo[260550]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:37 compute-0 sudo[260575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:43:37 compute-0 sudo[260575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:37 compute-0 sudo[260575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:37 compute-0 ceph-mon[75050]: pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:37 compute-0 ceph-mon[75050]: pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:37 compute-0 sudo[260600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:37 compute-0 sudo[260600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:37 compute-0 sudo[260600]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:37 compute-0 sudo[260625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 07:43:37 compute-0 sudo[260625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:37 compute-0 sudo[260625]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:43:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:43:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:39 compute-0 sshd-session[260418]: Invalid user user01 from 143.14.121.41 port 35918
Nov 29 07:43:39 compute-0 nova_compute[256729]: 2025-11-29 07:43:39.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:39 compute-0 nova_compute[256729]: 2025-11-29 07:43:39.151 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:43:39 compute-0 nova_compute[256729]: 2025-11-29 07:43:39.188 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:43:39 compute-0 nova_compute[256729]: 2025-11-29 07:43:39.190 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:39 compute-0 nova_compute[256729]: 2025-11-29 07:43:39.191 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:43:39 compute-0 nova_compute[256729]: 2025-11-29 07:43:39.285 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:39 compute-0 sshd-session[260418]: Connection closed by invalid user user01 143.14.121.41 port 35918 [preauth]
Nov 29 07:43:39 compute-0 ceph-mon[75050]: pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:40 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:40 compute-0 sudo[260671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:40 compute-0 sudo[260671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:40 compute-0 sudo[260671]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:40 compute-0 sudo[260696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:43:40 compute-0 sudo[260696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:40 compute-0 sudo[260696]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:40 compute-0 sudo[260721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:40 compute-0 sudo[260721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:40 compute-0 sudo[260721]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:40 compute-0 sudo[260746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- inventory --format=json-pretty --filter-for-batch
Nov 29 07:43:40 compute-0 sudo[260746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:40 compute-0 podman[260809]: 2025-11-29 07:43:40.557569021 +0000 UTC m=+0.024573660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:43:41 compute-0 podman[260809]: 2025-11-29 07:43:41.631527334 +0000 UTC m=+1.098531993 container create 57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:43:41 compute-0 ceph-mon[75050]: pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:41 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:41 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:42 compute-0 sshd-session[260669]: Connection closed by authenticating user root 143.14.121.41 port 35932 [preauth]
Nov 29 07:43:42 compute-0 systemd[1]: Started libpod-conmon-57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78.scope.
Nov 29 07:43:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:43:42 compute-0 podman[260809]: 2025-11-29 07:43:42.352105182 +0000 UTC m=+1.819109891 container init 57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:43:42 compute-0 podman[260809]: 2025-11-29 07:43:42.360664495 +0000 UTC m=+1.827669144 container start 57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:43:42 compute-0 sharp_wescoff[260825]: 167 167
Nov 29 07:43:42 compute-0 systemd[1]: libpod-57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78.scope: Deactivated successfully.
Nov 29 07:43:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:42 compute-0 podman[260809]: 2025-11-29 07:43:42.543990688 +0000 UTC m=+2.010995327 container attach 57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:43:42 compute-0 podman[260809]: 2025-11-29 07:43:42.545104588 +0000 UTC m=+2.012109207 container died 57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wescoff, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:43:42 compute-0 nova_compute[256729]: 2025-11-29 07:43:42.606 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:42 compute-0 nova_compute[256729]: 2025-11-29 07:43:42.607 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:42 compute-0 nova_compute[256729]: 2025-11-29 07:43:42.607 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:43 compute-0 ceph-mon[75050]: pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:43 compute-0 nova_compute[256729]: 2025-11-29 07:43:43.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:43 compute-0 nova_compute[256729]: 2025-11-29 07:43:43.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:43 compute-0 nova_compute[256729]: 2025-11-29 07:43:43.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:43 compute-0 nova_compute[256729]: 2025-11-29 07:43:43.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:43:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-df2a3e0f36b8a37e6fb7ab0f485136e73c84a1649fc53f3cd18d7fab7d2296f7-merged.mount: Deactivated successfully.
Nov 29 07:43:44 compute-0 nova_compute[256729]: 2025-11-29 07:43:44.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:44 compute-0 ceph-mon[75050]: pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:45 compute-0 sshd-session[260828]: Connection closed by authenticating user root 143.14.121.41 port 42924 [preauth]
Nov 29 07:43:45 compute-0 nova_compute[256729]: 2025-11-29 07:43:45.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:45 compute-0 nova_compute[256729]: 2025-11-29 07:43:45.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:43:45 compute-0 nova_compute[256729]: 2025-11-29 07:43:45.151 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:43:45 compute-0 nova_compute[256729]: 2025-11-29 07:43:45.225 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:43:46 compute-0 nova_compute[256729]: 2025-11-29 07:43:46.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:43:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:47 compute-0 podman[260809]: 2025-11-29 07:43:47.326331394 +0000 UTC m=+6.793336053 container remove 57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:43:47 compute-0 systemd[1]: libpod-conmon-57059efe1d5a8fcae62f9188a5b5119a72bc331592e019b46838a2c891c5eb78.scope: Deactivated successfully.
Nov 29 07:43:47 compute-0 nova_compute[256729]: 2025-11-29 07:43:47.404 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:47 compute-0 nova_compute[256729]: 2025-11-29 07:43:47.405 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:47 compute-0 nova_compute[256729]: 2025-11-29 07:43:47.405 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:47 compute-0 nova_compute[256729]: 2025-11-29 07:43:47.405 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:43:47 compute-0 nova_compute[256729]: 2025-11-29 07:43:47.406 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:43:47 compute-0 sshd-session[260845]: Invalid user user1 from 143.14.121.41 port 42936
Nov 29 07:43:47 compute-0 podman[260855]: 2025-11-29 07:43:47.532295373 +0000 UTC m=+0.041266104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:43:47 compute-0 ceph-mon[75050]: pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:47 compute-0 podman[260855]: 2025-11-29 07:43:47.755215166 +0000 UTC m=+0.264185817 container create 8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:43:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:43:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2785980118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:47 compute-0 nova_compute[256729]: 2025-11-29 07:43:47.883 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:47 compute-0 systemd[1]: Started libpod-conmon-8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255.scope.
Nov 29 07:43:47 compute-0 sshd-session[260845]: Connection closed by invalid user user1 143.14.121.41 port 42936 [preauth]
Nov 29 07:43:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ec181ab1e12ac9edb06297b810046671b6030b1fe5663328058010092c72c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ec181ab1e12ac9edb06297b810046671b6030b1fe5663328058010092c72c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ec181ab1e12ac9edb06297b810046671b6030b1fe5663328058010092c72c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd5ec181ab1e12ac9edb06297b810046671b6030b1fe5663328058010092c72c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:48 compute-0 podman[260855]: 2025-11-29 07:43:48.034086131 +0000 UTC m=+0.543056812 container init 8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pare, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.036 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.037 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.037 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.037 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:48 compute-0 podman[260855]: 2025-11-29 07:43:48.051056924 +0000 UTC m=+0.560027595 container start 8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pare, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:43:48 compute-0 podman[260855]: 2025-11-29 07:43:48.176195352 +0000 UTC m=+0.685166003 container attach 8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.383 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.383 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.442 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:43:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.525 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.526 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
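
The inventory record above is what Placement uses for admission: a resource class has room while used + requested stays within (total - reserved) * allocation_ratio. A minimal sketch with the logged values (names illustrative, not nova code):

    # Placement-style capacity from the inventory logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # Admission rule: used + requested <= (total - reserved) * allocation_ratio
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity:g}")

With the 4.0 ratio the 8 host vCPUs admit up to 32 allocated vCPUs; memory is not overcommitted (7680 - 512 = 7168 MB) and disk is deliberately undercommitted at 0.9 (53.1 of 59 GB).
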
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.549 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.574 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:43:48 compute-0 nova_compute[256729]: 2025-11-29 07:43:48.597 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
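
The `ceph df` call above is how nova's RBD image backend measures pool capacity for the resource view. A minimal sketch of the same probe, assuming the ceph CLI, the client.openstack keyring, and /etc/ceph/ceph.conf are reachable from where it runs; the JSON field names below are a reef-era assumption, so check the schema on your release:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    print("total bytes:", stats["total_bytes"],
          "avail bytes:", stats["total_avail_bytes"])
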
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]: [
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:     {
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "available": false,
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "ceph_device": false,
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "lsm_data": {},
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "lvs": [],
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "path": "/dev/sr0",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "rejected_reasons": [
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "Insufficient space (<5GB)",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "Has a FileSystem"
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         ],
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         "sys_api": {
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "actuators": null,
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "device_nodes": "sr0",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "devname": "sr0",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "human_readable_size": "482.00 KB",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "id_bus": "ata",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "model": "QEMU DVD-ROM",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "nr_requests": "2",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "parent": "/dev/sr0",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "partitions": {},
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "path": "/dev/sr0",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "removable": "1",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "rev": "2.5+",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "ro": "0",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "rotational": "1",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "sas_address": "",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "sas_device_handle": "",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "scheduler_mode": "mq-deadline",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "sectors": 0,
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "sectorsize": "2048",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "size": 493568.0,
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "support_discard": "2048",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "type": "disk",
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:             "vendor": "QEMU"
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:         }
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]:     }
Nov 29 07:43:49 compute-0 flamboyant_pare[260893]: ]
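
The JSON array above is cephadm's periodic ceph-volume inventory of host block devices; the only candidate, the QEMU DVD drive at /dev/sr0, is rightly rejected as under 5 GB and carrying a filesystem. A sketch that reads such a capture back (the file name is hypothetical):

    import json

    with open("inventory.json") as f:  # hypothetical capture of the output above
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print(dev["path"], "usable")
        else:
            print(dev["path"], "rejected:", "; ".join(dev["rejected_reasons"]))
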
Nov 29 07:43:49 compute-0 systemd[1]: libpod-8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255.scope: Deactivated successfully.
Nov 29 07:43:49 compute-0 podman[260855]: 2025-11-29 07:43:49.53480705 +0000 UTC m=+2.043777741 container died 8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:43:49 compute-0 systemd[1]: libpod-8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255.scope: Consumed 1.524s CPU time.
Nov 29 07:43:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:43:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3579488348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:49 compute-0 nova_compute[256729]: 2025-11-29 07:43:49.883 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:43:49 compute-0 nova_compute[256729]: 2025-11-29 07:43:49.895 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:43:50 compute-0 nova_compute[256729]: 2025-11-29 07:43:50.045 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:43:50 compute-0 nova_compute[256729]: 2025-11-29 07:43:50.048 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:43:50 compute-0 nova_compute[256729]: 2025-11-29 07:43:50.049 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:43:50 compute-0 sshd-session[260898]: Invalid user user from 143.14.121.41 port 42940
Nov 29 07:43:50 compute-0 ceph-mon[75050]: pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2785980118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:50 compute-0 sshd-session[260898]: Connection closed by invalid user user 143.14.121.41 port 42940 [preauth]
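
The invalid-user probes from 143.14.121.41 here and below are routine SSH scanning that never gets past preauth. A rough way to size up the noise from an exported journal (log path illustrative; fail2ban or a tighter sshd policy is the usual remedy):

    import re
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("/var/log/messages") as f:  # or output of `journalctl -t sshd-session`
        for line in f:
            m = pattern.search(line)
            if m:
                hits[m.group(2)] += 1  # count attempts per source IP

    for ip, count in hits.most_common():
        print(ip, count)
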
Nov 29 07:43:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd5ec181ab1e12ac9edb06297b810046671b6030b1fe5663328058010092c72c-merged.mount: Deactivated successfully.
Nov 29 07:43:54 compute-0 ceph-mon[75050]: pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3579488348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:43:54 compute-0 sshd-session[262757]: Invalid user ubuntu from 143.14.121.41 port 42952
Nov 29 07:43:54 compute-0 podman[260855]: 2025-11-29 07:43:54.413910692 +0000 UTC m=+6.922881343 container remove 8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:43:54 compute-0 systemd[1]: libpod-conmon-8399b9e1a369ec38dde889a3b2247c7529a244175671ed4a4912ae8e7f9a2255.scope: Deactivated successfully.
Nov 29 07:43:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:54 compute-0 sudo[260746]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:43:54 compute-0 sshd-session[262757]: Connection closed by invalid user ubuntu 143.14.121.41 port 42952 [preauth]
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5e92a2ef-c839-4bde-a077-48e11cac7dae does not exist
Nov 29 07:43:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e9b37e0a-b243-4dcf-9f1c-2d9b547df62b does not exist
Nov 29 07:43:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev cdff1cba-0c6d-40ee-82cc-18c6f3c4d487 does not exist
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:43:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:43:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:43:55 compute-0 sudo[262761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:55 compute-0 sudo[262761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:55 compute-0 sudo[262761]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:55 compute-0 sudo[262786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:43:55 compute-0 sudo[262786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:55 compute-0 sudo[262786]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:55 compute-0 sudo[262812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:43:55 compute-0 sudo[262812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:43:55 compute-0 sudo[262812]: pam_unix(sudo:session): session closed for user root
Nov 29 07:43:55 compute-0 sudo[262837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:43:55 compute-0 sudo[262837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
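
The cephadm invocation above is the actual OSD-creation step: ceph-volume `lvm batch` over the three pre-created LVs, with `--no-auto` taking the devices as given rather than applying batch's automatic sorting, `--no-systemd` because cephadm manages the service units itself, and CEPH_VOLUME_OSDSPEC_AFFINITY stamping the OSDs with the default_drive_group spec name. A dry-run sketch, assuming a reef-era ceph-volume (which accepts --report) on a host or container where it is installed:

    import subprocess

    # Preview what batch would do without touching the LVs.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True,
    )
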
Nov 29 07:43:55 compute-0 podman[262903]: 2025-11-29 07:43:55.561372707 +0000 UTC m=+0.030415479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:43:55 compute-0 ceph-mon[75050]: pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:55 compute-0 ceph-mon[75050]: pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:55 compute-0 ceph-mon[75050]: pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:43:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:43:55 compute-0 podman[262903]: 2025-11-29 07:43:55.882254128 +0000 UTC m=+0.351296870 container create 60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:43:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:56 compute-0 systemd[1]: Started libpod-conmon-60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb.scope.
Nov 29 07:43:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:43:57 compute-0 sshd-session[262811]: Connection closed by authenticating user root 143.14.121.41 port 37278 [preauth]
Nov 29 07:43:57 compute-0 podman[262903]: 2025-11-29 07:43:57.366361403 +0000 UTC m=+1.835404235 container init 60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:43:57 compute-0 podman[262903]: 2025-11-29 07:43:57.373636891 +0000 UTC m=+1.842679663 container start 60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:43:57 compute-0 quirky_banach[262919]: 167 167
Nov 29 07:43:57 compute-0 systemd[1]: libpod-60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb.scope: Deactivated successfully.
Nov 29 07:43:57 compute-0 ceph-mon[75050]: pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:57 compute-0 podman[262903]: 2025-11-29 07:43:57.777179373 +0000 UTC m=+2.246222205 container attach 60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:43:57 compute-0 podman[262903]: 2025-11-29 07:43:57.778368996 +0000 UTC m=+2.247411778 container died 60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-894599c918feb28883de7a44d035e90ebc8bf31d07c850b3e1e419f81258f288-merged.mount: Deactivated successfully.
Nov 29 07:43:57 compute-0 podman[262903]: 2025-11-29 07:43:57.862439346 +0000 UTC m=+2.331482088 container remove 60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banach, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:43:57 compute-0 systemd[1]: libpod-conmon-60f5de0144ebfe3c9df5c7c6f11fd7025dab6e28f775907490e2a5b30aa346fb.scope: Deactivated successfully.
Nov 29 07:43:58 compute-0 podman[262947]: 2025-11-29 07:43:58.007833006 +0000 UTC m=+0.022558475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:43:58 compute-0 podman[262947]: 2025-11-29 07:43:58.12105858 +0000 UTC m=+0.135784029 container create b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:43:58 compute-0 systemd[1]: Started libpod-conmon-b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73.scope.
Nov 29 07:43:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81260da6cf508add7dbdab873c8c13cbda089e6a2fd0a5a29a50369d68d2ae7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81260da6cf508add7dbdab873c8c13cbda089e6a2fd0a5a29a50369d68d2ae7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81260da6cf508add7dbdab873c8c13cbda089e6a2fd0a5a29a50369d68d2ae7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81260da6cf508add7dbdab873c8c13cbda089e6a2fd0a5a29a50369d68d2ae7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81260da6cf508add7dbdab873c8c13cbda089e6a2fd0a5a29a50369d68d2ae7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
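
These xfs notices mean the filesystems backing the bind mounts use 32-bit inode timestamps (the bigtime feature is off), so timestamps cap at 0x7fffffff seconds after the epoch. The cutoff date checks out:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the classic y2038 limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
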
Nov 29 07:43:58 compute-0 podman[262947]: 2025-11-29 07:43:58.239823325 +0000 UTC m=+0.254548774 container init b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:43:58 compute-0 podman[262947]: 2025-11-29 07:43:58.247705869 +0000 UTC m=+0.262431328 container start b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:43:58 compute-0 podman[262947]: 2025-11-29 07:43:58.429390118 +0000 UTC m=+0.444115577 container attach b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:43:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:59 compute-0 ceph-mon[75050]: pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:43:59 compute-0 festive_ardinghelli[262963]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:43:59 compute-0 festive_ardinghelli[262963]: --> relative data size: 1.0
Nov 29 07:43:59 compute-0 festive_ardinghelli[262963]: --> All data devices are unavailable
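
"All data devices are unavailable" is batch declining to act, most likely because the three LVs already carry prepared OSDs (the lvm list output further down shows ceph.osd_id tags on each), so a re-run has nothing left to create. One way to confirm from the host, assuming lvm2's JSON report layout:

    import json
    import subprocess

    out = subprocess.check_output(
        ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"]
    )
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id=" in lv["lv_tags"]:
            print(f'{lv["vg_name"]}/{lv["lv_name"]} already prepared as an OSD')
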
Nov 29 07:43:59 compute-0 systemd[1]: libpod-b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73.scope: Deactivated successfully.
Nov 29 07:43:59 compute-0 systemd[1]: libpod-b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73.scope: Consumed 1.018s CPU time.
Nov 29 07:43:59 compute-0 podman[262947]: 2025-11-29 07:43:59.322219599 +0000 UTC m=+1.336945058 container died b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:43:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-81260da6cf508add7dbdab873c8c13cbda089e6a2fd0a5a29a50369d68d2ae7e-merged.mount: Deactivated successfully.
Nov 29 07:43:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:43:59.761 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:43:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:43:59.763 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:43:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:43:59.763 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:00 compute-0 podman[262947]: 2025-11-29 07:44:00.020293414 +0000 UTC m=+2.035018863 container remove b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:44:00 compute-0 systemd[1]: libpod-conmon-b3953511776f4f60c5884ad63c58490511f4d1123fd402b2a821170b1b51ed73.scope: Deactivated successfully.
Nov 29 07:44:00 compute-0 sudo[262837]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:00 compute-0 sudo[263007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:00 compute-0 sudo[263007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:00 compute-0 sudo[263007]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:00 compute-0 sshd-session[262935]: Connection closed by authenticating user root 143.14.121.41 port 37294 [preauth]
Nov 29 07:44:00 compute-0 sudo[263032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:00 compute-0 sudo[263032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:00 compute-0 sudo[263032]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:00 compute-0 sudo[263057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:00 compute-0 sudo[263057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:00 compute-0 sudo[263057]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:00 compute-0 sudo[263082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:44:00 compute-0 sudo[263082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
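
Here cephadm refreshes its view of existing OSDs with `ceph-volume lvm list --format json`; the payload that follows maps OSD ids to their LV records. A sketch reducing such a capture to an osd_id -> device table (file name hypothetical; field names taken from the output below):

    import json

    with open("lvm_list.json") as f:
        listing = json.load(f)

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f'osd.{osd_id}: {lv["lv_path"]} fsid={tags["ceph.osd_fsid"]}')
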
Nov 29 07:44:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:00 compute-0 podman[263148]: 2025-11-29 07:44:00.646880831 +0000 UTC m=+0.022464143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:00 compute-0 podman[263148]: 2025-11-29 07:44:00.7133146 +0000 UTC m=+0.088897912 container create fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:44:00 compute-0 systemd[1]: Started libpod-conmon-fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5.scope.
Nov 29 07:44:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:00 compute-0 podman[263148]: 2025-11-29 07:44:00.874381998 +0000 UTC m=+0.249965310 container init fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:44:00 compute-0 podman[263148]: 2025-11-29 07:44:00.881407769 +0000 UTC m=+0.256991041 container start fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:44:00 compute-0 pedantic_chatterjee[263164]: 167 167
Nov 29 07:44:00 compute-0 systemd[1]: libpod-fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5.scope: Deactivated successfully.
Nov 29 07:44:00 compute-0 conmon[263164]: conmon fb19a8cb8a9e35444359 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5.scope/container/memory.events
Nov 29 07:44:00 compute-0 podman[263148]: 2025-11-29 07:44:00.98458971 +0000 UTC m=+0.360173022 container attach fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:44:00 compute-0 podman[263148]: 2025-11-29 07:44:00.986857382 +0000 UTC m=+0.362440674 container died fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-30463488478ac0befd681f3ad520de6a74b90516f77e410fe2986794d2d70e5a-merged.mount: Deactivated successfully.
Nov 29 07:44:01 compute-0 podman[263148]: 2025-11-29 07:44:01.204679565 +0000 UTC m=+0.580262887 container remove fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:44:01 compute-0 systemd[1]: libpod-conmon-fb19a8cb8a9e354443592dba25b4827f301bdb5a32d389ae4bd448298e6782e5.scope: Deactivated successfully.
Nov 29 07:44:01 compute-0 podman[263190]: 2025-11-29 07:44:01.396211602 +0000 UTC m=+0.027733357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:01 compute-0 podman[263190]: 2025-11-29 07:44:01.533250355 +0000 UTC m=+0.164772080 container create bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 07:44:01 compute-0 systemd[1]: Started libpod-conmon-bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f.scope.
Nov 29 07:44:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af42a2e827f833cb34ad23027e3a62319ba3f5ac782c306ef3df0ecfb23e930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af42a2e827f833cb34ad23027e3a62319ba3f5ac782c306ef3df0ecfb23e930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af42a2e827f833cb34ad23027e3a62319ba3f5ac782c306ef3df0ecfb23e930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4af42a2e827f833cb34ad23027e3a62319ba3f5ac782c306ef3df0ecfb23e930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:02 compute-0 podman[263190]: 2025-11-29 07:44:02.420819841 +0000 UTC m=+1.052341606 container init bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:44:02 compute-0 ceph-mon[75050]: pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:02 compute-0 podman[263190]: 2025-11-29 07:44:02.433711092 +0000 UTC m=+1.065232827 container start bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:44:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:02 compute-0 podman[263190]: 2025-11-29 07:44:02.457305545 +0000 UTC m=+1.088827290 container attach bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]: {
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:     "0": [
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:         {
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "devices": [
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "/dev/loop3"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             ],
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_name": "ceph_lv0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_size": "21470642176",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "name": "ceph_lv0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "tags": {
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cluster_name": "ceph",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.crush_device_class": "",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.encrypted": "0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osd_id": "0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.type": "block",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.vdo": "0"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             },
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "type": "block",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "vg_name": "ceph_vg0"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:         }
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:     ],
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:     "1": [
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:         {
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "devices": [
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "/dev/loop4"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             ],
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_name": "ceph_lv1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_size": "21470642176",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "name": "ceph_lv1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "tags": {
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cluster_name": "ceph",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.crush_device_class": "",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.encrypted": "0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osd_id": "1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.type": "block",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.vdo": "0"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             },
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "type": "block",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "vg_name": "ceph_vg1"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:         }
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:     ],
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:     "2": [
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:         {
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "devices": [
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "/dev/loop5"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             ],
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_name": "ceph_lv2",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_size": "21470642176",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "name": "ceph_lv2",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "tags": {
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.cluster_name": "ceph",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.crush_device_class": "",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.encrypted": "0",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osd_id": "2",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.type": "block",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:                 "ceph.vdo": "0"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             },
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "type": "block",
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:             "vg_name": "ceph_vg2"
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:         }
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]:     ]
Nov 29 07:44:03 compute-0 xenodochial_vaughan[263207]: }
Nov 29 07:44:03 compute-0 systemd[1]: libpod-bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f.scope: Deactivated successfully.
Nov 29 07:44:03 compute-0 podman[263216]: 2025-11-29 07:44:03.257544973 +0000 UTC m=+0.030204544 container died bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4af42a2e827f833cb34ad23027e3a62319ba3f5ac782c306ef3df0ecfb23e930-merged.mount: Deactivated successfully.
Nov 29 07:44:03 compute-0 sshd-session[263107]: Connection closed by authenticating user root 143.14.121.41 port 37302 [preauth]
Nov 29 07:44:03 compute-0 podman[263216]: 2025-11-29 07:44:03.873430119 +0000 UTC m=+0.646089680 container remove bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:44:03 compute-0 systemd[1]: libpod-conmon-bfebe29f5acd14b1b30ac2d3d1037da4c8b93ce4663d7e89d86f9818a4c9d75f.scope: Deactivated successfully.
Nov 29 07:44:03 compute-0 ceph-mon[75050]: pgmap v1057: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:03 compute-0 sudo[263082]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:03 compute-0 podman[263234]: 2025-11-29 07:44:03.997779946 +0000 UTC m=+0.077625686 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:44:03 compute-0 podman[263235]: 2025-11-29 07:44:03.997247152 +0000 UTC m=+0.071427017 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:44:04 compute-0 sudo[263266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:04 compute-0 sudo[263266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 sudo[263266]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:04 compute-0 podman[263232]: 2025-11-29 07:44:04.026757285 +0000 UTC m=+0.107341815 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 07:44:04 compute-0 sudo[263321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:44:04 compute-0 sudo[263321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 sudo[263321]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:04 compute-0 sudo[263346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:04 compute-0 sudo[263346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 sudo[263346]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:04 compute-0 sudo[263371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:44:04 compute-0 sudo[263371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:04 compute-0 podman[263437]: 2025-11-29 07:44:04.496694986 +0000 UTC m=+0.026507223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:04 compute-0 podman[263437]: 2025-11-29 07:44:04.680263527 +0000 UTC m=+0.210075744 container create 5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.758412) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402244758497, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1413, "num_deletes": 251, "total_data_size": 2276144, "memory_usage": 2312872, "flush_reason": "Manual Compaction"}
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 29 07:44:04 compute-0 systemd[1]: Started libpod-conmon-5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738.scope.
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402244865478, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1327725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16618, "largest_seqno": 18030, "table_properties": {"data_size": 1322728, "index_size": 2329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13021, "raw_average_key_size": 20, "raw_value_size": 1311681, "raw_average_value_size": 2059, "num_data_blocks": 107, "num_entries": 637, "num_filter_entries": 637, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402057, "oldest_key_time": 1764402057, "file_creation_time": 1764402244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 107107 microseconds, and 8184 cpu microseconds.
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:44:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.865531) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1327725 bytes OK
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.865553) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.877051) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.877079) EVENT_LOG_v1 {"time_micros": 1764402244877072, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.877099) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2269794, prev total WAL file size 2269794, number of live WAL files 2.
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.878051) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1296KB)], [38(8271KB)]
Nov 29 07:44:04 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402244878163, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9797982, "oldest_snapshot_seqno": -1}
Nov 29 07:44:04 compute-0 podman[263437]: 2025-11-29 07:44:04.899487258 +0000 UTC m=+0.429299495 container init 5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:44:04 compute-0 podman[263437]: 2025-11-29 07:44:04.909629663 +0000 UTC m=+0.439441880 container start 5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:44:04 compute-0 jovial_colden[263454]: 167 167
Nov 29 07:44:04 compute-0 systemd[1]: libpod-5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738.scope: Deactivated successfully.
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4440 keys, 7391014 bytes, temperature: kUnknown
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402245145064, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7391014, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7361297, "index_size": 17511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 109169, "raw_average_key_size": 24, "raw_value_size": 7280881, "raw_average_value_size": 1639, "num_data_blocks": 740, "num_entries": 4440, "num_filter_entries": 4440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402244, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:44:05 compute-0 podman[263437]: 2025-11-29 07:44:05.147149443 +0000 UTC m=+0.676961670 container attach 5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:44:05 compute-0 podman[263437]: 2025-11-29 07:44:05.149459846 +0000 UTC m=+0.679272073 container died 5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.145346) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7391014 bytes
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.166064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.7 rd, 27.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.1 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(12.9) write-amplify(5.6) OK, records in: 4885, records dropped: 445 output_compression: NoCompression
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.166102) EVENT_LOG_v1 {"time_micros": 1764402245166086, "job": 18, "event": "compaction_finished", "compaction_time_micros": 267007, "compaction_time_cpu_micros": 35619, "output_level": 6, "num_output_files": 1, "total_output_size": 7391014, "num_input_records": 4885, "num_output_records": 4440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402245166503, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402245167904, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:04.877865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.168006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.168011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.168013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.168015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:44:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:44:05.168018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e022e33b56c9020a615a2097027e9b76647ad755b67b5a8355876b596450d38-merged.mount: Deactivated successfully.
Nov 29 07:44:05 compute-0 podman[263437]: 2025-11-29 07:44:05.545527474 +0000 UTC m=+1.075339691 container remove 5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:44:05
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', 'volumes', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:05 compute-0 systemd[1]: libpod-conmon-5b26cb23ebaecb0e69cfcd6d1278596a4937e6ee06c4772c8b5bda8513201738.scope: Deactivated successfully.
Nov 29 07:44:05 compute-0 podman[263479]: 2025-11-29 07:44:05.711516806 +0000 UTC m=+0.033037391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:44:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:44:07 compute-0 podman[263479]: 2025-11-29 07:44:07.062724001 +0000 UTC m=+1.384244596 container create 1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_davinci, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:44:07 compute-0 ceph-mon[75050]: pgmap v1058: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:07 compute-0 sshd-session[263470]: Connection closed by authenticating user root 143.14.121.41 port 44866 [preauth]
Nov 29 07:44:07 compute-0 systemd[1]: Started libpod-conmon-1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9.scope.
Nov 29 07:44:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6306bf9445d3555e55e3b5db481e27d432cffa5a1efb69eef04afdb860f6beb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6306bf9445d3555e55e3b5db481e27d432cffa5a1efb69eef04afdb860f6beb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6306bf9445d3555e55e3b5db481e27d432cffa5a1efb69eef04afdb860f6beb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6306bf9445d3555e55e3b5db481e27d432cffa5a1efb69eef04afdb860f6beb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:44:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:08 compute-0 podman[263479]: 2025-11-29 07:44:08.535031885 +0000 UTC m=+2.856552460 container init 1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:44:08 compute-0 podman[263479]: 2025-11-29 07:44:08.544280707 +0000 UTC m=+2.865801302 container start 1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:44:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:44:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3478737305' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:44:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:44:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3478737305' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:44:08 compute-0 podman[263479]: 2025-11-29 07:44:08.769315916 +0000 UTC m=+3.090836521 container attach 1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:44:09 compute-0 ceph-mon[75050]: pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:09 compute-0 recursing_davinci[263496]: {
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "osd_id": 2,
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "type": "bluestore"
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:     },
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "osd_id": 1,
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "type": "bluestore"
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:     },
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "osd_id": 0,
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:         "type": "bluestore"
Nov 29 07:44:09 compute-0 recursing_davinci[263496]:     }
Nov 29 07:44:09 compute-0 recursing_davinci[263496]: }
Nov 29 07:44:09 compute-0 systemd[1]: libpod-1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9.scope: Deactivated successfully.
Nov 29 07:44:09 compute-0 podman[263479]: 2025-11-29 07:44:09.645109242 +0000 UTC m=+3.966629817 container died 1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 07:44:09 compute-0 systemd[1]: libpod-1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9.scope: Consumed 1.092s CPU time.
Nov 29 07:44:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6306bf9445d3555e55e3b5db481e27d432cffa5a1efb69eef04afdb860f6beb-merged.mount: Deactivated successfully.
Nov 29 07:44:10 compute-0 ceph-mon[75050]: pgmap v1060: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3478737305' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:44:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3478737305' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:44:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:11 compute-0 podman[263479]: 2025-11-29 07:44:11.103301391 +0000 UTC m=+5.424821976 container remove 1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:44:11 compute-0 systemd[1]: libpod-conmon-1e8ea2117726cd57f8cc905ff21f94005d2cce3f39305f6c5ca8fef2c55fa8a9.scope: Deactivated successfully.
Nov 29 07:44:11 compute-0 sudo[263371]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:44:11 compute-0 sshd-session[263499]: Connection closed by authenticating user root 143.14.121.41 port 44868 [preauth]
Nov 29 07:44:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:44:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:44:13 compute-0 ceph-mon[75050]: pgmap v1061: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:14 compute-0 sshd-session[263546]: Connection closed by authenticating user root 143.14.121.41 port 57700 [preauth]
Nov 29 07:44:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:44:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:44:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:44:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b1b0f5db-e11f-4897-8a34-5e2731b38fdc does not exist
Nov 29 07:44:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 448221b1-df4a-4ec0-a6f4-f5e24e723ef9 does not exist
Nov 29 07:44:16 compute-0 sudo[263550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:44:16 compute-0 sudo[263550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:16 compute-0 sudo[263550]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:17 compute-0 sshd-session[263548]: Connection closed by authenticating user root 143.14.121.41 port 57710 [preauth]
Nov 29 07:44:17 compute-0 sudo[263575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:44:17 compute-0 sudo[263575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:44:17 compute-0 sudo[263575]: pam_unix(sudo:session): session closed for user root
Nov 29 07:44:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:44:17 compute-0 ceph-mon[75050]: pgmap v1062: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:19 compute-0 ceph-mon[75050]: pgmap v1063: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:19 compute-0 ceph-mon[75050]: pgmap v1064: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:44:19 compute-0 sshd-session[263600]: Connection closed by authenticating user root 143.14.121.41 port 57726 [preauth]
Nov 29 07:44:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:20 compute-0 ceph-mon[75050]: pgmap v1065: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:20 compute-0 ceph-mon[75050]: pgmap v1066: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:23 compute-0 sshd-session[263602]: Invalid user prueba from 143.14.121.41 port 57734
Nov 29 07:44:24 compute-0 ceph-mon[75050]: pgmap v1067: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:24 compute-0 sshd-session[263602]: Connection closed by invalid user prueba 143.14.121.41 port 57734 [preauth]
Nov 29 07:44:25 compute-0 ceph-mon[75050]: pgmap v1068: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:27 compute-0 ceph-mon[75050]: pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:28 compute-0 sshd-session[263604]: Invalid user what from 143.14.121.41 port 58550
Nov 29 07:44:28 compute-0 sshd-session[263604]: Connection closed by invalid user what 143.14.121.41 port 58550 [preauth]
Nov 29 07:44:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:31 compute-0 ceph-mon[75050]: pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:31 compute-0 sshd-session[263606]: Invalid user samba from 143.14.121.41 port 58552
Nov 29 07:44:32 compute-0 sshd-session[263606]: Connection closed by invalid user samba 143.14.121.41 port 58552 [preauth]
Nov 29 07:44:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:34 compute-0 podman[263612]: 2025-11-29 07:44:34.717474746 +0000 UTC m=+0.072893787 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 07:44:34 compute-0 ceph-mon[75050]: pgmap v1071: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:34 compute-0 ceph-mon[75050]: pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:34 compute-0 podman[263611]: 2025-11-29 07:44:34.741663474 +0000 UTC m=+0.099631124 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 07:44:34 compute-0 podman[263610]: 2025-11-29 07:44:34.760288942 +0000 UTC m=+0.118743135 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:44:34 compute-0 sshd-session[263608]: Connection closed by authenticating user root 143.14.121.41 port 53756 [preauth]
Nov 29 07:44:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:44:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:44:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:44:36 compute-0 ceph-mon[75050]: pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:44:37 compute-0 sshd-session[263676]: Connection closed by authenticating user root 143.14.121.41 port 53758 [preauth]
Nov 29 07:44:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:44:38 compute-0 ceph-mon[75050]: pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Nov 29 07:44:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:44:40 compute-0 ceph-mon[75050]: pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:44:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:41 compute-0 sshd-session[263682]: Connection closed by authenticating user root 143.14.121.41 port 53764 [preauth]
Nov 29 07:44:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 07:44:43 compute-0 ceph-mon[75050]: pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 1 op/s
Nov 29 07:44:44 compute-0 ceph-mon[75050]: pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 07:44:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 07:44:45 compute-0 ceph-mon[75050]: pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 07:44:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.045 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.045 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 sshd-session[263685]: Connection closed by authenticating user root 143.14.121.41 port 52716 [preauth]
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.351 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.352 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.352 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.352 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.352 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.352 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:46 compute-0 nova_compute[256729]: 2025-11-29 07:44:46.352 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:44:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 07:44:47 compute-0 nova_compute[256729]: 2025-11-29 07:44:47.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:47 compute-0 nova_compute[256729]: 2025-11-29 07:44:47.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:44:47 compute-0 nova_compute[256729]: 2025-11-29 07:44:47.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:44:47 compute-0 nova_compute[256729]: 2025-11-29 07:44:47.169 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:44:47 compute-0 ceph-mon[75050]: pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.178 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.179 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.179 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.179 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.180 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Nov 29 07:44:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:44:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3085357591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.588 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.751 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.753 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5180MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.753 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.753 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.822 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.822 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:44:48 compute-0 nova_compute[256729]: 2025-11-29 07:44:48.843 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:44:49 compute-0 sshd-session[263688]: Connection closed by authenticating user root 143.14.121.41 port 52724 [preauth]
Nov 29 07:44:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:44:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667419343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:49 compute-0 nova_compute[256729]: 2025-11-29 07:44:49.497 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:44:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3085357591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:49 compute-0 nova_compute[256729]: 2025-11-29 07:44:49.505 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:44:49 compute-0 nova_compute[256729]: 2025-11-29 07:44:49.732 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:44:49 compute-0 nova_compute[256729]: 2025-11-29 07:44:49.734 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:44:49 compute-0 nova_compute[256729]: 2025-11-29 07:44:49.734 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 07:44:50 compute-0 ceph-mon[75050]: pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Nov 29 07:44:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1667419343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:44:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:51 compute-0 sshd-session[263732]: Invalid user music from 143.14.121.41 port 52728
Nov 29 07:44:52 compute-0 ceph-mon[75050]: pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 07:44:52 compute-0 sshd-session[263732]: Connection closed by invalid user music 143.14.121.41 port 52728 [preauth]
Nov 29 07:44:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Nov 29 07:44:53 compute-0 ceph-mon[75050]: pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Nov 29 07:44:54 compute-0 sshd-session[263736]: Invalid user ftpuser from 143.14.121.41 port 51190
Nov 29 07:44:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Nov 29 07:44:54 compute-0 sshd-session[263736]: Connection closed by invalid user ftpuser 143.14.121.41 port 51190 [preauth]
Nov 29 07:44:55 compute-0 ceph-mon[75050]: pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Nov 29 07:44:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:44:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 07:44:57 compute-0 ceph-mon[75050]: pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 07:44:57 compute-0 sshd-session[263738]: Invalid user download from 143.14.121.41 port 51204
Nov 29 07:44:57 compute-0 sshd-session[263738]: Connection closed by invalid user download 143.14.121.41 port 51204 [preauth]
Nov 29 07:44:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Nov 29 07:44:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:44:59.762 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:44:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:44:59.764 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:44:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:44:59.765 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:44:59 compute-0 ceph-mon[75050]: pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Nov 29 07:45:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Nov 29 07:45:01 compute-0 ceph-mon[75050]: pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Nov 29 07:45:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:01 compute-0 sshd-session[263740]: Connection closed by authenticating user root 143.14.121.41 port 51220 [preauth]
Nov 29 07:45:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 07:45:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 29 07:45:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 07:45:03 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 07:45:04 compute-0 sshd-session[263742]: Connection closed by authenticating user root 143.14.121.41 port 51826 [preauth]
Nov 29 07:45:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 07:45:04 compute-0 ceph-mon[75050]: pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 29 07:45:04 compute-0 ceph-mon[75050]: osdmap e109: 3 total, 3 up, 3 in
Nov 29 07:45:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:45:05
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images']
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:45:05 compute-0 podman[263746]: 2025-11-29 07:45:05.747141631 +0000 UTC m=+0.121075979 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 07:45:05 compute-0 podman[263748]: 2025-11-29 07:45:05.748519409 +0000 UTC m=+0.103616174 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:45:05 compute-0 podman[263747]: 2025-11-29 07:45:05.758035448 +0000 UTC m=+0.117618095 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Nov 29 07:45:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 07:45:06 compute-0 ceph-mon[75050]: pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:45:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:45:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 07:45:07 compute-0 sshd-session[263744]: Connection closed by authenticating user root 143.14.121.41 port 51830 [preauth]
Nov 29 07:45:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 255 B/s wr, 8 op/s
Nov 29 07:45:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:45:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3829265795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:45:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:45:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3829265795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:45:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 255 B/s wr, 5 op/s
Nov 29 07:45:11 compute-0 ceph-mon[75050]: pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Nov 29 07:45:11 compute-0 ceph-mon[75050]: osdmap e110: 3 total, 3 up, 3 in
Nov 29 07:45:11 compute-0 sshd-session[263810]: Connection closed by authenticating user root 143.14.121.41 port 51844 [preauth]
Nov 29 07:45:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 333 B/s wr, 3 op/s
Nov 29 07:45:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:13 compute-0 sshd-session[263812]: Invalid user developer from 143.14.121.41 port 52846
Nov 29 07:45:14 compute-0 ceph-mon[75050]: pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 255 B/s wr, 8 op/s
Nov 29 07:45:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3829265795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3829265795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:45:14 compute-0 ceph-mon[75050]: pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 255 B/s wr, 5 op/s
Nov 29 07:45:14 compute-0 ceph-mon[75050]: pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 333 B/s wr, 3 op/s
Nov 29 07:45:14 compute-0 sshd-session[263812]: Connection closed by invalid user developer 143.14.121.41 port 52846 [preauth]
Nov 29 07:45:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 409 B/s wr, 3 op/s
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:45:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:45:16 compute-0 ceph-mon[75050]: pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 409 B/s wr, 3 op/s
Nov 29 07:45:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 511 B/s wr, 5 op/s
Nov 29 07:45:17 compute-0 sudo[263816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:17 compute-0 sudo[263816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 sudo[263816]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 sudo[263841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:17 compute-0 sudo[263841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 sudo[263841]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 sudo[263866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:17 compute-0 sudo[263866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 sudo[263866]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:17 compute-0 sudo[263891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:45:17 compute-0 sudo[263891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:17 compute-0 sudo[263891]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 07:45:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:45:18 compute-0 sshd-session[263814]: Invalid user user2 from 143.14.121.41 port 52848
Nov 29 07:45:18 compute-0 sshd-session[263814]: Connection closed by invalid user user2 143.14.121.41 port 52848 [preauth]
Nov 29 07:45:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 527 B/s wr, 6 op/s
Nov 29 07:45:18 compute-0 ceph-mon[75050]: pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 511 B/s wr, 5 op/s
Nov 29 07:45:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 07:45:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 07:45:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 409 B/s wr, 6 op/s
Nov 29 07:45:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:45:21 compute-0 ceph-mon[75050]: pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 527 B/s wr, 6 op/s
Nov 29 07:45:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 716 B/s wr, 9 op/s
Nov 29 07:45:22 compute-0 sshd-session[263936]: Connection closed by authenticating user root 143.14.121.41 port 52864 [preauth]
Nov 29 07:45:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:23 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:23 compute-0 sudo[263942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:23 compute-0 sudo[263942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:23 compute-0 sudo[263942]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:23 compute-0 sudo[263967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:23 compute-0 sudo[263967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:23 compute-0 sudo[263967]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:24 compute-0 sudo[263992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:24 compute-0 sudo[263992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:24 compute-0 sudo[263992]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:24 compute-0 sudo[264017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:45:24 compute-0 sudo[264017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 613 B/s wr, 9 op/s
Nov 29 07:45:24 compute-0 sudo[264017]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:45:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:45:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:45:24 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:45:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:45:24 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:24 compute-0 ceph-mon[75050]: osdmap e111: 3 total, 3 up, 3 in
Nov 29 07:45:24 compute-0 ceph-mon[75050]: pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 409 B/s wr, 6 op/s
Nov 29 07:45:24 compute-0 ceph-mon[75050]: pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 716 B/s wr, 9 op/s
Nov 29 07:45:25 compute-0 sshd-session[263940]: Connection closed by authenticating user root 143.14.121.41 port 45820 [preauth]
Nov 29 07:45:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3f6eb9ac-d744-4f40-a1fa-85c1df54b7fa does not exist
Nov 29 07:45:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9bc542a5-22e6-4818-abf9-22d8a02028e0 does not exist
Nov 29 07:45:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 47d5bfb8-5047-419a-adb3-04a228af3bd3 does not exist
Nov 29 07:45:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:45:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:45:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:45:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:45:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:45:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:45:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:26 compute-0 ceph-mon[75050]: pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 613 B/s wr, 9 op/s
Nov 29 07:45:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:45:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:45:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 8.4 MiB data, 156 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 819 KiB/s wr, 8 op/s
Nov 29 07:45:27 compute-0 sudo[264076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:27 compute-0 sudo[264076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:27 compute-0 sudo[264076]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:45:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:45:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:45:27 compute-0 ceph-mon[75050]: pgmap v1102: 305 pgs: 305 active+clean; 8.4 MiB data, 156 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 819 KiB/s wr, 8 op/s
Nov 29 07:45:27 compute-0 sudo[264101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:27 compute-0 sudo[264101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:27 compute-0 sudo[264101]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:27 compute-0 sudo[264126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:27 compute-0 sudo[264126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:27 compute-0 sudo[264126]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:27 compute-0 sudo[264151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:45:27 compute-0 sudo[264151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:28 compute-0 podman[264215]: 2025-11-29 07:45:28.043829488 +0000 UTC m=+0.029836524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 16 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.6 MiB/s wr, 8 op/s
Nov 29 07:45:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:28 compute-0 podman[264215]: 2025-11-29 07:45:28.903194005 +0000 UTC m=+0.889201051 container create 17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:29 compute-0 ceph-mon[75050]: pgmap v1103: 305 pgs: 305 active+clean; 16 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.6 MiB/s wr, 8 op/s
Nov 29 07:45:29 compute-0 systemd[1]: Started libpod-conmon-17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19.scope.
Nov 29 07:45:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:30 compute-0 sshd-session[264074]: Connection closed by authenticating user root 143.14.121.41 port 45824 [preauth]
Nov 29 07:45:30 compute-0 podman[264215]: 2025-11-29 07:45:30.244333487 +0000 UTC m=+2.230340513 container init 17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:45:30 compute-0 podman[264215]: 2025-11-29 07:45:30.259905832 +0000 UTC m=+2.245912838 container start 17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:45:30 compute-0 quizzical_banzai[264232]: 167 167
Nov 29 07:45:30 compute-0 systemd[1]: libpod-17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19.scope: Deactivated successfully.
Nov 29 07:45:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 16 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 1.6 MiB/s wr, 8 op/s
Nov 29 07:45:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 07:45:30 compute-0 podman[264215]: 2025-11-29 07:45:30.920530366 +0000 UTC m=+2.906537462 container attach 17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:45:30 compute-0 podman[264215]: 2025-11-29 07:45:30.921254176 +0000 UTC m=+2.907261182 container died 17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 07:45:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 07:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d81c0be4ee4be0d9b085f86a0c9b594f24d0f46bbfff7df2c6060a6e364e549b-merged.mount: Deactivated successfully.
Nov 29 07:45:31 compute-0 ceph-mon[75050]: pgmap v1104: 305 pgs: 305 active+clean; 16 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 1.6 MiB/s wr, 8 op/s
Nov 29 07:45:31 compute-0 ceph-mon[75050]: osdmap e112: 3 total, 3 up, 3 in
Nov 29 07:45:31 compute-0 podman[264215]: 2025-11-29 07:45:31.550583418 +0000 UTC m=+3.536590424 container remove 17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:31 compute-0 systemd[1]: libpod-conmon-17db24cbcdac6450f4cea733c773beaae9b084089034723d988c26338d803e19.scope: Deactivated successfully.
Nov 29 07:45:31 compute-0 podman[264258]: 2025-11-29 07:45:31.750250958 +0000 UTC m=+0.051523885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:31 compute-0 podman[264258]: 2025-11-29 07:45:31.828229411 +0000 UTC m=+0.129502358 container create 5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_neumann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:45:32 compute-0 systemd[1]: Started libpod-conmon-5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319.scope.
Nov 29 07:45:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb55345ca27a80350e4333e5e5e60917a828aee86fa334cbd9c4be623c42b6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb55345ca27a80350e4333e5e5e60917a828aee86fa334cbd9c4be623c42b6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb55345ca27a80350e4333e5e5e60917a828aee86fa334cbd9c4be623c42b6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb55345ca27a80350e4333e5e5e60917a828aee86fa334cbd9c4be623c42b6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb55345ca27a80350e4333e5e5e60917a828aee86fa334cbd9c4be623c42b6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 16 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.6 MiB/s wr, 10 op/s
Nov 29 07:45:32 compute-0 podman[264258]: 2025-11-29 07:45:32.576627797 +0000 UTC m=+0.877900734 container init 5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:45:32 compute-0 podman[264258]: 2025-11-29 07:45:32.584686236 +0000 UTC m=+0.885959143 container start 5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:45:32 compute-0 podman[264258]: 2025-11-29 07:45:32.716550618 +0000 UTC m=+1.017823565 container attach 5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_neumann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:45:33 compute-0 sshd-session[264237]: Invalid user pi from 143.14.121.41 port 45840
Nov 29 07:45:33 compute-0 compassionate_neumann[264274]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:45:33 compute-0 compassionate_neumann[264274]: --> relative data size: 1.0
Nov 29 07:45:33 compute-0 compassionate_neumann[264274]: --> All data devices are unavailable
Nov 29 07:45:33 compute-0 systemd[1]: libpod-5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319.scope: Deactivated successfully.
Nov 29 07:45:33 compute-0 systemd[1]: libpod-5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319.scope: Consumed 1.098s CPU time.
Nov 29 07:45:33 compute-0 podman[264258]: 2025-11-29 07:45:33.729004826 +0000 UTC m=+2.030277793 container died 5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_neumann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:45:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:33 compute-0 ceph-mon[75050]: pgmap v1106: 305 pgs: 305 active+clean; 16 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1.6 MiB/s wr, 10 op/s
Nov 29 07:45:34 compute-0 sshd-session[264237]: Connection closed by invalid user pi 143.14.121.41 port 45840 [preauth]
Nov 29 07:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebb55345ca27a80350e4333e5e5e60917a828aee86fa334cbd9c4be623c42b6d-merged.mount: Deactivated successfully.
Nov 29 07:45:34 compute-0 podman[264258]: 2025-11-29 07:45:34.455233838 +0000 UTC m=+2.756506745 container remove 5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:45:34 compute-0 sudo[264151]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 21 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Nov 29 07:45:34 compute-0 systemd[1]: libpod-conmon-5cc893d174476a62abffbaa26ec2e851dfc9e12c4b615241348e459c66d6a319.scope: Deactivated successfully.
Nov 29 07:45:34 compute-0 sudo[264316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:34 compute-0 sudo[264316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:34 compute-0 sudo[264316]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:34 compute-0 sudo[264341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:34 compute-0 sudo[264341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:34 compute-0 sudo[264341]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:34 compute-0 sudo[264366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:34 compute-0 sudo[264366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:34 compute-0 sudo[264366]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:34 compute-0 sudo[264391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:45:34 compute-0 sudo[264391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:35 compute-0 podman[264458]: 2025-11-29 07:45:35.070150648 +0000 UTC m=+0.025592538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:35 compute-0 podman[264458]: 2025-11-29 07:45:35.243640574 +0000 UTC m=+0.199082434 container create edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_roentgen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:45:35 compute-0 ceph-mon[75050]: pgmap v1107: 305 pgs: 305 active+clean; 21 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Nov 29 07:45:35 compute-0 systemd[1]: Started libpod-conmon-edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a.scope.
Nov 29 07:45:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:35 compute-0 podman[264458]: 2025-11-29 07:45:35.48362789 +0000 UTC m=+0.439069770 container init edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:45:35 compute-0 podman[264458]: 2025-11-29 07:45:35.489573782 +0000 UTC m=+0.445015642 container start edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:45:35 compute-0 funny_roentgen[264474]: 167 167
Nov 29 07:45:35 compute-0 systemd[1]: libpod-edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a.scope: Deactivated successfully.
Nov 29 07:45:35 compute-0 podman[264458]: 2025-11-29 07:45:35.499857232 +0000 UTC m=+0.455299122 container attach edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_roentgen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:45:35 compute-0 podman[264458]: 2025-11-29 07:45:35.500331185 +0000 UTC m=+0.455773045 container died edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_roentgen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:45:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:45:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:45:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5a343714116bab5a4867141647f445f387dbb93eb7d7c8a39eb37f74189fe19-merged.mount: Deactivated successfully.
Nov 29 07:45:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 21 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 MiB/s wr, 23 op/s
Nov 29 07:45:36 compute-0 podman[264458]: 2025-11-29 07:45:36.594589931 +0000 UTC m=+1.550031811 container remove edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_roentgen, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:45:36 compute-0 podman[264493]: 2025-11-29 07:45:36.672943126 +0000 UTC m=+0.398002872 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:45:36 compute-0 systemd[1]: libpod-conmon-edf1065c918711768679f1c8c4089f9b974bf774b5e9bcd1260b50f6e3c2025a.scope: Deactivated successfully.
Nov 29 07:45:36 compute-0 podman[264492]: 2025-11-29 07:45:36.699624092 +0000 UTC m=+0.423448745 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 29 07:45:36 compute-0 podman[264491]: 2025-11-29 07:45:36.726563106 +0000 UTC m=+0.450471351 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:36 compute-0 podman[264565]: 2025-11-29 07:45:36.777182015 +0000 UTC m=+0.027252394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:36 compute-0 podman[264565]: 2025-11-29 07:45:36.879498532 +0000 UTC m=+0.129568901 container create aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kepler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:45:36 compute-0 systemd[1]: Started libpod-conmon-aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7.scope.
Nov 29 07:45:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1268555305b4d0d12f62567f4d8bcae99e3c3658ad346c9b363878cc6579ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1268555305b4d0d12f62567f4d8bcae99e3c3658ad346c9b363878cc6579ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1268555305b4d0d12f62567f4d8bcae99e3c3658ad346c9b363878cc6579ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1268555305b4d0d12f62567f4d8bcae99e3c3658ad346c9b363878cc6579ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:37 compute-0 podman[264565]: 2025-11-29 07:45:37.643237545 +0000 UTC m=+0.893307974 container init aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kepler, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:45:37 compute-0 podman[264565]: 2025-11-29 07:45:37.657792521 +0000 UTC m=+0.907862920 container start aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kepler, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:45:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 07:45:37 compute-0 ceph-mon[75050]: pgmap v1108: 305 pgs: 305 active+clean; 21 MiB data, 168 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 MiB/s wr, 23 op/s
Nov 29 07:45:37 compute-0 podman[264565]: 2025-11-29 07:45:37.77265698 +0000 UTC m=+1.022727449 container attach aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kepler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:45:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 07:45:37 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]: {
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:     "0": [
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:         {
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "devices": [
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "/dev/loop3"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             ],
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_name": "ceph_lv0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_size": "21470642176",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "name": "ceph_lv0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "tags": {
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cluster_name": "ceph",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.crush_device_class": "",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.encrypted": "0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osd_id": "0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.type": "block",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.vdo": "0"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             },
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "type": "block",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "vg_name": "ceph_vg0"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:         }
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:     ],
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:     "1": [
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:         {
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "devices": [
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "/dev/loop4"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             ],
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_name": "ceph_lv1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_size": "21470642176",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "name": "ceph_lv1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "tags": {
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cluster_name": "ceph",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.crush_device_class": "",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.encrypted": "0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osd_id": "1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.type": "block",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.vdo": "0"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             },
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "type": "block",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "vg_name": "ceph_vg1"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:         }
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:     ],
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:     "2": [
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:         {
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "devices": [
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "/dev/loop5"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             ],
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_name": "ceph_lv2",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_size": "21470642176",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "name": "ceph_lv2",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "tags": {
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.cluster_name": "ceph",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.crush_device_class": "",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.encrypted": "0",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osd_id": "2",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.type": "block",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:                 "ceph.vdo": "0"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             },
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "type": "block",
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:             "vg_name": "ceph_vg2"
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:         }
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]:     ]
Nov 29 07:45:38 compute-0 sleepy_kepler[264580]: }
Nov 29 07:45:38 compute-0 systemd[1]: libpod-aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7.scope: Deactivated successfully.
Nov 29 07:45:38 compute-0 podman[264565]: 2025-11-29 07:45:38.446883906 +0000 UTC m=+1.696954275 container died aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:45:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.1 MiB/s wr, 42 op/s
Nov 29 07:45:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:38 compute-0 ceph-mon[75050]: osdmap e113: 3 total, 3 up, 3 in
Nov 29 07:45:39 compute-0 sshd-session[264315]: Invalid user 0 from 143.14.121.41 port 51370
Nov 29 07:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-af1268555305b4d0d12f62567f4d8bcae99e3c3658ad346c9b363878cc6579ac-merged.mount: Deactivated successfully.
Nov 29 07:45:39 compute-0 sshd-session[264315]: Connection closed by invalid user 0 143.14.121.41 port 51370 [preauth]
Nov 29 07:45:39 compute-0 podman[264565]: 2025-11-29 07:45:39.617272275 +0000 UTC m=+2.867342694 container remove aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kepler, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:45:39 compute-0 sudo[264391]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:39 compute-0 systemd[1]: libpod-conmon-aad0f75eb42fb38a32a34d486c113921858775fce76045132a76013dbff970a7.scope: Deactivated successfully.
Nov 29 07:45:39 compute-0 sudo[264605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:39 compute-0 sudo[264605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:39 compute-0 sudo[264605]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:39 compute-0 sudo[264631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:45:39 compute-0 sudo[264631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:39 compute-0 sudo[264631]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:39 compute-0 sudo[264656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:39 compute-0 sudo[264656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:39 compute-0 sudo[264656]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:40 compute-0 sudo[264681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:45:40 compute-0 sudo[264681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:40 compute-0 ceph-mon[75050]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.1 MiB/s wr, 42 op/s
Nov 29 07:45:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.6 MiB/s wr, 32 op/s
Nov 29 07:45:40 compute-0 podman[264749]: 2025-11-29 07:45:40.427096584 +0000 UTC m=+0.041094121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:40 compute-0 podman[264749]: 2025-11-29 07:45:40.615725782 +0000 UTC m=+0.229723229 container create eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_knuth, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:45:40 compute-0 systemd[1]: Started libpod-conmon-eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188.scope.
Nov 29 07:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:41 compute-0 podman[264749]: 2025-11-29 07:45:41.16265443 +0000 UTC m=+0.776651957 container init eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:45:41 compute-0 podman[264749]: 2025-11-29 07:45:41.176045775 +0000 UTC m=+0.790043212 container start eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:45:41 compute-0 ecstatic_knuth[264765]: 167 167
Nov 29 07:45:41 compute-0 systemd[1]: libpod-eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188.scope: Deactivated successfully.
Nov 29 07:45:41 compute-0 ceph-mon[75050]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.6 MiB/s wr, 32 op/s
Nov 29 07:45:41 compute-0 podman[264749]: 2025-11-29 07:45:41.335744595 +0000 UTC m=+0.949742132 container attach eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:45:41 compute-0 podman[264749]: 2025-11-29 07:45:41.336358652 +0000 UTC m=+0.950356169 container died eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_knuth, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c701f8529b5ee149f293a5e2d7da03a8cdf839c54384b30227bfdd4407a0d87-merged.mount: Deactivated successfully.
Nov 29 07:45:42 compute-0 podman[264749]: 2025-11-29 07:45:42.323284105 +0000 UTC m=+1.937281572 container remove eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:45:42 compute-0 systemd[1]: libpod-conmon-eb67d4086e818d01f0e199090c470c625602ca84d48e3ca5ab82a323e6300188.scope: Deactivated successfully.
Nov 29 07:45:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 MiB/s wr, 34 op/s
Nov 29 07:45:42 compute-0 podman[264789]: 2025-11-29 07:45:42.52906348 +0000 UTC m=+0.028020525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:45:43 compute-0 sshd-session[264604]: Invalid user steam from 143.14.121.41 port 51372
Nov 29 07:45:43 compute-0 sshd-session[264604]: Connection closed by invalid user steam 143.14.121.41 port 51372 [preauth]
Nov 29 07:45:43 compute-0 nova_compute[256729]: 2025-11-29 07:45:43.735 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:44 compute-0 nova_compute[256729]: 2025-11-29 07:45:44.144 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:44 compute-0 nova_compute[256729]: 2025-11-29 07:45:44.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:44 compute-0 nova_compute[256729]: 2025-11-29 07:45:44.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:44 compute-0 nova_compute[256729]: 2025-11-29 07:45:44.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:44 compute-0 podman[264789]: 2025-11-29 07:45:44.209273448 +0000 UTC m=+1.708230493 container create 00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:45:44 compute-0 ceph-mon[75050]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 MiB/s wr, 34 op/s
Nov 29 07:45:44 compute-0 systemd[1]: Started libpod-conmon-00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868.scope.
Nov 29 07:45:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a8f4339e81eb27d3c2821d7130af18b7fe8744786f3c1e3daf104fd69d2353/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a8f4339e81eb27d3c2821d7130af18b7fe8744786f3c1e3daf104fd69d2353/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a8f4339e81eb27d3c2821d7130af18b7fe8744786f3c1e3daf104fd69d2353/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a8f4339e81eb27d3c2821d7130af18b7fe8744786f3c1e3daf104fd69d2353/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:45:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Nov 29 07:45:44 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 07:45:44 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:44.680650) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:45:44 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 07:45:44 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402344680731, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 966, "num_deletes": 251, "total_data_size": 1407612, "memory_usage": 1428320, "flush_reason": "Manual Compaction"}
Nov 29 07:45:44 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 07:45:44 compute-0 podman[264789]: 2025-11-29 07:45:44.957077367 +0000 UTC m=+2.456034462 container init 00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:45:44 compute-0 podman[264789]: 2025-11-29 07:45:44.971636713 +0000 UTC m=+2.470593758 container start 00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:45:44 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:45:45 compute-0 nova_compute[256729]: 2025-11-29 07:45:45.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:45 compute-0 nova_compute[256729]: 2025-11-29 07:45:45.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402345202567, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1383810, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18031, "largest_seqno": 18996, "table_properties": {"data_size": 1378880, "index_size": 2454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10652, "raw_average_key_size": 19, "raw_value_size": 1368985, "raw_average_value_size": 2544, "num_data_blocks": 111, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402244, "oldest_key_time": 1764402244, "file_creation_time": 1764402344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 521999 microseconds, and 8035 cpu microseconds.
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:45.202654) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1383810 bytes OK
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:45.202681) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:45.252405) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:45.252448) EVENT_LOG_v1 {"time_micros": 1764402345252436, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:45.252478) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1402891, prev total WAL file size 1416193, number of live WAL files 2.
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:45 compute-0 podman[264789]: 2025-11-29 07:45:45.313192697 +0000 UTC m=+2.812149712 container attach 00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:45.314842) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1351KB)], [41(7217KB)]
Nov 29 07:45:45 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402345314919, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8774824, "oldest_snapshot_seqno": -1}
Nov 29 07:45:46 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4460 keys, 6968051 bytes, temperature: kUnknown
Nov 29 07:45:46 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402346010218, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6968051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6938616, "index_size": 17165, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 110286, "raw_average_key_size": 24, "raw_value_size": 6858188, "raw_average_value_size": 1537, "num_data_blocks": 720, "num_entries": 4460, "num_filter_entries": 4460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402345, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:45:46 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:45:46 compute-0 laughing_chaum[264808]: {
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "osd_id": 2,
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "type": "bluestore"
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:     },
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "osd_id": 1,
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "type": "bluestore"
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:     },
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "osd_id": 0,
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:         "type": "bluestore"
Nov 29 07:45:46 compute-0 laughing_chaum[264808]:     }
Nov 29 07:45:46 compute-0 laughing_chaum[264808]: }
Nov 29 07:45:46 compute-0 systemd[1]: libpod-00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868.scope: Deactivated successfully.
Nov 29 07:45:46 compute-0 podman[264789]: 2025-11-29 07:45:46.05816464 +0000 UTC m=+3.557121685 container died 00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:45:46 compute-0 systemd[1]: libpod-00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868.scope: Consumed 1.092s CPU time.
Nov 29 07:45:46 compute-0 sshd-session[264803]: Connection closed by authenticating user root 143.14.121.41 port 40016 [preauth]
Nov 29 07:45:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:46.010550) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6968051 bytes
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:47.119562) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 12.6 rd, 10.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.0 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 4978, records dropped: 518 output_compression: NoCompression
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:47.119668) EVENT_LOG_v1 {"time_micros": 1764402347119639, "job": 20, "event": "compaction_finished", "compaction_time_micros": 695416, "compaction_time_cpu_micros": 35630, "output_level": 6, "num_output_files": 1, "total_output_size": 6968051, "num_input_records": 4978, "num_output_records": 4460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:45.314668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:47.120243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:47.120252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:47.120255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:47.120258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:45:47.120261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402347121339, "job": 0, "event": "table_file_deletion", "file_number": 43}
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:45:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402347124478, "job": 0, "event": "table_file_deletion", "file_number": 41}
Nov 29 07:45:47 compute-0 ceph-mon[75050]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Nov 29 07:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a8f4339e81eb27d3c2821d7130af18b7fe8744786f3c1e3daf104fd69d2353-merged.mount: Deactivated successfully.
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.171 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.172 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.172 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.200 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.201 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.201 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.201 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:45:48 compute-0 nova_compute[256729]: 2025-11-29 07:45:48.202 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Nov 29 07:45:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1871232457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.062 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.860s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.310 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.312 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5095MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.312 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.313 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:49 compute-0 podman[264789]: 2025-11-29 07:45:49.362839805 +0000 UTC m=+6.861796810 container remove 00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:45:49 compute-0 ceph-mon[75050]: pgmap v1114: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 29 07:45:49 compute-0 systemd[1]: libpod-conmon-00be4fbf894dbfb662e598e9769685ed06b0248d15cb483ede702e4519188868.scope: Deactivated successfully.
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.402 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.402 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:45:49 compute-0 sudo[264681]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.421 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:45:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:45:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:45:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9a63ef4f-990e-4616-bc6a-e81879fe17ab does not exist
Nov 29 07:45:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5c259933-a5db-4beb-b101-05c5fc74a461 does not exist
Nov 29 07:45:49 compute-0 sudo[264899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:45:49 compute-0 sudo[264899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:49 compute-0 sudo[264899]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:49 compute-0 sudo[264924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:45:49 compute-0 sudo[264924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:45:49 compute-0 sudo[264924]: pam_unix(sudo:session): session closed for user root
Nov 29 07:45:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:45:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841894314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.922 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.928 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.948 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.951 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:45:49 compute-0 nova_compute[256729]: 2025-11-29 07:45:49.952 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:45:50 compute-0 sshd-session[264854]: Connection closed by authenticating user root 143.14.121.41 port 40022 [preauth]
Nov 29 07:45:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 511 B/s wr, 6 op/s
Nov 29 07:45:50 compute-0 ceph-mon[75050]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Nov 29 07:45:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1871232457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:45:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3841894314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:45:51 compute-0 ceph-mon[75050]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 511 B/s wr, 6 op/s
Nov 29 07:45:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 511 B/s wr, 6 op/s
Nov 29 07:45:53 compute-0 sshd-session[264951]: Connection closed by authenticating user root 143.14.121.41 port 40030 [preauth]
Nov 29 07:45:53 compute-0 ceph-mon[75050]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 511 B/s wr, 6 op/s
Nov 29 07:45:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 29 07:45:55 compute-0 ceph-mon[75050]: pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Nov 29 07:45:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:45:56.297 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:45:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:45:56.299 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:45:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:56 compute-0 sshd-session[264953]: Connection closed by authenticating user root 143.14.121.41 port 55798 [preauth]
Nov 29 07:45:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:45:57.301 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:45:57 compute-0 ceph-mon[75050]: pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 07:45:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 07:45:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 07:45:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:45:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 07:45:59 compute-0 ceph-mon[75050]: pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:45:59 compute-0 ceph-mon[75050]: osdmap e114: 3 total, 3 up, 3 in
Nov 29 07:45:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 07:45:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 07:45:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:45:59.763 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:45:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:45:59.763 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:45:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:45:59.763 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:00 compute-0 sshd-session[264955]: Connection closed by authenticating user root 143.14.121.41 port 55814 [preauth]
Nov 29 07:46:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 07:46:00 compute-0 ceph-mon[75050]: osdmap e115: 3 total, 3 up, 3 in
Nov 29 07:46:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 07:46:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 07:46:01 compute-0 ceph-mon[75050]: pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 07:46:01 compute-0 ceph-mon[75050]: osdmap e116: 3 total, 3 up, 3 in
Nov 29 07:46:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1023 B/s wr, 13 op/s
Nov 29 07:46:03 compute-0 ceph-mon[75050]: pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1023 B/s wr, 13 op/s
Nov 29 07:46:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 07:46:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 07:46:04 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 07:46:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 15 op/s
Nov 29 07:46:04 compute-0 sshd-session[264957]: Connection closed by authenticating user root 143.14.121.41 port 55816 [preauth]
Nov 29 07:46:05 compute-0 ceph-mon[75050]: osdmap e117: 3 total, 3 up, 3 in
Nov 29 07:46:05 compute-0 ceph-mon[75050]: pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 15 op/s
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:46:05
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'images']
Nov 29 07:46:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:46:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085533529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4085533529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4085533529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4085533529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.9 KiB/s wr, 14 op/s
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:46:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:46:07 compute-0 podman[264962]: 2025-11-29 07:46:07.68391254 +0000 UTC m=+0.058728521 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:46:07 compute-0 podman[264963]: 2025-11-29 07:46:07.692057472 +0000 UTC m=+0.056914942 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 07:46:07 compute-0 podman[264961]: 2025-11-29 07:46:07.711561943 +0000 UTC m=+0.086285081 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:46:07 compute-0 ceph-mon[75050]: pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 1.9 KiB/s wr, 14 op/s
Nov 29 07:46:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.9 KiB/s wr, 46 op/s
Nov 29 07:46:08 compute-0 sshd-session[264959]: Invalid user hadoop from 143.14.121.41 port 38800
Nov 29 07:46:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 07:46:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3768742180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3768742180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 07:46:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 07:46:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 07:46:09 compute-0 sshd-session[264959]: Connection closed by invalid user hadoop 143.14.121.41 port 38800 [preauth]
Nov 29 07:46:09 compute-0 ceph-mon[75050]: pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.9 KiB/s wr, 46 op/s
Nov 29 07:46:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 07:46:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 07:46:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3768742180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3768742180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:10 compute-0 ceph-mon[75050]: osdmap e118: 3 total, 3 up, 3 in
Nov 29 07:46:10 compute-0 ceph-mon[75050]: osdmap e119: 3 total, 3 up, 3 in
Nov 29 07:46:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 29 07:46:10 compute-0 sshd-session[265026]: Invalid user ahmed from 143.14.121.41 port 38804
Nov 29 07:46:11 compute-0 sshd-session[265026]: Connection closed by invalid user ahmed 143.14.121.41 port 38804 [preauth]
Nov 29 07:46:11 compute-0 ceph-mon[75050]: pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Nov 29 07:46:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 49 op/s
Nov 29 07:46:13 compute-0 ceph-mon[75050]: pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 49 op/s
Nov 29 07:46:13 compute-0 sshd-session[265028]: Invalid user web from 143.14.121.41 port 38816
Nov 29 07:46:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888702891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888702891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:13 compute-0 sshd-session[265028]: Connection closed by invalid user web 143.14.121.41 port 38816 [preauth]
Nov 29 07:46:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 07:46:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 07:46:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 07:46:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3888702891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3888702891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:14 compute-0 ceph-mon[75050]: osdmap e120: 3 total, 3 up, 3 in
Nov 29 07:46:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:46:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:46:15 compute-0 ceph-mon[75050]: pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Nov 29 07:46:16 compute-0 sshd-session[265030]: Invalid user super from 143.14.121.41 port 52604
Nov 29 07:46:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 07:46:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 07:46:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 07:46:16 compute-0 sshd-session[265030]: Connection closed by invalid user super 143.14.121.41 port 52604 [preauth]
Nov 29 07:46:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 55 op/s
Nov 29 07:46:17 compute-0 ceph-mon[75050]: osdmap e121: 3 total, 3 up, 3 in
Nov 29 07:46:17 compute-0 ceph-mon[75050]: pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 55 op/s
Nov 29 07:46:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006472003' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3006472003' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3006472003' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3006472003' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.1 KiB/s wr, 70 op/s
Nov 29 07:46:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:19 compute-0 ceph-mon[75050]: pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.1 KiB/s wr, 70 op/s
Nov 29 07:46:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Nov 29 07:46:21 compute-0 ceph-mon[75050]: pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Nov 29 07:46:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.5 KiB/s wr, 69 op/s
Nov 29 07:46:22 compute-0 sshd-session[265032]: Connection closed by authenticating user root 143.14.121.41 port 52620 [preauth]
Nov 29 07:46:23 compute-0 ceph-mon[75050]: pgmap v1140: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.5 KiB/s wr, 69 op/s
Nov 29 07:46:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 07:46:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 07:46:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 07:46:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 40 op/s
Nov 29 07:46:25 compute-0 ceph-mon[75050]: osdmap e122: 3 total, 3 up, 3 in
Nov 29 07:46:25 compute-0 ceph-mon[75050]: pgmap v1142: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 40 op/s
Nov 29 07:46:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 27 KiB/s wr, 33 op/s
Nov 29 07:46:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 07:46:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 07:46:26 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 07:46:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 07:46:27 compute-0 ceph-mon[75050]: pgmap v1143: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 27 KiB/s wr, 33 op/s
Nov 29 07:46:27 compute-0 ceph-mon[75050]: osdmap e123: 3 total, 3 up, 3 in
Nov 29 07:46:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 07:46:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 07:46:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3021518121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3021518121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2646788566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2646788566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 45 KiB/s wr, 37 op/s
Nov 29 07:46:28 compute-0 sshd-session[265034]: Connection closed by authenticating user root 143.14.121.41 port 49908 [preauth]
Nov 29 07:46:28 compute-0 ceph-mon[75050]: osdmap e124: 3 total, 3 up, 3 in
Nov 29 07:46:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3021518121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3021518121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2646788566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2646788566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:29 compute-0 ceph-mon[75050]: pgmap v1146: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 45 KiB/s wr, 37 op/s
Nov 29 07:46:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 43 KiB/s wr, 33 op/s
Nov 29 07:46:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 07:46:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 07:46:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 07:46:31 compute-0 sshd-session[265036]: Connection closed by authenticating user root 143.14.121.41 port 49922 [preauth]
Nov 29 07:46:32 compute-0 ceph-mon[75050]: pgmap v1147: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 43 KiB/s wr, 33 op/s
Nov 29 07:46:32 compute-0 ceph-mon[75050]: osdmap e125: 3 total, 3 up, 3 in
Nov 29 07:46:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.2 KiB/s wr, 88 op/s
Nov 29 07:46:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 29 07:46:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 07:46:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 29 07:46:33 compute-0 ceph-mon[75050]: pgmap v1149: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.2 KiB/s wr, 88 op/s
Nov 29 07:46:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.3 KiB/s wr, 67 op/s
Nov 29 07:46:34 compute-0 ceph-mon[75050]: osdmap e126: 3 total, 3 up, 3 in
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.327 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.327 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.353 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.523 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.524 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.534 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.534 256736 INFO nova.compute.claims [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:46:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:46:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:46:35 compute-0 nova_compute[256729]: 2025-11-29 07:46:35.639 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:36 compute-0 sshd-session[265038]: Connection closed by authenticating user root 143.14.121.41 port 48364 [preauth]
Nov 29 07:46:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:46:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755994545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.079 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.087 256736 DEBUG nova.compute.provider_tree [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.108 256736 DEBUG nova.scheduler.client.report [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.132 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.133 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.184 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.185 256736 DEBUG nova.network.neutron [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.219 256736 INFO nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.242 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.352 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.355 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.356 256736 INFO nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Creating image(s)
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.393 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.416 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.439 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.442 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.443 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:36 compute-0 ceph-mon[75050]: pgmap v1151: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 4.3 KiB/s wr, 67 op/s
Nov 29 07:46:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.0 KiB/s wr, 49 op/s
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.755 256736 WARNING oslo_policy.policy [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.756 256736 WARNING oslo_policy.policy [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.758 256736 DEBUG nova.policy [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a53522f9f2b14db5b3b2ead64c730558', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed7b8fec760c4dfeabbf878615dc25ec', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:46:36 compute-0 nova_compute[256729]: 2025-11-29 07:46:36.765 256736 DEBUG nova.virt.libvirt.imagebackend [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Image locations are: [{'url': 'rbd://14ff1f30-5059-58f1-9a23-69871bb275a1/images/0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://14ff1f30-5059-58f1-9a23-69871bb275a1/images/0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 07:46:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1755994545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:37 compute-0 ceph-mon[75050]: pgmap v1152: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.0 KiB/s wr, 49 op/s
Nov 29 07:46:37 compute-0 nova_compute[256729]: 2025-11-29 07:46:37.969 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:37 compute-0 nova_compute[256729]: 2025-11-29 07:46:37.986 256736 DEBUG nova.network.neutron [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Successfully created port: cf01fba8-1ce4-4048-8b70-76060249d02d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:46:38 compute-0 nova_compute[256729]: 2025-11-29 07:46:38.037 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.part --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:38 compute-0 nova_compute[256729]: 2025-11-29 07:46:38.038 256736 DEBUG nova.virt.images [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] 0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 07:46:38 compute-0 nova_compute[256729]: 2025-11-29 07:46:38.039 256736 DEBUG nova.privsep.utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 07:46:38 compute-0 nova_compute[256729]: 2025-11-29 07:46:38.040 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.part /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.7 KiB/s wr, 80 op/s
Nov 29 07:46:38 compute-0 podman[265129]: 2025-11-29 07:46:38.69492339 +0000 UTC m=+0.059573134 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 07:46:38 compute-0 podman[265130]: 2025-11-29 07:46:38.736650476 +0000 UTC m=+0.086890208 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:46:38 compute-0 podman[265128]: 2025-11-29 07:46:38.749885476 +0000 UTC m=+0.111732654 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.141 256736 DEBUG nova.network.neutron [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Successfully updated port: cf01fba8-1ce4-4048-8b70-76060249d02d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.157 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.157 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquired lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.157 256736 DEBUG nova.network.neutron [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.458 256736 DEBUG nova.network.neutron [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.623 256736 DEBUG nova.compute.manager [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-changed-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.624 256736 DEBUG nova.compute.manager [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Refreshing instance network info cache due to event network-changed-cf01fba8-1ce4-4048-8b70-76060249d02d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:46:39 compute-0 nova_compute[256729]: 2025-11-29 07:46:39.624 256736 DEBUG oslo_concurrency.lockutils [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:46:40 compute-0 sshd-session[265062]: Connection closed by authenticating user root 143.14.121.41 port 48380 [preauth]
Nov 29 07:46:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.269 256736 DEBUG nova.network.neutron [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updating instance_info_cache with network_info: [{"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.302 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Releasing lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.302 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Instance network_info: |[{"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.303 256736 DEBUG oslo_concurrency.lockutils [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.303 256736 DEBUG nova.network.neutron [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Refreshing network info cache for port cf01fba8-1ce4-4048-8b70-76060249d02d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.390 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.part /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.converted" returned: 0 in 2.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.395 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.451 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389.converted --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.452 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 KiB/s wr, 66 op/s
Nov 29 07:46:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 07:46:40 compute-0 ceph-mon[75050]: pgmap v1153: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.7 KiB/s wr, 80 op/s
Nov 29 07:46:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.708 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:40 compute-0 nova_compute[256729]: 2025-11-29 07:46:40.715 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:41 compute-0 nova_compute[256729]: 2025-11-29 07:46:41.802 256736 DEBUG nova.network.neutron [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updated VIF entry in instance network info cache for port cf01fba8-1ce4-4048-8b70-76060249d02d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:46:41 compute-0 nova_compute[256729]: 2025-11-29 07:46:41.803 256736 DEBUG nova.network.neutron [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updating instance_info_cache with network_info: [{"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:46:41 compute-0 nova_compute[256729]: 2025-11-29 07:46:41.939 256736 DEBUG oslo_concurrency.lockutils [req-317fdb4d-49f8-4222-8a7b-344768e712ba req-95c2a74e-3d39-4762-b1fd-63f88efd534d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:46:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 29 07:46:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 KiB/s wr, 86 op/s
Nov 29 07:46:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 07:46:43 compute-0 ceph-mon[75050]: pgmap v1154: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 KiB/s wr, 66 op/s
Nov 29 07:46:43 compute-0 ceph-mon[75050]: osdmap e127: 3 total, 3 up, 3 in
Nov 29 07:46:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 29 07:46:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/957339297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/957339297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:43 compute-0 sshd-session[265202]: Invalid user ftpuser from 143.14.121.41 port 48392
Nov 29 07:46:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:46:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3639083881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:46:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3639083881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:43 compute-0 nova_compute[256729]: 2025-11-29 07:46:43.929 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 29 07:46:44 compute-0 sshd-session[265202]: Connection closed by invalid user ftpuser 143.14.121.41 port 48392 [preauth]
Nov 29 07:46:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 07:46:44 compute-0 ceph-mon[75050]: pgmap v1156: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 KiB/s wr, 86 op/s
Nov 29 07:46:44 compute-0 ceph-mon[75050]: osdmap e128: 3 total, 3 up, 3 in
Nov 29 07:46:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/957339297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/957339297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3639083881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:46:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3639083881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:46:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 29 07:46:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 KiB/s wr, 84 op/s
Nov 29 07:46:45 compute-0 nova_compute[256729]: 2025-11-29 07:46:45.144 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:45 compute-0 nova_compute[256729]: 2025-11-29 07:46:45.145 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:45 compute-0 nova_compute[256729]: 2025-11-29 07:46:45.171 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:45 compute-0 nova_compute[256729]: 2025-11-29 07:46:45.172 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:45 compute-0 nova_compute[256729]: 2025-11-29 07:46:45.172 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 29 07:46:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 07:46:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 29 07:46:45 compute-0 ceph-mon[75050]: osdmap e129: 3 total, 3 up, 3 in
Nov 29 07:46:45 compute-0 ceph-mon[75050]: pgmap v1159: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 KiB/s wr, 84 op/s
Nov 29 07:46:46 compute-0 nova_compute[256729]: 2025-11-29 07:46:46.225 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:46 compute-0 nova_compute[256729]: 2025-11-29 07:46:46.329 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] resizing rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:46:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.6 KiB/s wr, 86 op/s
Nov 29 07:46:47 compute-0 nova_compute[256729]: 2025-11-29 07:46:47.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:47 compute-0 nova_compute[256729]: 2025-11-29 07:46:47.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:46:48 compute-0 nova_compute[256729]: 2025-11-29 07:46:48.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:48 compute-0 nova_compute[256729]: 2025-11-29 07:46:48.184 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:48 compute-0 nova_compute[256729]: 2025-11-29 07:46:48.185 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:48 compute-0 nova_compute[256729]: 2025-11-29 07:46:48.186 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:48 compute-0 nova_compute[256729]: 2025-11-29 07:46:48.186 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:46:48 compute-0 nova_compute[256729]: 2025-11-29 07:46:48.188 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 82 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 50 op/s
Nov 29 07:46:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:46:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490817327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.168 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.980s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:49 compute-0 ceph-mon[75050]: osdmap e130: 3 total, 3 up, 3 in
Nov 29 07:46:49 compute-0 ceph-mon[75050]: pgmap v1161: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.6 KiB/s wr, 86 op/s
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.392 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.395 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5105MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.395 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.396 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.470 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.471 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.471 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:46:49 compute-0 nova_compute[256729]: 2025-11-29 07:46:49.505 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:49 compute-0 sudo[265323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:49 compute-0 sudo[265323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:49 compute-0 sudo[265323]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:49 compute-0 sudo[265348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:49 compute-0 sudo[265348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:49 compute-0 sudo[265348]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:50 compute-0 sudo[265373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:50 compute-0 sudo[265373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:50 compute-0 sudo[265373]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:50 compute-0 sudo[265407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:46:50 compute-0 sudo[265407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 29 07:46:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:46:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2184933386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.317 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.324 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.355 256736 ERROR nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [req-109a675d-219a-4cdd-ad2d-5761cd886e6c] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-109a675d-219a-4cdd-ad2d-5761cd886e6c"}]}
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.373 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.397 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.397 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.411 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.442 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:46:50 compute-0 nova_compute[256729]: 2025-11-29 07:46:50.476 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 82 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 40 op/s
Nov 29 07:46:50 compute-0 sudo[265407]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:46:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:46:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:46:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:46:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 07:46:51 compute-0 ceph-mon[75050]: pgmap v1162: 305 pgs: 305 active+clean; 82 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 50 op/s
Nov 29 07:46:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2490817327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 29 07:46:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:46:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1116998142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:51 compute-0 nova_compute[256729]: 2025-11-29 07:46:51.319 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.843s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:51 compute-0 nova_compute[256729]: 2025-11-29 07:46:51.327 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:46:51 compute-0 nova_compute[256729]: 2025-11-29 07:46:51.387 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updated inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 07:46:51 compute-0 nova_compute[256729]: 2025-11-29 07:46:51.388 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 07:46:51 compute-0 nova_compute[256729]: 2025-11-29 07:46:51.388 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:46:51 compute-0 nova_compute[256729]: 2025-11-29 07:46:51.413 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:46:51 compute-0 nova_compute[256729]: 2025-11-29 07:46:51.413 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:46:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3fe21188-a016-4350-9173-03a0b97e9373 does not exist
Nov 29 07:46:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 8946f783-dfec-469b-9bb0-28bdb66213bc does not exist
Nov 29 07:46:51 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7904e36b-da47-4fe2-a3b9-91c1052d71ad does not exist
Nov 29 07:46:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:46:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:46:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:46:51 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:46:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:46:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:51 compute-0 sudo[265502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:51 compute-0 sudo[265502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:51 compute-0 sudo[265502]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:51 compute-0 sudo[265529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:51 compute-0 sudo[265529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:51 compute-0 sudo[265529]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:51 compute-0 sudo[265554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:51 compute-0 sudo[265554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:51 compute-0 sudo[265554]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:51 compute-0 sshd-session[265231]: Invalid user user from 143.14.121.41 port 35306
Nov 29 07:46:51 compute-0 sudo[265579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:46:51 compute-0 sudo[265579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.082 256736 DEBUG nova.objects.instance [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lazy-loading 'migration_context' on Instance uuid 470f20d7-0c57-4067-a7ff-7f6b0971ad23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2184933386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:52 compute-0 ceph-mon[75050]: pgmap v1163: 305 pgs: 305 active+clean; 82 MiB data, 196 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 40 op/s
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:46:52 compute-0 ceph-mon[75050]: osdmap e131: 3 total, 3 up, 3 in
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1116998142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:46:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.105 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.106 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Ensure instance console log exists: /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.107 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.108 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.109 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.113 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Start _get_guest_xml network_info=[{"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.119 256736 WARNING nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.126 256736 DEBUG nova.virt.libvirt.host [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.128 256736 DEBUG nova.virt.libvirt.host [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.152 256736 DEBUG nova.virt.libvirt.host [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.153 256736 DEBUG nova.virt.libvirt.host [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.153 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.154 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.154 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.154 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.155 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.155 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.155 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.155 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.156 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.156 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.156 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.156 256736 DEBUG nova.virt.hardware [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.160 256736 DEBUG nova.privsep.utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.160 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:52 compute-0 sshd-session[265231]: Connection closed by invalid user user 143.14.121.41 port 35306 [preauth]
Nov 29 07:46:52 compute-0 podman[265646]: 2025-11-29 07:46:52.345451336 +0000 UTC m=+0.110388528 container create 6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:46:52 compute-0 podman[265646]: 2025-11-29 07:46:52.259184677 +0000 UTC m=+0.024121889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.414 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.417 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.418 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:46:52 compute-0 systemd[1]: Started libpod-conmon-6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f.scope.
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.447 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.448 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.449 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:46:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Nov 29 07:46:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:46:52 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/162815123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.654 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
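Before touching any RBD image, nova's storage driver shells out to the ceph CLI, as the run/return pair above shows, to discover the monitor map. A self-contained sketch of the same probe, reusing the exact command logged by oslo_concurrency; the helper name is invented for illustration:

    import json
    import subprocess

    def ceph_mon_names(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        # Runs the command shown in the log and parses its JSON output.
        # "mons" is the top-level list `ceph mon dump --format=json` returns.
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        return [mon["name"] for mon in json.loads(out)["mons"]]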
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.686 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:52 compute-0 nova_compute[256729]: 2025-11-29 07:46:52.691 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:52 compute-0 podman[265646]: 2025-11-29 07:46:52.840011157 +0000 UTC m=+0.604948479 container init 6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 07:46:52 compute-0 podman[265646]: 2025-11-29 07:46:52.856135717 +0000 UTC m=+0.621072949 container start 6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_saha, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:46:52 compute-0 charming_saha[265681]: 167 167
Nov 29 07:46:52 compute-0 podman[265646]: 2025-11-29 07:46:52.864745211 +0000 UTC m=+0.629682513 container attach 6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:46:52 compute-0 systemd[1]: libpod-6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f.scope: Deactivated successfully.
Nov 29 07:46:52 compute-0 conmon[265681]: conmon 6c1d25be62a089e5840b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f.scope/container/memory.events
Nov 29 07:46:52 compute-0 podman[265646]: 2025-11-29 07:46:52.867436125 +0000 UTC m=+0.632373327 container died 6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:46:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:46:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796273259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.372 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.375 256736 DEBUG nova.virt.libvirt.vif [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:46:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1543412800',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1543412800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1543412800',id=1,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPWpd+BDpiEvsb+/Y7B4qemwFzbHqOHZXcqLb3Lc82301t4mUHmYZZ6kFaiNduZ2VKKfDBVWcULnlQXy+O4iuVoSPVyYZy38PgEwdp/PE9meJZz5C2NLzf3taJFY/Vnc4A==',key_name='tempest-keypair-913396319',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7b8fec760c4dfeabbf878615dc25ec',ramdisk_id='',reservation_id='r-bj5gu0hr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1640910800',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1640910800-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a53522f9f2b14db5b3b2ead64c730558',uuid=470f20d7-0c57-4067-a7ff-7f6b0971ad23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.376 256736 DEBUG nova.network.os_vif_util [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Converting VIF {"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.379 256736 DEBUG nova.network.os_vif_util [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:90:9b,bridge_name='br-int',has_traffic_filtering=True,id=cf01fba8-1ce4-4048-8b70-76060249d02d,network=Network(a027e4c7-144b-44ef-882c-4c6ddedeae6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf01fba8-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.383 256736 DEBUG nova.objects.instance [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 470f20d7-0c57-4067-a7ff-7f6b0971ad23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:46:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.415 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <uuid>470f20d7-0c57-4067-a7ff-7f6b0971ad23</uuid>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <name>instance-00000001</name>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <nova:name>tempest-EncryptedVolumesExtendAttachedTest-instance-1543412800</nova:name>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:46:52</nova:creationTime>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:user uuid="a53522f9f2b14db5b3b2ead64c730558">tempest-EncryptedVolumesExtendAttachedTest-1640910800-project-member</nova:user>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:project uuid="ed7b8fec760c4dfeabbf878615dc25ec">tempest-EncryptedVolumesExtendAttachedTest-1640910800</nova:project>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <nova:port uuid="cf01fba8-1ce4-4048-8b70-76060249d02d">
Nov 29 07:46:53 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <system>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <entry name="serial">470f20d7-0c57-4067-a7ff-7f6b0971ad23</entry>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <entry name="uuid">470f20d7-0c57-4067-a7ff-7f6b0971ad23</entry>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </system>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <os>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   </os>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <features>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   </features>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk">
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       </source>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk.config">
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       </source>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:46:53 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:08:90:9b"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <target dev="tapcf01fba8-1c"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/console.log" append="off"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <video>
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </video>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:46:53 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:46:53 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:46:53 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:46:53 compute-0 nova_compute[256729]: </domain>
Nov 29 07:46:53 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
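The guest definition nova just rendered is dumped verbatim across the multi-line record above. Once recovered from the journal it can be inspected mechanically; a small sketch with the standard library, using an abbreviated stand-in for the full dump (only the disk element is reproduced, values taken from the record above):

    import xml.etree.ElementTree as ET

    # `domain_xml` stands in for the <domain>...</domain> text recovered
    # from the _get_guest_xml record above; trimmed here for brevity.
    domain_xml = """<domain type="kvm"><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="vms/470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk"/>
        <target dev="vda" bus="virtio"/>
      </disk>
    </devices></domain>"""

    root = ET.fromstring(domain_xml)
    for disk in root.iter("disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), "->", src.get("protocol"), src.get("name"))
    # vda -> rbd vms/470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk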
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.416 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Preparing to wait for external event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.417 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.417 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.418 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.420 256736 DEBUG nova.virt.libvirt.vif [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:46:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1543412800',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1543412800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1543412800',id=1,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPWpd+BDpiEvsb+/Y7B4qemwFzbHqOHZXcqLb3Lc82301t4mUHmYZZ6kFaiNduZ2VKKfDBVWcULnlQXy+O4iuVoSPVyYZy38PgEwdp/PE9meJZz5C2NLzf3taJFY/Vnc4A==',key_name='tempest-keypair-913396319',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed7b8fec760c4dfeabbf878615dc25ec',ramdisk_id='',reservation_id='r-bj5gu0hr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1640910800',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1640910800-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:46:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a53522f9f2b14db5b3b2ead64c730558',uuid=470f20d7-0c57-4067-a7ff-7f6b0971ad23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.420 256736 DEBUG nova.network.os_vif_util [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Converting VIF {"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.422 256736 DEBUG nova.network.os_vif_util [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:90:9b,bridge_name='br-int',has_traffic_filtering=True,id=cf01fba8-1ce4-4048-8b70-76060249d02d,network=Network(a027e4c7-144b-44ef-882c-4c6ddedeae6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf01fba8-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.423 256736 DEBUG os_vif [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:90:9b,bridge_name='br-int',has_traffic_filtering=True,id=cf01fba8-1ce4-4048-8b70-76060249d02d,network=Network(a027e4c7-144b-44ef-882c-4c6ddedeae6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf01fba8-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.483 256736 DEBUG ovsdbapp.backend.ovs_idl [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.484 256736 DEBUG ovsdbapp.backend.ovs_idl [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.484 256736 DEBUG ovsdbapp.backend.ovs_idl [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.485 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.486 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.486 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.487 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.488 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.491 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.502 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.503 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.503 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:46:53 compute-0 nova_compute[256729]: 2025-11-29 07:46:53.504 256736 INFO oslo.privsep.daemon [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp4yqe7f_k/privsep.sock']
Nov 29 07:46:53 compute-0 ceph-mon[75050]: pgmap v1165: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Nov 29 07:46:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/162815123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-56a58b97b9678ea86989c3e9718db4460996580655dbd7d8a9853522341342d1-merged.mount: Deactivated successfully.
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.248 256736 INFO oslo.privsep.daemon [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.127 265746 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.134 265746 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.137 265746 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.138 265746 INFO oslo.privsep.daemon [-] privsep daemon running as pid 265746
Nov 29 07:46:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 07:46:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 29 07:46:54 compute-0 podman[265646]: 2025-11-29 07:46:54.416201992 +0000 UTC m=+2.181139194 container remove 6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:46:54 compute-0 systemd[1]: libpod-conmon-6c1d25be62a089e5840b76c9647f2d1bb44912376330f00e7e7a90d39977e05f.scope: Deactivated successfully.
Nov 29 07:46:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 49 op/s
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.608 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.609 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf01fba8-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.610 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf01fba8-1c, col_values=(('external_ids', {'iface-id': 'cf01fba8-1ce4-4048-8b70-76060249d02d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:90:9b', 'vm-uuid': '470f20d7-0c57-4067-a7ff-7f6b0971ad23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.613 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:54 compute-0 NetworkManager[48962]: <info>  [1764402414.6144] manager: (tapcf01fba8-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.617 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.620 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:54 compute-0 nova_compute[256729]: 2025-11-29 07:46:54.622 256736 INFO os_vif [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:90:9b,bridge_name='br-int',has_traffic_filtering=True,id=cf01fba8-1ce4-4048-8b70-76060249d02d,network=Network(a027e4c7-144b-44ef-882c-4c6ddedeae6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf01fba8-1c')
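The plug sequence just completed is two OVSDB transactions: AddPortCommand puts tapcf01fba8-1c on br-int, then DbSetCommand stamps the Neutron port id, MAC and instance UUID into the Interface's external_ids so OVN can bind the port. The same pair expressed through the ovs-vsctl CLI for illustration only, with values copied from the log (os-vif itself speaks the ovsdbapp IDL over tcp:127.0.0.1:6640 rather than shelling out):

    import subprocess

    bridge, port = "br-int", "tapcf01fba8-1c"
    external_ids = {
        "iface-id": "cf01fba8-1ce4-4048-8b70-76060249d02d",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:08:90:9b",
        "vm-uuid": "470f20d7-0c57-4067-a7ff-7f6b0971ad23",
    }

    # --may-exist mirrors AddPortCommand(may_exist=True); the chained `set`
    # mirrors DbSetCommand on the Interface row. Values are double-quoted so
    # ovs-vsctl parses MACs and UUIDs containing ':' or '-' as strings.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
         "set", "Interface", port]
        + [f'external_ids:{k}="{v}"' for k, v in external_ids.items()],
        check=True)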
Nov 29 07:46:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1796273259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:46:54 compute-0 ceph-mon[75050]: osdmap e132: 3 total, 3 up, 3 in
Nov 29 07:46:54 compute-0 podman[265757]: 2025-11-29 07:46:54.633220612 +0000 UTC m=+0.047013871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:55 compute-0 nova_compute[256729]: 2025-11-29 07:46:55.123 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:46:55 compute-0 nova_compute[256729]: 2025-11-29 07:46:55.123 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:46:55 compute-0 nova_compute[256729]: 2025-11-29 07:46:55.124 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] No VIF found with MAC fa:16:3e:08:90:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:46:55 compute-0 nova_compute[256729]: 2025-11-29 07:46:55.124 256736 INFO nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Using config drive
Nov 29 07:46:55 compute-0 sshd-session[265684]: Invalid user ubuntu from 143.14.121.41 port 57926
Nov 29 07:46:55 compute-0 podman[265757]: 2025-11-29 07:46:55.616416943 +0000 UTC m=+1.030210172 container create 4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:46:55 compute-0 sshd-session[265684]: Connection closed by invalid user ubuntu 143.14.121.41 port 57926 [preauth]
Nov 29 07:46:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 07:46:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 29 07:46:56 compute-0 nova_compute[256729]: 2025-11-29 07:46:56.386 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:56 compute-0 nova_compute[256729]: 2025-11-29 07:46:56.393 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:56 compute-0 systemd[1]: Started libpod-conmon-4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81.scope.
Nov 29 07:46:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 07:46:56 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 29 07:46:56 compute-0 ceph-mon[75050]: pgmap v1167: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 MiB/s wr, 49 op/s
Nov 29 07:46:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe77dd1597704da76e1196fb934ad47d6d7938213aa2a454428d6df33003f18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe77dd1597704da76e1196fb934ad47d6d7938213aa2a454428d6df33003f18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe77dd1597704da76e1196fb934ad47d6d7938213aa2a454428d6df33003f18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe77dd1597704da76e1196fb934ad47d6d7938213aa2a454428d6df33003f18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe77dd1597704da76e1196fb934ad47d6d7938213aa2a454428d6df33003f18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:46:56 compute-0 podman[265757]: 2025-11-29 07:46:56.476991604 +0000 UTC m=+1.890784863 container init 4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:46:56 compute-0 podman[265757]: 2025-11-29 07:46:56.487287075 +0000 UTC m=+1.901080324 container start 4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:46:56 compute-0 podman[265757]: 2025-11-29 07:46:56.492106957 +0000 UTC m=+1.905900206 container attach 4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:46:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 910 KiB/s wr, 23 op/s
Nov 29 07:46:56 compute-0 nova_compute[256729]: 2025-11-29 07:46:56.969 256736 INFO nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Creating config drive at /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/disk.config
Nov 29 07:46:56 compute-0 nova_compute[256729]: 2025-11-29 07:46:56.974 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyx3oy1t2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:57 compute-0 nova_compute[256729]: 2025-11-29 07:46:57.115 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyx3oy1t2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
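The config drive is just an ISO 9660 volume labelled config-2, built with mkisofs from a staging directory of metadata files and then (two lines below) imported into the vms RBD pool. A sketch reproducing the invocation logged above, with the publisher string passed as one argument; the function name and parameters are illustrative:

    import subprocess

    def build_config_drive(iso_path, staging_dir,
                           publisher="OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9"):
        # Same flag set as the logged command: Rock Ridge + Joliet,
        # relaxed filenames, volume label "config-2".
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher,
             "-quiet", "-J", "-r", "-V", "config-2",
             staging_dir],
            check=True)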
Nov 29 07:46:57 compute-0 nova_compute[256729]: 2025-11-29 07:46:57.145 256736 DEBUG nova.storage.rbd_utils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] rbd image 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:46:57 compute-0 nova_compute[256729]: 2025-11-29 07:46:57.150 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/disk.config 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:46:57 compute-0 tender_carver[265794]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:46:57 compute-0 tender_carver[265794]: --> relative data size: 1.0
Nov 29 07:46:57 compute-0 tender_carver[265794]: --> All data devices are unavailable
Nov 29 07:46:57 compute-0 systemd[1]: libpod-4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81.scope: Deactivated successfully.
Nov 29 07:46:57 compute-0 systemd[1]: libpod-4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81.scope: Consumed 1.084s CPU time.
Nov 29 07:46:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 29 07:46:57 compute-0 podman[265862]: 2025-11-29 07:46:57.687571759 +0000 UTC m=+0.032933667 container died 4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:46:58 compute-0 ceph-mon[75050]: osdmap e133: 3 total, 3 up, 3 in
Nov 29 07:46:58 compute-0 ceph-mon[75050]: pgmap v1169: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 910 KiB/s wr, 23 op/s
Nov 29 07:46:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Nov 29 07:46:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 07:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffe77dd1597704da76e1196fb934ad47d6d7938213aa2a454428d6df33003f18-merged.mount: Deactivated successfully.
Nov 29 07:46:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 07:46:58 compute-0 podman[265862]: 2025-11-29 07:46:58.75918993 +0000 UTC m=+1.104551818 container remove 4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:46:58 compute-0 systemd[1]: libpod-conmon-4a54810ffea4b357f4070b37b3c3ec10799c290b0bc2e4262f1f1723ebae5b81.scope: Deactivated successfully.
Nov 29 07:46:58 compute-0 sudo[265579]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:58 compute-0 sudo[265880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:58 compute-0 sudo[265880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:58 compute-0 sudo[265880]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:58 compute-0 sudo[265905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:46:58 compute-0 sudo[265905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:58 compute-0 sudo[265905]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:59 compute-0 sudo[265930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:46:59 compute-0 sudo[265930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:46:59 compute-0 sudo[265930]: pam_unix(sudo:session): session closed for user root
Nov 29 07:46:59 compute-0 sudo[265955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:46:59 compute-0 sudo[265955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
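
[annotation] The sudo burst above (/bin/true, which python3, then the cephadm script) is the cephadm orchestrator probing the host over its ssh/sudo channel before running ceph-volume inside a short-lived container to inventory OSD logical volumes. A hand-run equivalent of that last command, a sketch reusing the cephadm path and fsid from the log line (the logged --image/--timeout global flags are omitted here for brevity):

    import json
    import subprocess

    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    CEPHADM = ("/var/lib/ceph/" + FSID +
               "/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Everything after "--" is handed through to ceph-volume in the container.
    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "ceph-volume", "--fsid", FSID,
         "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    inventory = json.loads(out)

Its output is the JSON block printed by the admiring_euler container further down.
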
Nov 29 07:46:59 compute-0 nova_compute[256729]: 2025-11-29 07:46:59.579 256736 DEBUG oslo_concurrency.processutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/disk.config 470f20d7-0c57-4067-a7ff-7f6b0971ad23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:46:59 compute-0 nova_compute[256729]: 2025-11-29 07:46:59.580 256736 INFO nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Deleting local config drive /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23/disk.config because it was imported into RBD.
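
[annotation] With RBD-backed instance storage, the ISO only lives on local disk long enough to be imported into the vms pool; the lines above show the import returning 0 after 2.4 s and the local copy then being deleted. A sketch of that import-then-delete step, assuming the rbd CLI and the client.openstack keyring are available (the helper name is illustrative):

    import os
    import subprocess

    def import_config_drive(local_iso: str, pool: str, image: str) -> None:
        # Same CLI invocation as logged above.
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_iso, image,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True,
        )
        os.unlink(local_iso)  # drop the local copy only after a clean import
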
Nov 29 07:46:59 compute-0 podman[266020]: 2025-11-29 07:46:59.492759381 +0000 UTC m=+0.028229169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:46:59 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 29 07:46:59 compute-0 nova_compute[256729]: 2025-11-29 07:46:59.614 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:59 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 29 07:46:59 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 07:46:59 compute-0 kernel: tapcf01fba8-1c: entered promiscuous mode
Nov 29 07:46:59 compute-0 ovn_controller[153383]: 2025-11-29T07:46:59Z|00027|binding|INFO|Claiming lport cf01fba8-1ce4-4048-8b70-76060249d02d for this chassis.
Nov 29 07:46:59 compute-0 ovn_controller[153383]: 2025-11-29T07:46:59Z|00028|binding|INFO|cf01fba8-1ce4-4048-8b70-76060249d02d: Claiming fa:16:3e:08:90:9b 10.100.0.11
Nov 29 07:46:59 compute-0 NetworkManager[48962]: <info>  [1764402419.7310] manager: (tapcf01fba8-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 07:46:59 compute-0 nova_compute[256729]: 2025-11-29 07:46:59.729 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:59 compute-0 nova_compute[256729]: 2025-11-29 07:46:59.736 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:46:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:46:59.755 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:90:9b 10.100.0.11'], port_security=['fa:16:3e:08:90:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '470f20d7-0c57-4067-a7ff-7f6b0971ad23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7b8fec760c4dfeabbf878615dc25ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '154de6d7-629c-4424-ada2-df33289aad97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14e16434-16e3-4beb-afc5-e4129659b07b, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=cf01fba8-1ce4-4048-8b70-76060249d02d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:46:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:46:59.758 163655 INFO neutron.agent.ovn.metadata.agent [-] Port cf01fba8-1ce4-4048-8b70-76060249d02d in datapath a027e4c7-144b-44ef-882c-4c6ddedeae6f bound to our chassis
Nov 29 07:46:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:46:59.761 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a027e4c7-144b-44ef-882c-4c6ddedeae6f
Nov 29 07:46:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:46:59.764 163655 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpv5pj6f28/privsep.sock']
Nov 29 07:46:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:46:59.765 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:46:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:46:59.766 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:46:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:46:59.766 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:46:59 compute-0 systemd-udevd[266068]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:46:59 compute-0 NetworkManager[48962]: <info>  [1764402419.7964] device (tapcf01fba8-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:46:59 compute-0 NetworkManager[48962]: <info>  [1764402419.8001] device (tapcf01fba8-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:46:59 compute-0 systemd-machined[217781]: New machine qemu-1-instance-00000001.
Nov 29 07:46:59 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 29 07:46:59 compute-0 ovn_controller[153383]: 2025-11-29T07:46:59Z|00029|binding|INFO|Setting lport cf01fba8-1ce4-4048-8b70-76060249d02d ovn-installed in OVS
Nov 29 07:46:59 compute-0 ovn_controller[153383]: 2025-11-29T07:46:59Z|00030|binding|INFO|Setting lport cf01fba8-1ce4-4048-8b70-76060249d02d up in Southbound
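
[annotation] The plug sequence reads straight down from the kernel: the tap device enters promiscuous mode, ovn-controller claims the logical port for this chassis, marks the OVS interface ovn-installed, and finally flips the Port_Binding up in the Southbound DB — which is what Neutron later turns into the network-vif-plugged event Nova is waiting on. One way to watch that binding state by hand, a sketch assuming ovn-sbctl on this node can reach the Southbound DB:

    import subprocess

    def binding_chassis(logical_port: str) -> str:
        # Empty output means the port is not yet claimed by any chassis.
        return subprocess.run(
            ["ovn-sbctl", "--bare", "--columns=chassis", "find",
             "Port_Binding", f"logical_port={logical_port}"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()

    # e.g. binding_chassis("cf01fba8-1ce4-4048-8b70-76060249d02d")
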
Nov 29 07:46:59 compute-0 nova_compute[256729]: 2025-11-29 07:46:59.908 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:00 compute-0 sshd-session[265802]: Connection closed by authenticating user root 143.14.121.41 port 57932 [preauth]
Nov 29 07:47:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 07:47:00 compute-0 podman[266020]: 2025-11-29 07:47:00.137157134 +0000 UTC m=+0.672626872 container create c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:47:00 compute-0 ceph-mon[75050]: pgmap v1170: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Nov 29 07:47:00 compute-0 ceph-mon[75050]: osdmap e134: 3 total, 3 up, 3 in
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.158 256736 DEBUG nova.compute.manager [req-1bcf9e61-73ea-4595-a91c-a6157e28abdd req-28c338fc-c167-4cb3-bbe7-4553592655d0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.159 256736 DEBUG oslo_concurrency.lockutils [req-1bcf9e61-73ea-4595-a91c-a6157e28abdd req-28c338fc-c167-4cb3-bbe7-4553592655d0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.159 256736 DEBUG oslo_concurrency.lockutils [req-1bcf9e61-73ea-4595-a91c-a6157e28abdd req-28c338fc-c167-4cb3-bbe7-4553592655d0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.159 256736 DEBUG oslo_concurrency.lockutils [req-1bcf9e61-73ea-4595-a91c-a6157e28abdd req-28c338fc-c167-4cb3-bbe7-4553592655d0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.159 256736 DEBUG nova.compute.manager [req-1bcf9e61-73ea-4595-a91c-a6157e28abdd req-28c338fc-c167-4cb3-bbe7-4553592655d0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Processing event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:47:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 07:47:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:47:00 compute-0 systemd[1]: Started libpod-conmon-c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583.scope.
Nov 29 07:47:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.435 163655 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.437 163655 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpv5pj6f28/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.322 266092 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.326 266092 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.328 266092 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.328 266092 INFO oslo.privsep.daemon [-] privsep daemon running as pid 266092
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.440 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bc270756-fca8-4e51-8e70-8066f608204e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
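
[annotation] These privsep lines show the standard oslo.privsep bootstrap: the agent launches a root helper through rootwrap, the helper pins itself to the fixed capability set logged above (CAP_NET_ADMIN, CAP_SYS_ADMIN, ...), and later privileged calls travel over the tmp socket instead of spawning fresh sudo processes. The declaration side looks roughly like this sketch, following the documented oslo.privsep pattern (context name, capability list, and the decorated function are illustrative):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    default = priv_context.PrivContext(
        "neutron",
        cfg_section="privsep",
        pypath=__name__ + ".default",
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN,
                      caps.CAP_DAC_OVERRIDE, caps.CAP_DAC_READ_SEARCH],
    )

    @default.entrypoint
    def set_link_up(device: str) -> None:
        # Body executes inside the root helper; the return value comes back
        # to the unprivileged caller over the privsep socket.
        ...
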
Nov 29 07:47:00 compute-0 podman[266020]: 2025-11-29 07:47:00.450171 +0000 UTC m=+0.985640758 container init c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:00 compute-0 podman[266020]: 2025-11-29 07:47:00.459050742 +0000 UTC m=+0.994520480 container start c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_tharp, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:47:00 compute-0 podman[266020]: 2025-11-29 07:47:00.463582465 +0000 UTC m=+0.999052233 container attach c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:47:00 compute-0 gifted_tharp[266088]: 167 167
Nov 29 07:47:00 compute-0 systemd[1]: libpod-c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583.scope: Deactivated successfully.
Nov 29 07:47:00 compute-0 conmon[266088]: conmon c9a25e3afe983c01389c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583.scope/container/memory.events
Nov 29 07:47:00 compute-0 podman[266098]: 2025-11-29 07:47:00.51037984 +0000 UTC m=+0.027240463 container died c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_tharp, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:47:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a13267c494d553add045c9c8cd07cc32b1ecaea07ff77fad9ab8906fdf4a7240-merged.mount: Deactivated successfully.
Nov 29 07:47:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 31 op/s
Nov 29 07:47:00 compute-0 podman[266098]: 2025-11-29 07:47:00.543115522 +0000 UTC m=+0.059976115 container remove c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:47:00 compute-0 systemd[1]: libpod-conmon-c9a25e3afe983c01389c1ad12990d77555ed71b3235444a38f590dd97594c583.scope: Deactivated successfully.
Nov 29 07:47:00 compute-0 podman[266121]: 2025-11-29 07:47:00.704055525 +0000 UTC m=+0.044891963 container create f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.739 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.740 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:00 compute-0 systemd[1]: Started libpod-conmon-f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5.scope.
Nov 29 07:47:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0951e87bb666185113339973edce8d0304727f70a22ee6abaec16dbf3ea43976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0951e87bb666185113339973edce8d0304727f70a22ee6abaec16dbf3ea43976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0951e87bb666185113339973edce8d0304727f70a22ee6abaec16dbf3ea43976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0951e87bb666185113339973edce8d0304727f70a22ee6abaec16dbf3ea43976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:00 compute-0 podman[266121]: 2025-11-29 07:47:00.684722609 +0000 UTC m=+0.025559097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:00 compute-0 podman[266121]: 2025-11-29 07:47:00.791373744 +0000 UTC m=+0.132210212 container init f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:47:00 compute-0 podman[266121]: 2025-11-29 07:47:00.798620951 +0000 UTC m=+0.139457389 container start f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:47:00 compute-0 podman[266121]: 2025-11-29 07:47:00.80151215 +0000 UTC m=+0.142348578 container attach f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.951 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.952 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402420.9507744, 470f20d7-0c57-4067-a7ff-7f6b0971ad23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.952 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] VM Started (Lifecycle Event)
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.957 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.967 256736 INFO nova.virt.libvirt.driver [-] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Instance spawned successfully.
Nov 29 07:47:00 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.969 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.985 266092 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.985 266092 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:00.986 266092 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:00.999 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.004 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.033 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.036 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402420.951032, 470f20d7-0c57-4067-a7ff-7f6b0971ad23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.038 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] VM Paused (Lifecycle Event)
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.046 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.046 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.047 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.047 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.048 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.048 256736 DEBUG nova.virt.libvirt.driver [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
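
[annotation] Nova records the bus/model choices it had to pick for this guest into the instance record, so the virtual hardware stays identical across later lifecycle operations. Collected from the six "Found default" lines above (mapping literal only, values straight from the log):

    # Defaults Nova registered for instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23.
    REGISTERED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
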
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.088 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.092 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402420.9567435, 470f20d7-0c57-4067-a7ff-7f6b0971ad23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.092 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] VM Resumed (Lifecycle Event)
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.116 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.119 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.128 256736 INFO nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Took 24.77 seconds to spawn the instance on the hypervisor.
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.129 256736 DEBUG nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.156 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:47:01 compute-0 ceph-mon[75050]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 07:47:01 compute-0 ceph-mon[75050]: pgmap v1173: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 31 op/s
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.221 256736 INFO nova.compute.manager [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Took 25.76 seconds to build instance.
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.244 256736 DEBUG oslo_concurrency.lockutils [None req-7b9247cb-af9f-4b51-ab1b-84e74d2c2ca4 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
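
[annotation] The timings are consistent: 24.77 s for the hypervisor spawn inside a 25.76 s build, with the per-instance lock held for 25.917 s overall. A small sketch for pulling such figures out of a journal excerpt like this one (regex tuned to the nova wording above; the helper name is illustrative):

    import re
    from typing import Iterable, Iterator

    TOOK = re.compile(
        r"\[instance: (?P<uuid>[0-9a-f-]+)\] "
        r"Took (?P<secs>[\d.]+) seconds to (?P<what>\w+)")

    def build_timings(lines: Iterable[str]) -> Iterator[tuple[str, float, str]]:
        for line in lines:
            if (m := TOOK.search(line)):
                yield m["uuid"], float(m["secs"]), m["what"]

    # Fed the two INFO lines above, this yields
    # ("470f20d7-...", 24.77, "spawn") and ("470f20d7-...", 25.76, "build").
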
Nov 29 07:47:01 compute-0 nova_compute[256729]: 2025-11-29 07:47:01.248 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1602692490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1602692490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
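
[annotation] The audit lines show client.openstack at 192.168.122.10 (most likely Cinder's RBD driver, given the "volumes" pool) polling pool capacity with the df and osd pool get-quota mon commands. The same query via the python-rados bindings, a sketch assuming a reachable cluster and the client.openstack keyring:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        # mon_command takes the JSON command and an input buffer, and returns
        # (return code, output buffer, status string).
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        pools = {p["name"]: p["stats"] for p in json.loads(out)["pools"]}
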
Nov 29 07:47:01 compute-0 admiring_euler[266144]: {
Nov 29 07:47:01 compute-0 admiring_euler[266144]:     "0": [
Nov 29 07:47:01 compute-0 admiring_euler[266144]:         {
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "devices": [
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "/dev/loop3"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             ],
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_name": "ceph_lv0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_size": "21470642176",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "name": "ceph_lv0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "tags": {
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cluster_name": "ceph",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.crush_device_class": "",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.encrypted": "0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osd_id": "0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.type": "block",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.vdo": "0"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             },
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "type": "block",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "vg_name": "ceph_vg0"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:         }
Nov 29 07:47:01 compute-0 admiring_euler[266144]:     ],
Nov 29 07:47:01 compute-0 admiring_euler[266144]:     "1": [
Nov 29 07:47:01 compute-0 admiring_euler[266144]:         {
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "devices": [
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "/dev/loop4"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             ],
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_name": "ceph_lv1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_size": "21470642176",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "name": "ceph_lv1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "tags": {
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cluster_name": "ceph",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.crush_device_class": "",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.encrypted": "0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osd_id": "1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.type": "block",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.vdo": "0"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             },
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "type": "block",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "vg_name": "ceph_vg1"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:         }
Nov 29 07:47:01 compute-0 admiring_euler[266144]:     ],
Nov 29 07:47:01 compute-0 admiring_euler[266144]:     "2": [
Nov 29 07:47:01 compute-0 admiring_euler[266144]:         {
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "devices": [
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "/dev/loop5"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             ],
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_name": "ceph_lv2",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_size": "21470642176",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "name": "ceph_lv2",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "tags": {
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.cluster_name": "ceph",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.crush_device_class": "",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.encrypted": "0",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osd_id": "2",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.type": "block",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:                 "ceph.vdo": "0"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             },
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "type": "block",
Nov 29 07:47:01 compute-0 admiring_euler[266144]:             "vg_name": "ceph_vg2"
Nov 29 07:47:01 compute-0 admiring_euler[266144]:         }
Nov 29 07:47:01 compute-0 admiring_euler[266144]:     ]
Nov 29 07:47:01 compute-0 admiring_euler[266144]: }
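
[annotation] The JSON above maps each OSD id to its logical volumes, carrying in the LVM tags everything ceph-volume needs to reassemble the OSD at boot (cluster fsid, OSD fsid, encryption flag, device class). A sketch for extracting the block device per OSD from that payload:

    import json

    def osd_block_devices(payload: str) -> dict[int, str]:
        # Top-level keys are OSD ids; each value is a list of LV records.
        data = json.loads(payload)
        return {
            int(osd_id): lv["lv_path"]
            for osd_id, lvs in data.items()
            for lv in lvs
            if lv.get("type") == "block"
        }

    # -> {0: "/dev/ceph_vg0/ceph_lv0", 1: "/dev/ceph_vg1/ceph_lv1",
    #     2: "/dev/ceph_vg2/ceph_lv2"}
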
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.568 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d0e2c4-dd9d-4435-beb3-14c6bece7c97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.570 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa027e4c7-11 in ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.572 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa027e4c7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.572 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[641a65ec-66f1-409a-93b6-98d340b87853]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.575 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cae824e4-8c7e-4a6b-8f4b-c55aa99e681d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:01 compute-0 systemd[1]: libpod-f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5.scope: Deactivated successfully.
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.605 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf82897-c07e-4baa-ab66-30136f240660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.621 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9ec922-0005-4eaf-870b-2577fb1f581e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
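
[annotation] "Provisioning metadata" for the datapath means building the ovnmeta- namespace with a veth pair whose inner end will answer instance requests to 169.254.169.254. The agent does this through privsep and netlink, but the plumbing reduces to plain iproute2, roughly as below (interface and namespace names taken from the log lines above; which veth end lands where is inferred, and this sketch shells out rather than using netlink):

    import subprocess

    def _run(*args: str) -> None:
        subprocess.run(args, check=True)

    def provision_metadata_veth(ns: str, outer: str, inner: str) -> None:
        _run("ip", "netns", "add", ns)
        _run("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
        _run("ip", "link", "set", inner, "netns", ns)   # inner end into the ns
        _run("ip", "link", "set", outer, "up")
        _run("ip", "-n", ns, "link", "set", inner, "up")

    # e.g. provision_metadata_veth(
    #     "ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f",
    #     "tapa027e4c7-10", "tapa027e4c7-11")
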
Nov 29 07:47:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:01.626 163655 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp_ylguf3k/privsep.sock']
Nov 29 07:47:01 compute-0 podman[266193]: 2025-11-29 07:47:01.650561447 +0000 UTC m=+0.041545043 container died f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0951e87bb666185113339973edce8d0304727f70a22ee6abaec16dbf3ea43976-merged.mount: Deactivated successfully.
Nov 29 07:47:01 compute-0 podman[266193]: 2025-11-29 07:47:01.706443119 +0000 UTC m=+0.097426705 container remove f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:47:01 compute-0 systemd[1]: libpod-conmon-f958a5f023b30e56f0e6a7557b128cd78a8538fa08c77d0dd17fc68343a563b5.scope: Deactivated successfully.
Nov 29 07:47:01 compute-0 sudo[265955]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:01 compute-0 sudo[266212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:01 compute-0 sudo[266212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:01 compute-0 sudo[266212]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:01 compute-0 sudo[266237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:47:01 compute-0 sudo[266237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:01 compute-0 sudo[266237]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:01 compute-0 sudo[266262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:01 compute-0 sudo[266262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:01 compute-0 sudo[266262]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:01 compute-0 sudo[266288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:47:01 compute-0 sudo[266288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 07:47:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 07:47:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 07:47:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1602692490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1602692490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:02 compute-0 podman[266351]: 2025-11-29 07:47:02.315975512 +0000 UTC m=+0.047217116 container create a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:02 compute-0 systemd[1]: Started libpod-conmon-a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225.scope.
Nov 29 07:47:02 compute-0 nova_compute[256729]: 2025-11-29 07:47:02.368 256736 DEBUG nova.compute.manager [req-bf1a0799-934f-4a29-a7da-59132bbdaddb req-4beb9a0b-827e-453f-9d0c-77e282adb45a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:02 compute-0 nova_compute[256729]: 2025-11-29 07:47:02.369 256736 DEBUG oslo_concurrency.lockutils [req-bf1a0799-934f-4a29-a7da-59132bbdaddb req-4beb9a0b-827e-453f-9d0c-77e282adb45a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:02 compute-0 nova_compute[256729]: 2025-11-29 07:47:02.369 256736 DEBUG oslo_concurrency.lockutils [req-bf1a0799-934f-4a29-a7da-59132bbdaddb req-4beb9a0b-827e-453f-9d0c-77e282adb45a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:02 compute-0 nova_compute[256729]: 2025-11-29 07:47:02.371 256736 DEBUG oslo_concurrency.lockutils [req-bf1a0799-934f-4a29-a7da-59132bbdaddb req-4beb9a0b-827e-453f-9d0c-77e282adb45a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:02 compute-0 nova_compute[256729]: 2025-11-29 07:47:02.374 256736 DEBUG nova.compute.manager [req-bf1a0799-934f-4a29-a7da-59132bbdaddb req-4beb9a0b-827e-453f-9d0c-77e282adb45a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] No waiting events found dispatching network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:02 compute-0 nova_compute[256729]: 2025-11-29 07:47:02.374 256736 WARNING nova.compute.manager [req-bf1a0799-934f-4a29-a7da-59132bbdaddb req-4beb9a0b-827e-453f-9d0c-77e282adb45a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received unexpected event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d for instance with vm_state active and task_state None.
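The six nova-compute lines above trace the external-event path for network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d: the per-instance lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" is acquired, the named event is popped from the waiter map, and since no waiter is registered (vm_state active, task_state None) nova logs it as unexpected. A minimal illustrative sketch of that pop-under-lock pattern, in Python and not nova's actual implementation:

    import threading

    # Illustrative sketch only (not nova's code): named events are popped
    # from a per-instance map under a lock, mirroring the lockutils
    # Acquiring/acquired/released lines above.
    _waiters = {}             # event name -> threading.Event
    _lock = threading.Lock()  # stands in for lock "<instance-uuid>-events"

    def pop_instance_event(name):
        with _lock:
            return _waiters.pop(name, None)

    ev = pop_instance_event("network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d")
    if ev is None:
        print("unexpected event")  # the WARNING case logged above
    else:
        ev.set()                   # wake a waiter blocked on ev.wait()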
Nov 29 07:47:02 compute-0 podman[266351]: 2025-11-29 07:47:02.294162998 +0000 UTC m=+0.025404632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:02 compute-0 podman[266351]: 2025-11-29 07:47:02.412595734 +0000 UTC m=+0.143837368 container init a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:02 compute-0 podman[266351]: 2025-11-29 07:47:02.420078118 +0000 UTC m=+0.151319722 container start a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.422 163655 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 07:47:02 compute-0 zen_euclid[266368]: 167 167
Nov 29 07:47:02 compute-0 systemd[1]: libpod-a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225.scope: Deactivated successfully.
Nov 29 07:47:02 compute-0 podman[266351]: 2025-11-29 07:47:02.424355954 +0000 UTC m=+0.155597618 container attach a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.424 163655 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_ylguf3k/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 07:47:02 compute-0 podman[266351]: 2025-11-29 07:47:02.425337362 +0000 UTC m=+0.156579006 container died a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.272 266358 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.275 266358 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.277 266358 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.277 266358 INFO oslo.privsep.daemon [-] privsep daemon running as pid 266358
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.427 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5c5d1e-ed21-4ddb-ae2f-1c94bb2a36bf]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-14f544c0ab09040bf9ec28d86cdfe084b5fe55a69cf755c8b99cbbd08b4d333e-merged.mount: Deactivated successfully.
Nov 29 07:47:02 compute-0 podman[266351]: 2025-11-29 07:47:02.464244081 +0000 UTC m=+0.195485685 container remove a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:47:02 compute-0 systemd[1]: libpod-conmon-a3fb45032b189b3352ee1cc0e1eb9673cab8984a76913504c3df931f23763225.scope: Deactivated successfully.
Nov 29 07:47:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 4.2 KiB/s wr, 87 op/s
Nov 29 07:47:02 compute-0 podman[266395]: 2025-11-29 07:47:02.642547198 +0000 UTC m=+0.048450161 container create fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 07:47:02 compute-0 systemd[1]: Started libpod-conmon-fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c.scope.
Nov 29 07:47:02 compute-0 podman[266395]: 2025-11-29 07:47:02.62282312 +0000 UTC m=+0.028726073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:47:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232d69065ed0fca49606a52d152cfacf4680ce3183815d3871a6632d4b73206a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232d69065ed0fca49606a52d152cfacf4680ce3183815d3871a6632d4b73206a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232d69065ed0fca49606a52d152cfacf4680ce3183815d3871a6632d4b73206a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232d69065ed0fca49606a52d152cfacf4680ce3183815d3871a6632d4b73206a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
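The four kernel lines above flag the signed 32-bit timestamp limit on these xfs-backed overlay mounts; 0x7fffffff seconds after the Unix epoch is 2038-01-19 03:14:07 UTC, which a one-liner confirms:

    from datetime import datetime, timezone

    # 0x7fffffff == 2**31 - 1, the largest signed 32-bit time_t value.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00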
Nov 29 07:47:02 compute-0 podman[266395]: 2025-11-29 07:47:02.739691774 +0000 UTC m=+0.145594727 container init fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:47:02 compute-0 podman[266395]: 2025-11-29 07:47:02.750464018 +0000 UTC m=+0.156366951 container start fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 07:47:02 compute-0 podman[266395]: 2025-11-29 07:47:02.757234772 +0000 UTC m=+0.163137865 container attach fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.956 266358 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.956 266358 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:02.957 266358 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:03 compute-0 ceph-mon[75050]: osdmap e136: 3 total, 3 up, 3 in
Nov 29 07:47:03 compute-0 ceph-mon[75050]: pgmap v1175: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 4.2 KiB/s wr, 87 op/s
Nov 29 07:47:03 compute-0 sshd-session[266091]: Connection closed by authenticating user root 143.14.121.41 port 57940 [preauth]
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.539 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[df08f91a-9139-4e4a-af3c-0f065e9daf7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 NetworkManager[48962]: <info>  [1764402423.5660] manager: (tapa027e4c7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.565 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[917e233a-8c6e-42af-a692-08a8cbfb467d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 systemd-udevd[266441]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.595 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[83e89cc6-7f22-422e-8e5f-748c089c1fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.597 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[98ceee45-4649-4df2-9d3b-6da8fe82ad56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 NetworkManager[48962]: <info>  [1764402423.6200] device (tapa027e4c7-10): carrier: link connected
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.625 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[343cbb05-821d-424e-99db-85dc5932fe7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.658 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6671b3-9c5d-47bb-b5c6-bf239f8ce40f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa027e4c7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:65:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473868, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266464, 'error': None, 'target': 'ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.675 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c512b912-b7b3-45f3-8c7f-f9c28128bc72]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:65d8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473868, 'tstamp': 473868}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266468, 'error': None, 'target': 'ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.692 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[31c967cb-64ad-4540-b7a6-b5799d6b6fcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa027e4c7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:65:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473868, 'reachable_time': 17368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266469, 'error': None, 'target': 'ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]: {
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "osd_id": 2,
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "type": "bluestore"
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:     },
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "osd_id": 1,
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "type": "bluestore"
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:     },
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "osd_id": 0,
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:         "type": "bluestore"
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]:     }
Nov 29 07:47:03 compute-0 gifted_proskuriakova[266413]: }
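The JSON block printed by the gifted_proskuriakova container above is the output of the cephadm-wrapped "ceph-volume ... raw list --format json" call issued at 07:47:01 (sudo[266288]): a map keyed by osd_uuid with one bluestore entry per OSD on this host. A minimal Python sketch of consuming it, assuming raw_json_text holds the block above:

    import json

    # Sketch: turn the `raw list` JSON above into an osd_id -> device map.
    # raw_json_text is an assumption standing for the block printed above.
    raw_list = json.loads(raw_json_text)
    devices_by_osd = {
        entry["osd_id"]: entry["device"]
        for entry in raw_list.values()
        if entry.get("type") == "bluestore"
    }
    # -> {2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #     1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #     0: '/dev/mapper/ceph_vg0-ceph_lv0'}
    # All three entries carry the cluster fsid seen in the cephadm call:
    assert {e["ceph_fsid"] for e in raw_list.values()} == {"14ff1f30-5059-58f1-9a23-69871bb275a1"}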
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.722 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d34dda-b274-479a-b60e-ba6200b49443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 systemd[1]: libpod-fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c.scope: Deactivated successfully.
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.783 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[de2f10f8-bb0e-415d-9a3a-bbd0ec642ca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.785 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa027e4c7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.785 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.786 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa027e4c7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:03 compute-0 kernel: tapa027e4c7-10: entered promiscuous mode
Nov 29 07:47:03 compute-0 nova_compute[256729]: 2025-11-29 07:47:03.789 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:03 compute-0 NetworkManager[48962]: <info>  [1764402423.7901] manager: (tapa027e4c7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.795 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa027e4c7-10, col_values=(('external_ids', {'iface-id': '94fe3e58-3f42-4a72-a498-8a77e4982797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:03 compute-0 ovn_controller[153383]: 2025-11-29T07:47:03Z|00031|binding|INFO|Releasing lport 94fe3e58-3f42-4a72-a498-8a77e4982797 from this chassis (sb_readonly=0)
Nov 29 07:47:03 compute-0 nova_compute[256729]: 2025-11-29 07:47:03.797 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:03 compute-0 podman[266479]: 2025-11-29 07:47:03.798033722 +0000 UTC m=+0.028990571 container died fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:47:03 compute-0 nova_compute[256729]: 2025-11-29 07:47:03.811 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.812 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a027e4c7-144b-44ef-882c-4c6ddedeae6f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a027e4c7-144b-44ef-882c-4c6ddedeae6f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.814 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[03676bfe-c0bf-45e8-bd3a-9b0853484f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.816 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-a027e4c7-144b-44ef-882c-4c6ddedeae6f
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/a027e4c7-144b-44ef-882c-4c6ddedeae6f.pid.haproxy
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID a027e4c7-144b-44ef-882c-4c6ddedeae6f
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:47:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:03.817 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'env', 'PROCESS_TAG=haproxy-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a027e4c7-144b-44ef-882c-4c6ddedeae6f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
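Per create_config_file above, the agent renders the haproxy configuration shown (unix-socket backend /var/lib/neutron/metadata_proxy, bind on 169.254.169.254:80, the X-OVN-Network-ID header) and then launches haproxy inside the ovnmeta- namespace with the rootwrap command just logged; in this deployment the spawn lands in a podman container (neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f, below). A minimal sketch of the bare spawn that command line describes, assuming direct root privileges instead of sudo plus neutron-rootwrap:

    import subprocess

    # Sketch of the logged argv, minus the sudo/neutron-rootwrap wrapper.
    netns = "ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f"
    cfg = "/var/lib/neutron/ovn-metadata-proxy/a027e4c7-144b-44ef-882c-4c6ddedeae6f.conf"
    subprocess.run(
        ["ip", "netns", "exec", netns,
         "env", "PROCESS_TAG=haproxy-a027e4c7-144b-44ef-882c-4c6ddedeae6f",
         "haproxy", "-f", cfg],
        check=True,  # returns once haproxy daemonizes ('daemon' in the config above)
    )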
Nov 29 07:47:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-232d69065ed0fca49606a52d152cfacf4680ce3183815d3871a6632d4b73206a-merged.mount: Deactivated successfully.
Nov 29 07:47:03 compute-0 podman[266479]: 2025-11-29 07:47:03.859563447 +0000 UTC m=+0.090520276 container remove fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:47:03 compute-0 systemd[1]: libpod-conmon-fcc9634d92eb3e0af6db727ebd2f30c5d155a759babc47b51c930cda74f3208c.scope: Deactivated successfully.
Nov 29 07:47:03 compute-0 sudo[266288]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:47:03 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:47:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:47:03 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:47:03 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 8eb3719b-91ac-4981-aaf6-a978f66750f5 does not exist
Nov 29 07:47:03 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b2f16034-7c52-4998-b649-07bc2512b249 does not exist
Nov 29 07:47:03 compute-0 sudo[266497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:47:03 compute-0 sudo[266497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:03 compute-0 sudo[266497]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:04 compute-0 sudo[266522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:47:04 compute-0 sudo[266522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:47:04 compute-0 sudo[266522]: pam_unix(sudo:session): session closed for user root
Nov 29 07:47:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 07:47:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 07:47:04 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 07:47:04 compute-0 podman[266569]: 2025-11-29 07:47:04.251443832 +0000 UTC m=+0.075874788 container create b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:47:04 compute-0 systemd[1]: Started libpod-conmon-b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f.scope.
Nov 29 07:47:04 compute-0 podman[266569]: 2025-11-29 07:47:04.210045915 +0000 UTC m=+0.034476951 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:47:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:47:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222c00220a79864af282d15534f5b17945b4a92f766ab93ce1153c4619605736/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:47:04 compute-0 podman[266569]: 2025-11-29 07:47:04.337757943 +0000 UTC m=+0.162188889 container init b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:47:04 compute-0 podman[266569]: 2025-11-29 07:47:04.344761504 +0000 UTC m=+0.169192450 container start b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 07:47:04 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [NOTICE]   (266588) : New worker (266590) forked
Nov 29 07:47:04 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [NOTICE]   (266588) : Loading success.
Nov 29 07:47:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:04.414 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:47:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 31 KiB/s wr, 111 op/s
Nov 29 07:47:04 compute-0 nova_compute[256729]: 2025-11-29 07:47:04.618 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:04 compute-0 nova_compute[256729]: 2025-11-29 07:47:04.797 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.7985] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.7991] device (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.8002] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.8005] device (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.8013] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.8019] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.8024] device (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:47:04 compute-0 NetworkManager[48962]: <info>  [1764402424.8027] device (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 07:47:04 compute-0 nova_compute[256729]: 2025-11-29 07:47:04.886 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:04 compute-0 ovn_controller[153383]: 2025-11-29T07:47:04Z|00032|binding|INFO|Releasing lport 94fe3e58-3f42-4a72-a498-8a77e4982797 from this chassis (sb_readonly=0)
Nov 29 07:47:04 compute-0 nova_compute[256729]: 2025-11-29 07:47:04.903 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:47:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:47:04 compute-0 ceph-mon[75050]: osdmap e137: 3 total, 3 up, 3 in
Nov 29 07:47:04 compute-0 ceph-mon[75050]: pgmap v1177: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 31 KiB/s wr, 111 op/s
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:47:05
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', '.mgr', 'volumes', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 07:47:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:47:05 compute-0 nova_compute[256729]: 2025-11-29 07:47:05.619 256736 DEBUG nova.compute.manager [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-changed-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:05 compute-0 nova_compute[256729]: 2025-11-29 07:47:05.619 256736 DEBUG nova.compute.manager [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Refreshing instance network info cache due to event network-changed-cf01fba8-1ce4-4048-8b70-76060249d02d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:47:05 compute-0 nova_compute[256729]: 2025-11-29 07:47:05.620 256736 DEBUG oslo_concurrency.lockutils [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:47:05 compute-0 nova_compute[256729]: 2025-11-29 07:47:05.620 256736 DEBUG oslo_concurrency.lockutils [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:47:05 compute-0 nova_compute[256729]: 2025-11-29 07:47:05.620 256736 DEBUG nova.network.neutron [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Refreshing network info cache for port cf01fba8-1ce4-4048-8b70-76060249d02d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:47:06 compute-0 nova_compute[256729]: 2025-11-29 07:47:06.251 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 07:47:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 07:47:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 32 KiB/s wr, 176 op/s
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:47:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:47:07 compute-0 nova_compute[256729]: 2025-11-29 07:47:07.539 256736 DEBUG nova.network.neutron [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updated VIF entry in instance network info cache for port cf01fba8-1ce4-4048-8b70-76060249d02d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:47:07 compute-0 nova_compute[256729]: 2025-11-29 07:47:07.540 256736 DEBUG nova.network.neutron [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updating instance_info_cache with network_info: [{"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
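The network_info blob in the line above is plain JSON, so the addresses can be pulled out with the standard library. A sketch over a trimmed copy of the structure nova logged (only the fields used here are kept):

    import json

    # Trimmed from the instance_info_cache update above.
    network_info = json.loads("""[{
      "id": "cf01fba8-1ce4-4048-8b70-76060249d02d",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11",
                 "floating_ips": [{"address": "192.168.122.190"}]}]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)  # fixed -> floating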
Nov 29 07:47:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 07:47:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 07:47:07 compute-0 ceph-mon[75050]: osdmap e138: 3 total, 3 up, 3 in
Nov 29 07:47:07 compute-0 ceph-mon[75050]: pgmap v1179: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 32 KiB/s wr, 176 op/s
Nov 29 07:47:07 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 07:47:08 compute-0 nova_compute[256729]: 2025-11-29 07:47:08.080 256736 DEBUG oslo_concurrency.lockutils [req-acc3181d-4ba6-445c-a876-10cff4811c61 req-ff88846f-336d-448b-9fdf-4694db669df8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:47:08 compute-0 sshd-session[266463]: Connection closed by authenticating user root 143.14.121.41 port 49322 [preauth]
Nov 29 07:47:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:08.416 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
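The DbSetCommand above is ovsdbapp updating the agent's Chassis_Private row in the OVN southbound DB. A rough sketch of the same call through ovsdbapp's API; the endpoint, timeout, and record UUID handling are assumptions, and the module paths may differ between ovsdbapp releases:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6642", "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Equivalent of the logged command (which also passed if_exists=True):
    # set one key in the record's external_ids column.
    api.db_set(
        "Chassis_Private", "df234f2c-4343-4c91-861d-13d184c56aa0",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "4"}),
    ).execute(check_error=True)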
Nov 29 07:47:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 30 KiB/s wr, 179 op/s
Nov 29 07:47:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1835873863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1835873863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:09 compute-0 ceph-mon[75050]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 07:47:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1835873863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1835873863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
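The df / "osd pool get-quota" dispatches above are how a client such as client.openstack polls pool capacity: a JSON mon command sent over librados. A minimal sketch; the conffile path is an assumption:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()

    # Same command the monitor logs as handle_command/dispatch above.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    if ret == 0:
        print(json.loads(outbuf)["stats"])
    cluster.shutdown()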
Nov 29 07:47:09 compute-0 nova_compute[256729]: 2025-11-29 07:47:09.620 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:09 compute-0 podman[266603]: 2025-11-29 07:47:09.707769406 +0000 UTC m=+0.073882373 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 07:47:09 compute-0 podman[266604]: 2025-11-29 07:47:09.707895109 +0000 UTC m=+0.071316043 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 29 07:47:09 compute-0 podman[266602]: 2025-11-29 07:47:09.72847347 +0000 UTC m=+0.098141735 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
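The three health_status=healthy events above come from podman's healthcheck timers running each container's configured test command ('/openstack/healthcheck' per the config_data). The same check can be invoked by hand; a sketch, with the container name taken from the first event:

    import subprocess

    # `podman healthcheck run` exits 0 when the configured test passes.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0 else "unhealthy")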
Nov 29 07:47:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 29 07:47:10 compute-0 ceph-mon[75050]: pgmap v1181: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 30 KiB/s wr, 179 op/s
Nov 29 07:47:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 07:47:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 29 07:47:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.7 KiB/s wr, 132 op/s
Nov 29 07:47:11 compute-0 nova_compute[256729]: 2025-11-29 07:47:11.253 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 29 07:47:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 92 op/s
Nov 29 07:47:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 07:47:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 29 07:47:13 compute-0 sshd-session[266601]: Connection closed by authenticating user root 143.14.121.41 port 49338 [preauth]
Nov 29 07:47:13 compute-0 ceph-mon[75050]: osdmap e140: 3 total, 3 up, 3 in
Nov 29 07:47:13 compute-0 ceph-mon[75050]: pgmap v1183: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.7 KiB/s wr, 132 op/s
Nov 29 07:47:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Nov 29 07:47:14 compute-0 nova_compute[256729]: 2025-11-29 07:47:14.623 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.642121670899264e-06 of space, bias 1.0, pg target 0.0013926365012697794 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:47:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
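The pg_autoscaler targets above are recoverable from the logged numbers: each pool's pg target appears to be usage_ratio * bias * (target PGs per OSD * OSD count), which reproduces the logged values if mon_target_pg_per_osd is at its default of 100 with the 3 OSDs this cluster reports (multiplier 300); the result is then quantized to a power of two, and kept at the current value when the change is too small. A check against two of the pools:

    # Assumes the default mon_target_pg_per_osd=100 and the 3 OSDs above.
    def pg_target(usage_ratio, bias, osds=3, target_per_osd=100):
        return usage_ratio * bias * osds * target_per_osd

    # Pool 'vms': bias 1.0 -> logged target 0.10463850971869505
    print(pg_target(0.0003487950323956502, 1.0))
    # Pool 'cephfs.cephfs.meta': bias 4.0 -> logged target 0.0006104707950771635
    print(pg_target(5.087256625643029e-07, 4.0))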
Nov 29 07:47:16 compute-0 nova_compute[256729]: 2025-11-29 07:47:16.256 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 29 07:47:17 compute-0 ceph-mon[75050]: pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.3 KiB/s wr, 92 op/s
Nov 29 07:47:17 compute-0 ceph-mon[75050]: osdmap e141: 3 total, 3 up, 3 in
Nov 29 07:47:17 compute-0 ceph-mon[75050]: pgmap v1186: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Nov 29 07:47:17 compute-0 sshd-session[266664]: Connection closed by authenticating user root 143.14.121.41 port 60228 [preauth]
Nov 29 07:47:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 89 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 272 KiB/s wr, 46 op/s
Nov 29 07:47:19 compute-0 ceph-mon[75050]: pgmap v1187: 305 pgs: 305 active+clean; 88 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 29 07:47:19 compute-0 nova_compute[256729]: 2025-11-29 07:47:19.624 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:19 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:47:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 29 07:47:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 07:47:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 29 07:47:20 compute-0 ceph-mon[75050]: pgmap v1188: 305 pgs: 305 active+clean; 89 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 272 KiB/s wr, 46 op/s
Nov 29 07:47:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 89 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 271 KiB/s wr, 25 op/s
Nov 29 07:47:21 compute-0 nova_compute[256729]: 2025-11-29 07:47:21.259 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:21 compute-0 ovn_controller[153383]: 2025-11-29T07:47:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:90:9b 10.100.0.11
Nov 29 07:47:21 compute-0 ovn_controller[153383]: 2025-11-29T07:47:21Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:90:9b 10.100.0.11
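The DHCPOFFER/DHCPACK pair above is OVN's pinctrl thread answering the guest's DHCP request locally (no dnsmasq involved); the MAC and 10.100.0.11 match the port in the instance_info_cache update earlier. A sketch extracting MAC-to-address assignments from lines of this shape:

    import re

    line = ("2025-11-29T07:47:21Z|00004|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPOFFER fa:16:3e:08:90:9b 10.100.0.11")

    # Message type, client MAC, offered address.
    m = re.search(r"\|(DHCPOFFER|DHCPACK)\s+([0-9a-f:]{17})\s+(\S+)", line)
    if m:
        msg, mac, ip = m.groups()
        print(msg, mac, "->", ip)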
Nov 29 07:47:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 29 07:47:22 compute-0 sshd-session[266666]: Invalid user pi from 143.14.121.41 port 60244
Nov 29 07:47:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 07:47:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 29 07:47:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 105 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 1.7 MiB/s wr, 67 op/s
Nov 29 07:47:22 compute-0 sshd-session[266666]: Connection closed by invalid user pi 143.14.121.41 port 60244 [preauth]
Nov 29 07:47:22 compute-0 ceph-mon[75050]: osdmap e142: 3 total, 3 up, 3 in
Nov 29 07:47:22 compute-0 ceph-mon[75050]: pgmap v1190: 305 pgs: 305 active+clean; 89 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 271 KiB/s wr, 25 op/s
Nov 29 07:47:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 29 07:47:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 07:47:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 29 07:47:24 compute-0 ceph-mon[75050]: osdmap e143: 3 total, 3 up, 3 in
Nov 29 07:47:24 compute-0 ceph-mon[75050]: pgmap v1192: 305 pgs: 305 active+clean; 105 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 1.7 MiB/s wr, 67 op/s
Nov 29 07:47:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 113 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 597 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Nov 29 07:47:24 compute-0 nova_compute[256729]: 2025-11-29 07:47:24.627 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:25 compute-0 ceph-mon[75050]: osdmap e144: 3 total, 3 up, 3 in
Nov 29 07:47:25 compute-0 ceph-mon[75050]: pgmap v1194: 305 pgs: 305 active+clean; 113 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 597 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Nov 29 07:47:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/117983254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/117983254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:25 compute-0 sshd-session[266670]: Invalid user oracle from 143.14.121.41 port 56766
Nov 29 07:47:25 compute-0 sshd-session[266670]: Connection closed by invalid user oracle 143.14.121.41 port 56766 [preauth]
Nov 29 07:47:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/117983254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/117983254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:26 compute-0 nova_compute[256729]: 2025-11-29 07:47:26.262 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 115 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 662 KiB/s rd, 3.7 MiB/s wr, 147 op/s
Nov 29 07:47:27 compute-0 ceph-mon[75050]: pgmap v1195: 305 pgs: 305 active+clean; 115 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 662 KiB/s rd, 3.7 MiB/s wr, 147 op/s
Nov 29 07:47:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 29 07:47:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 07:47:28 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 29 07:47:28 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:47:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3222977709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3222977709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:28 compute-0 sshd-session[266672]: Invalid user mcserver from 143.14.121.41 port 56782
Nov 29 07:47:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 2.0 MiB/s wr, 139 op/s
Nov 29 07:47:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:29 compute-0 sshd-session[266672]: Connection closed by invalid user mcserver 143.14.121.41 port 56782 [preauth]
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.118165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402449118231, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1444, "num_deletes": 265, "total_data_size": 2051252, "memory_usage": 2078896, "flush_reason": "Manual Compaction"}
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402449152367, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 2016042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18997, "largest_seqno": 20440, "table_properties": {"data_size": 2008884, "index_size": 4233, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14826, "raw_average_key_size": 20, "raw_value_size": 1994487, "raw_average_value_size": 2713, "num_data_blocks": 188, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402345, "oldest_key_time": 1764402345, "file_creation_time": 1764402449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 34276 microseconds, and 4984 cpu microseconds.
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.152441) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 2016042 bytes OK
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.152467) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.158335) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.158365) EVENT_LOG_v1 {"time_micros": 1764402449158357, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.158387) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2044527, prev total WAL file size 2044527, number of live WAL files 2.
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.159328) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1968KB)], [44(6804KB)]
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402449159410, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8984093, "oldest_snapshot_seqno": -1}
Nov 29 07:47:29 compute-0 nova_compute[256729]: 2025-11-29 07:47:29.691 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4657 keys, 8859421 bytes, temperature: kUnknown
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402449789211, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8859421, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8825722, "index_size": 20981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 115964, "raw_average_key_size": 24, "raw_value_size": 8738852, "raw_average_value_size": 1876, "num_data_blocks": 879, "num_entries": 4657, "num_filter_entries": 4657, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:47:29 compute-0 ceph-mon[75050]: osdmap e145: 3 total, 3 up, 3 in
Nov 29 07:47:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3222977709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3222977709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:29 compute-0 ceph-mon[75050]: pgmap v1197: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 2.0 MiB/s wr, 139 op/s
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.789520) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8859421 bytes
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.802450) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 14.3 rd, 14.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 6.6 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(8.9) write-amplify(4.4) OK, records in: 5195, records dropped: 538 output_compression: NoCompression
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.802487) EVENT_LOG_v1 {"time_micros": 1764402449802469, "job": 22, "event": "compaction_finished", "compaction_time_micros": 629892, "compaction_time_cpu_micros": 37234, "output_level": 6, "num_output_files": 1, "total_output_size": 8859421, "num_input_records": 5195, "num_output_records": 4657, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402449803459, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402449805982, "job": 22, "event": "table_file_deletion", "file_number": 44}
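The compaction summary above carries its own arithmetic: with 1.9 MB of L0 input, 6.6 MB of L6 input and 8.4 MB written out, write amplification is output over L0 input, and read-write amplification counts both inputs plus the output against the new L0 bytes. Rederived:

    mb_in_l0, mb_in_l6, mb_out = 1.9, 6.6, 8.4
    print(round(mb_out / mb_in_l0, 1))                          # 4.4 write-amplify
    print(round((mb_in_l0 + mb_in_l6 + mb_out) / mb_in_l0, 1))  # 8.9 read-write-amplify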
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.159174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.806123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.806133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.806138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.806142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:29 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:47:29.806146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:47:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 1.5 MiB/s wr, 105 op/s
Nov 29 07:47:31 compute-0 ceph-mon[75050]: pgmap v1198: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 1.5 MiB/s wr, 105 op/s
Nov 29 07:47:31 compute-0 nova_compute[256729]: 2025-11-29 07:47:31.265 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 29 07:47:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 07:47:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 29 07:47:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 140 KiB/s wr, 81 op/s
Nov 29 07:47:33 compute-0 ceph-mon[75050]: osdmap e146: 3 total, 3 up, 3 in
Nov 29 07:47:33 compute-0 ceph-mon[75050]: pgmap v1200: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 140 KiB/s wr, 81 op/s
Nov 29 07:47:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 29 07:47:34 compute-0 sshd-session[266674]: Invalid user adam from 143.14.121.41 port 56790
Nov 29 07:47:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 07:47:34 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 29 07:47:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 124 KiB/s wr, 48 op/s
Nov 29 07:47:34 compute-0 sshd-session[266674]: Connection closed by invalid user adam 143.14.121.41 port 56790 [preauth]
Nov 29 07:47:34 compute-0 nova_compute[256729]: 2025-11-29 07:47:34.693 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:34 compute-0 ovn_controller[153383]: 2025-11-29T07:47:34Z|00033|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 07:47:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:47:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:47:35 compute-0 ceph-mon[75050]: osdmap e147: 3 total, 3 up, 3 in
Nov 29 07:47:35 compute-0 ceph-mon[75050]: pgmap v1202: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 124 KiB/s wr, 48 op/s
Nov 29 07:47:36 compute-0 nova_compute[256729]: 2025-11-29 07:47:36.268 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 20 KiB/s wr, 31 op/s
Nov 29 07:47:37 compute-0 ceph-mon[75050]: pgmap v1203: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 20 KiB/s wr, 31 op/s
Nov 29 07:47:37 compute-0 sshd-session[266676]: Invalid user ts3 from 143.14.121.41 port 33630
Nov 29 07:47:38 compute-0 nova_compute[256729]: 2025-11-29 07:47:38.040 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:38 compute-0 nova_compute[256729]: 2025-11-29 07:47:38.041 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:38 compute-0 sshd-session[266676]: Connection closed by invalid user ts3 143.14.121.41 port 33630 [preauth]
Nov 29 07:47:38 compute-0 nova_compute[256729]: 2025-11-29 07:47:38.453 256736 DEBUG nova.objects.instance [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lazy-loading 'flavor' on Instance uuid 470f20d7-0c57-4067-a7ff-7f6b0971ad23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:38 compute-0 nova_compute[256729]: 2025-11-29 07:47:38.539 256736 INFO nova.virt.libvirt.driver [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Ignoring supplied device name: /dev/vdb
Nov 29 07:47:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 32 op/s
Nov 29 07:47:38 compute-0 nova_compute[256729]: 2025-11-29 07:47:38.617 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 29 07:47:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 07:47:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 29 07:47:38 compute-0 ceph-mon[75050]: pgmap v1204: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 32 op/s
Nov 29 07:47:38 compute-0 ceph-mon[75050]: osdmap e148: 3 total, 3 up, 3 in
Nov 29 07:47:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 29 07:47:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 07:47:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 29 07:47:39 compute-0 nova_compute[256729]: 2025-11-29 07:47:39.696 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:39 compute-0 nova_compute[256729]: 2025-11-29 07:47:39.919 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:39 compute-0 nova_compute[256729]: 2025-11-29 07:47:39.920 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:39 compute-0 nova_compute[256729]: 2025-11-29 07:47:39.920 256736 INFO nova.compute.manager [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Attaching volume 5ad58a37-fcad-47dc-82b6-d266b6409b30 to /dev/vdb
Nov 29 07:47:40 compute-0 nova_compute[256729]: 2025-11-29 07:47:40.314 256736 DEBUG os_brick.utils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:47:40 compute-0 nova_compute[256729]: 2025-11-29 07:47:40.316 256736 INFO oslo.privsep.daemon [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp2b17b584/privsep.sock']
Nov 29 07:47:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 813 B/s wr, 16 op/s
Nov 29 07:47:40 compute-0 podman[266686]: 2025-11-29 07:47:40.745550579 +0000 UTC m=+0.094092745 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 07:47:40 compute-0 ceph-mon[75050]: osdmap e149: 3 total, 3 up, 3 in
Nov 29 07:47:40 compute-0 podman[266685]: 2025-11-29 07:47:40.763501327 +0000 UTC m=+0.111469987 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:47:40 compute-0 podman[266684]: 2025-11-29 07:47:40.785210499 +0000 UTC m=+0.131255737 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.072 256736 INFO oslo.privsep.daemon [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:40.910 266745 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:40.915 266745 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:40.917 266745 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:40.917 266745 INFO oslo.privsep.daemon [-] privsep daemon running as pid 266745
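The privsep lines above show the standard oslo.privsep startup: the unprivileged service spawns a root helper via rootwrap, and the helper keeps only CAP_SYS_ADMIN while serving calls over a unix socket. A sketch of the pattern; the context name, config section and function are illustrative, and the PrivContext arguments may vary by release:

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        __name__,
        cfg_section="privsep",
        pypath=__name__ + ".default",
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def read_initiator_name():
        # Body runs inside the privsep daemon (uid 0), like the
        # "cat /etc/iscsi/initiatorname.iscsi" call logged below.
        with open("/etc/iscsi/initiatorname.iscsi") as f:
            return f.read().strip()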
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.076 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[a406789c-064b-4ae7-bdd7-048e1a9ff25c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.184 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.202 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.202 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[6161ee1c-1200-49ef-aee6-e8eb822394c9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.203 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.209 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.210 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[6a59056c-91c4-4436-8bea-fa5db0938e07]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.211 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.224 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.225 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[eb799210-d669-46e7-b696-1ad96f077044]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.227 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f0d967-48db-42d3-a871-45b188d26574]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.227 256736 DEBUG oslo_concurrency.processutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.253 256736 DEBUG oslo_concurrency.processutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.257 256736 DEBUG os_brick.initiator.connectors.lightos [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.258 256736 DEBUG os_brick.initiator.connectors.lightos [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.258 256736 DEBUG os_brick.initiator.connectors.lightos [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.259 256736 DEBUG os_brick.utils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] <== get_connector_properties: return (943ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.259 256736 DEBUG nova.virt.block_device [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updating existing volume attachment record: 988c994a-6e12-4892-9f8e-8daecb49c8e3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:47:41 compute-0 nova_compute[256729]: 2025-11-29 07:47:41.270 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:41 compute-0 ceph-mon[75050]: pgmap v1207: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 813 B/s wr, 16 op/s
Nov 29 07:47:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3909815838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3909815838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:42 compute-0 sshd-session[266678]: Invalid user test from 143.14.121.41 port 33642
Nov 29 07:47:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Nov 29 07:47:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:47:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1011565696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:42 compute-0 sshd-session[266678]: Connection closed by invalid user test 143.14.121.41 port 33642 [preauth]
Nov 29 07:47:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3909815838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3909815838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1011565696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.084 256736 DEBUG os_brick.encryptors [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Using volume encryption metadata '{'encryption_key_id': '7a3375ad-612c-4827-8275-69605e224c79', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5ad58a37-fcad-47dc-82b6-d266b6409b30', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '470f20d7-0c57-4067-a7ff-7f6b0971ad23', 'attached_at': '', 'detached_at': '', 'volume_id': '5ad58a37-fcad-47dc-82b6-d266b6409b30', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.087 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.087 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.089 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.101 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.126 256736 DEBUG barbicanclient.v1.secrets [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/7a3375ad-612c-4827-8275-69605e224c79 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.127 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.160 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.161 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.189 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.189 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.219 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.219 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.254 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.255 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.300 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.301 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.326 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.326 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.351 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.352 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.374 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.375 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.397 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.398 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.437 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.437 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.459 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.460 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.477 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.477 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.497 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.498 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.530 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.530 256736 INFO barbicanclient.base [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Calculated Secrets uuid ref: secrets/7a3375ad-612c-4827-8275-69605e224c79
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.566 256736 DEBUG barbicanclient.client [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.567 256736 DEBUG nova.virt.libvirt.host [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 07:47:43 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 07:47:43 compute-0 nova_compute[256729]:     <volume>5ad58a37-fcad-47dc-82b6-d266b6409b30</volume>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   </usage>
Nov 29 07:47:43 compute-0 nova_compute[256729]: </secret>
Nov 29 07:47:43 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.582 256736 DEBUG nova.objects.instance [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lazy-loading 'flavor' on Instance uuid 470f20d7-0c57-4067-a7ff-7f6b0971ad23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.609 256736 DEBUG nova.virt.libvirt.driver [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Attempting to attach volume 5ad58a37-fcad-47dc-82b6-d266b6409b30 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:47:43 compute-0 nova_compute[256729]: 2025-11-29 07:47:43.615 256736 DEBUG nova.virt.libvirt.guest [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:47:43 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30">
Nov 29 07:47:43 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   </source>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:47:43 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   <serial>5ad58a37-fcad-47dc-82b6-d266b6409b30</serial>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 07:47:43 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="e6bf0989-fc0f-405d-b396-285e39402820"/>
Nov 29 07:47:43 compute-0 nova_compute[256729]:   </encryption>
Nov 29 07:47:43 compute-0 nova_compute[256729]: </disk>
Nov 29 07:47:43 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:47:43 compute-0 ceph-mon[75050]: pgmap v1208: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 25 op/s
Nov 29 07:47:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.6 KiB/s wr, 16 op/s
Nov 29 07:47:44 compute-0 nova_compute[256729]: 2025-11-29 07:47:44.699 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:45 compute-0 nova_compute[256729]: 2025-11-29 07:47:45.142 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:45 compute-0 ceph-mon[75050]: pgmap v1209: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.6 KiB/s wr, 16 op/s
Nov 29 07:47:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1873590565' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1873590565' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:46 compute-0 nova_compute[256729]: 2025-11-29 07:47:46.140 256736 DEBUG nova.virt.libvirt.driver [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:47:46 compute-0 nova_compute[256729]: 2025-11-29 07:47:46.140 256736 DEBUG nova.virt.libvirt.driver [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:47:46 compute-0 nova_compute[256729]: 2025-11-29 07:47:46.141 256736 DEBUG nova.virt.libvirt.driver [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:47:46 compute-0 nova_compute[256729]: 2025-11-29 07:47:46.141 256736 DEBUG nova.virt.libvirt.driver [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] No VIF found with MAC fa:16:3e:08:90:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:47:46 compute-0 nova_compute[256729]: 2025-11-29 07:47:46.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:46 compute-0 nova_compute[256729]: 2025-11-29 07:47:46.271 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.6 KiB/s wr, 29 op/s
Nov 29 07:47:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1873590565' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1873590565' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:47 compute-0 sshd-session[266754]: Connection closed by authenticating user root 143.14.121.41 port 37956 [preauth]
Nov 29 07:47:47 compute-0 nova_compute[256729]: 2025-11-29 07:47:47.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:47 compute-0 nova_compute[256729]: 2025-11-29 07:47:47.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:47 compute-0 nova_compute[256729]: 2025-11-29 07:47:47.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:47 compute-0 nova_compute[256729]: 2025-11-29 07:47:47.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:47:47 compute-0 ceph-mon[75050]: pgmap v1210: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.6 KiB/s wr, 29 op/s
Nov 29 07:47:47 compute-0 nova_compute[256729]: 2025-11-29 07:47:47.927 256736 DEBUG oslo_concurrency.lockutils [None req-69c153cc-dbb9-4f1e-9ea3-bfb146e78cba a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 8.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.1 KiB/s wr, 27 op/s
Nov 29 07:47:48 compute-0 ceph-mon[75050]: pgmap v1211: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.1 KiB/s wr, 27 op/s
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 29 07:47:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 07:47:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.492 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.493 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.493 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.494 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.494 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.703 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:47:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/66238583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:49 compute-0 nova_compute[256729]: 2025-11-29 07:47:49.979 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:50 compute-0 sshd-session[266776]: Connection closed by authenticating user root 143.14.121.41 port 37970 [preauth]
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.178 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.180 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.180 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:47:50 compute-0 ceph-mon[75050]: osdmap e150: 3 total, 3 up, 3 in
Nov 29 07:47:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/66238583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.334 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.335 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4632MB free_disk=59.94269943237305GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.335 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.336 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.433 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.434 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.434 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:47:50 compute-0 nova_compute[256729]: 2025-11-29 07:47:50.479 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:47:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 26 op/s
Nov 29 07:47:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:47:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114883525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.011 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.022 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.050 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.082 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.083 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.272 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.284 256736 DEBUG nova.compute.manager [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event volume-extended-5ad58a37-fcad-47dc-82b6-d266b6409b30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.309 256736 DEBUG nova.compute.manager [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Handling volume-extended event for volume 5ad58a37-fcad-47dc-82b6-d266b6409b30 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.337 256736 INFO nova.compute.manager [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Cinder extended volume 5ad58a37-fcad-47dc-82b6-d266b6409b30; extending it to detect new size
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.873 256736 DEBUG os_brick.encryptors [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] Using volume encryption metadata '{'encryption_key_id': '7a3375ad-612c-4827-8275-69605e224c79', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5ad58a37-fcad-47dc-82b6-d266b6409b30', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '470f20d7-0c57-4067-a7ff-7f6b0971ad23', 'attached_at': '', 'detached_at': '', 'volume_id': '5ad58a37-fcad-47dc-82b6-d266b6409b30', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 07:47:51 compute-0 nova_compute[256729]: 2025-11-29 07:47:51.875 256736 INFO oslo.privsep.daemon [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmparx_pv8r/privsep.sock']
Nov 29 07:47:52 compute-0 ceph-mon[75050]: pgmap v1213: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 26 op/s
Nov 29 07:47:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3114883525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.083 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.084 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.084 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:47:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 KiB/s wr, 18 op/s
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.558 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.561 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.561 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.574 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 470f20d7-0c57-4067-a7ff-7f6b0971ad23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.597 256736 INFO oslo.privsep.daemon [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] Spawned new privsep daemon via rootwrap
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.486 266829 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.489 266829 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.491 266829 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 07:47:52 compute-0 nova_compute[256729]: 2025-11-29 07:47:52.491 266829 INFO oslo.privsep.daemon [-] privsep daemon running as pid 266829
Nov 29 07:47:52 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 29 07:47:52 compute-0 systemd[1]: Started Process Core Dump (PID 266850/UID 0).
Nov 29 07:47:53 compute-0 ceph-mon[75050]: pgmap v1214: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 KiB/s wr, 18 op/s
Nov 29 07:47:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:54 compute-0 systemd-coredump[266851]: Process 266831 (qemu-img) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 266841:
                                                    #0  0x00007f8ef59a703c __pthread_kill_implementation (libc.so.6 + 0x8d03c)
                                                    #1  0x00007f8ef5959b86 raise (libc.so.6 + 0x3fb86)
                                                    #2  0x00007f8ef5943873 abort (libc.so.6 + 0x29873)
                                                    #3  0x00005648e4ddd5df ___interceptor_pthread_create (qemu-img + 0x4f5df)
                                                    #4  0x00007f8ef2b7dff4 _ZN6Thread10try_createEm (libceph-common.so.2 + 0x258ff4)
                                                    #5  0x00007f8ef2b806ae _ZN6Thread6createEPKcm (libceph-common.so.2 + 0x25b6ae)
                                                    #6  0x00007f8ef3a8726b _ZNSt8_Rb_treeISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt10type_indexES0_IKS8_N4ceph12immobile_anyILm576EEEESt10_Select1stISD_ENSA_6common11CephContext19associated_objs_cmpESaISD_EE22_M_emplace_hint_uniqueIJRKSt21piecewise_construct_tSt5tupleIJRSt17basic_string_viewIcS4_ERS7_EESP_IJRKSt15in_place_type_tIN6librbd21TaskFinisherSingletonEERPSH_EEEEESt17_Rb_tree_iteratorISD_ESt23_Rb_tree_const_iteratorISD_EDpOT_.constprop.0 (librbd.so.1 + 0x51126b)
                                                    #7  0x00007f8ef36b47a6 _ZN6librbd8ImageCtx4initEv (librbd.so.1 + 0x13e7a6)
                                                    #8  0x00007f8ef378e2d3 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE12send_refreshEv (librbd.so.1 + 0x2182d3)
                                                    #9  0x00007f8ef378ef46 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE23handle_v2_get_data_poolEPi (librbd.so.1 + 0x218f46)
                                                    #10 0x00007f8ef378f2a7 _ZN6librbd4util6detail20rados_state_callbackINS_5image11OpenRequestINS_8ImageCtxEEEXadL_ZNS6_23handle_v2_get_data_poolEPiEELb1EEEvPvS8_ (librbd.so.1 + 0x2192a7)
                                                    #11 0x00007f8ef348d0ac _ZN5boost4asio6detail18completion_handlerINS1_7binder0IN8librados14CB_AioCompleteEEENS0_10io_context19basic_executor_typeISaIvELm0EEEE11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xad0ac)
                                                    #12 0x00007f8ef348c585 _ZN5boost4asio6detail14strand_service11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xac585)
                                                    #13 0x00007f8ef3507498 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127498)
                                                    #14 0x00007f8ef34a64e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #15 0x00007f8ef2214ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #16 0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #17 0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266833:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef220e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f8ef2d900a2 _ZN4ceph7logging3Log5entryEv (libceph-common.so.2 + 0x46b0a2)
                                                    #4  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266831:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef220e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f8ef36bbeb3 _ZN6librbd10ImageStateINS_8ImageCtxEE4openEm (librbd.so.1 + 0x145eb3)
                                                    #4  0x00007f8ef368bfcb rbd_open (librbd.so.1 + 0x115fcb)
                                                    #5  0x00007f8ef3c3689d qemu_rbd_open (block-rbd.so + 0x489d)
                                                    #6  0x00005648e4dee25c bdrv_open_driver.llvm.1535778247189356743 (qemu-img + 0x6025c)
                                                    #7  0x00005648e4df34b7 bdrv_open_inherit.llvm.1535778247189356743 (qemu-img + 0x654b7)
                                                    #8  0x00005648e4e00de1 bdrv_open_child_bs.llvm.1535778247189356743 (qemu-img + 0x72de1)
                                                    #9  0x00005648e4df2c36 bdrv_open_inherit.llvm.1535778247189356743 (qemu-img + 0x64c36)
                                                    #10 0x00005648e4e224b3 blk_new_open (qemu-img + 0x944b3)
                                                    #11 0x00005648e4ee2516 img_open_file (qemu-img + 0x154516)
                                                    #12 0x00005648e4ee20c0 img_open (qemu-img + 0x1540c0)
                                                    #13 0x00005648e4ede03b img_info (qemu-img + 0x15003b)
                                                    #14 0x00005648e4ed76ca main (qemu-img + 0x1496ca)
                                                    #15 0x00007f8ef5944610 __libc_start_call_main (libc.so.6 + 0x2a610)
                                                    #16 0x00007f8ef59446c0 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2a6c0)
                                                    #17 0x00005648e4ddd285 _start (qemu-img + 0x4f285)
                                                    
                                                    Stack trace of thread 266832:
                                                    #0  0x00007f8ef5a2282d syscall (libc.so.6 + 0x10882d)
                                                    #1  0x00005648e4f68193 qemu_event_wait (qemu-img + 0x1da193)
                                                    #2  0x00005648e4f732e7 call_rcu_thread (qemu-img + 0x1e52e7)
                                                    #3  0x00005648e4f662aa qemu_thread_start.llvm.12875871551448449403 (qemu-img + 0x1d82aa)
                                                    #4  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266840:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a4cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f8ef2b9e150 _ZN4ceph6common24CephContextServiceThread5entryEv (libceph-common.so.2 + 0x279150)
                                                    #3  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #4  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266835:
                                                    #0  0x00007f8ef5a29a3e epoll_wait (libc.so.6 + 0x10fa3e)
                                                    #1  0x00007f8ef2d65618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f8ef2d63702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f8ef2d642c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f8ef2214ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266847:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef220e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f8ef2b837f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f8ef2b83f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266848:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef220e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f8ef2b837f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f8ef2b83f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266836:
                                                    #0  0x00007f8ef5a29a3e epoll_wait (libc.so.6 + 0x10fa3e)
                                                    #1  0x00007f8ef2d65618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f8ef2d63702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f8ef2d642c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f8ef2214ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266845:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef220e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f8ef2c8c0b9 _ZN13DispatchQueue18run_local_deliveryEv (libceph-common.so.2 + 0x3670b9)
                                                    #4  0x00007f8ef2d1d431 _ZN13DispatchQueue19LocalDeliveryThread5entryEv (libceph-common.so.2 + 0x3f8431)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266842:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef3507266 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127266)
                                                    #3  0x00007f8ef34a64e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #4  0x00007f8ef2214ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266844:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef220e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f8ef2c8c49f _ZN13DispatchQueue5entryEv (libceph-common.so.2 + 0x36749f)
                                                    #4  0x00007f8ef2d1d411 _ZN13DispatchQueue14DispatchThread5entryEv (libceph-common.so.2 + 0x3f8411)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266846:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a4cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f8ef2b83b23 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25eb23)
                                                    #3  0x00007f8ef2b83f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #4  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266849:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f8ef220e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f8ef2b837f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f8ef2b83f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266843:
                                                    #0  0x00007f8ef59a238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f8ef59a4cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f8ef34df364 _ZN4ceph5timerINS_17coarse_mono_clockEE12timer_threadEv (librados.so.2 + 0xff364)
                                                    #3  0x00007f8ef2214ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #4  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 266834:
                                                    #0  0x00007f8ef5a29a3e epoll_wait (libc.so.6 + 0x10fa3e)
                                                    #1  0x00007f8ef2d65618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f8ef2d63702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f8ef2d642c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f8ef2214ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f8ef59a52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f8ef5a2a400 __clone3 (libc.so.6 + 0x110400)
                                                    ELF object binary architecture: AMD x86-64
Nov 29 07:47:54 compute-0 sshd-session[266801]: Connection closed by authenticating user root 143.14.121.41 port 37976 [preauth]
Nov 29 07:47:54 compute-0 systemd[1]: systemd-coredump@0-266850-0.service: Deactivated successfully.
Nov 29 07:47:54 compute-0 systemd[1]: systemd-coredump@0-266850-0.service: Consumed 1.436s CPU time.
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Unknown error when attempting to find the payload_offset for LUKSv1 encrypted disk rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack.: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack : Unexpected error while running command.
Nov 29 07:47:54 compute-0 nova_compute[256729]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack --force-share --output=json
Nov 29 07:47:54 compute-0 nova_compute[256729]: Exit code: -6
Nov 29 07:47:54 compute-0 nova_compute[256729]: Stdout: ''
Nov 29 07:47:54 compute-0 nova_compute[256729]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Traceback (most recent call last):
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]     info = images.privileged_qemu_img_info(path)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]     return self.channel.remote_call(name, args, kwargs,
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23]     raise exc_type(*result[2])
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack : Unexpected error while running command.
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack --force-share --output=json
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Exit code: -6
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Stdout: ''
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.423 256736 ERROR nova.virt.libvirt.driver [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] 
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.427 256736 WARNING nova.compute.manager [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Extend volume failed, volume_id=5ad58a37-fcad-47dc-82b6-d266b6409b30, reason: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack : Unexpected error while running command.
Nov 29 07:47:54 compute-0 nova_compute[256729]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack --force-share --output=json
Nov 29 07:47:54 compute-0 nova_compute[256729]: Exit code: -6
Nov 29 07:47:54 compute-0 nova_compute[256729]: Stdout: ''
Nov 29 07:47:54 compute-0 nova_compute[256729]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n': nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack : Unexpected error while running command.
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server [req-8d353866-77c4-42aa-9f25-5dd4dc3507d8 req-98a74da9-edb3-40fe-bd16-113f36cbdb61 ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] Exception during message handling: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack : Unexpected error while running command.
Nov 29 07:47:54 compute-0 nova_compute[256729]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack --force-share --output=json
Nov 29 07:47:54 compute-0 nova_compute[256729]: Exit code: -6
Nov 29 07:47:54 compute-0 nova_compute[256729]: Stdout: ''
Nov 29 07:47:54 compute-0 nova_compute[256729]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11073, in external_instance_event
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10930, in extend_volume
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2865, in extend_volume
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     self._resize_attached_encrypted_volume(
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2804, in _resize_attached_encrypted_volume
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     LOG.exception('Unknown error when attempting to find the '
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     info = images.privileged_qemu_img_info(path)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs,
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack : Unexpected error while running command.
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30:id=openstack --force-share --output=json
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server Exit code: -6
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server Stdout: ''
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-21.1.3.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.522 256736 ERROR oslo_messaging.rpc.server 
Nov 29 07:47:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 511 B/s wr, 21 op/s
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.797 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updating instance_info_cache with network_info: [{"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:47:54 compute-0 nova_compute[256729]: 2025-11-29 07:47:54.798 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:55 compute-0 nova_compute[256729]: 2025-11-29 07:47:55.156 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-470f20d7-0c57-4067-a7ff-7f6b0971ad23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:47:55 compute-0 nova_compute[256729]: 2025-11-29 07:47:55.157 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:47:55 compute-0 nova_compute[256729]: 2025-11-29 07:47:55.158 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:47:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4189437256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4189437256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:55 compute-0 ceph-mon[75050]: pgmap v1215: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 511 B/s wr, 21 op/s
Nov 29 07:47:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4189437256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4189437256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.187 256736 DEBUG oslo_concurrency.lockutils [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.187 256736 DEBUG oslo_concurrency.lockutils [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.219 256736 INFO nova.compute.manager [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Detaching volume 5ad58a37-fcad-47dc-82b6-d266b6409b30
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.273 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.382 256736 INFO nova.virt.block_device [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Attempting to driver detach volume 5ad58a37-fcad-47dc-82b6-d266b6409b30 from mountpoint /dev/vdb
Nov 29 07:47:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 614 B/s wr, 9 op/s
Nov 29 07:47:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:47:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429483934' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:47:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429483934' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.790 256736 DEBUG os_brick.encryptors [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Using volume encryption metadata '{'encryption_key_id': '7a3375ad-612c-4827-8275-69605e224c79', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5ad58a37-fcad-47dc-82b6-d266b6409b30', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '470f20d7-0c57-4067-a7ff-7f6b0971ad23', 'attached_at': '', 'detached_at': '', 'volume_id': '5ad58a37-fcad-47dc-82b6-d266b6409b30', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.798 256736 DEBUG nova.virt.libvirt.driver [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Attempting to detach device vdb from instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.798 256736 DEBUG nova.virt.libvirt.guest [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30">
Nov 29 07:47:56 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   </source>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <serial>5ad58a37-fcad-47dc-82b6-d266b6409b30</serial>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 07:47:56 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="e6bf0989-fc0f-405d-b396-285e39402820"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   </encryption>
Nov 29 07:47:56 compute-0 nova_compute[256729]: </disk>
Nov 29 07:47:56 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.807 256736 INFO nova.virt.libvirt.driver [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Successfully detached device vdb from instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23 from the persistent domain config.
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.808 256736 DEBUG nova.virt.libvirt.driver [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.809 256736 DEBUG nova.virt.libvirt.guest [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-5ad58a37-fcad-47dc-82b6-d266b6409b30">
Nov 29 07:47:56 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   </source>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <serial>5ad58a37-fcad-47dc-82b6-d266b6409b30</serial>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 07:47:56 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="e6bf0989-fc0f-405d-b396-285e39402820"/>
Nov 29 07:47:56 compute-0 nova_compute[256729]:   </encryption>
Nov 29 07:47:56 compute-0 nova_compute[256729]: </disk>
Nov 29 07:47:56 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.858 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764402476.8579247, 470f20d7-0c57-4067-a7ff-7f6b0971ad23 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.860 256736 DEBUG nova.virt.libvirt.driver [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:47:56 compute-0 nova_compute[256729]: 2025-11-29 07:47:56.863 256736 INFO nova.virt.libvirt.driver [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Successfully detached device vdb from instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23 from the live domain config.
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.170 256736 DEBUG nova.objects.instance [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lazy-loading 'flavor' on Instance uuid 470f20d7-0c57-4067-a7ff-7f6b0971ad23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.233 256736 DEBUG oslo_concurrency.lockutils [None req-d11b9ae9-4453-48b7-abbe-04bbec0b947a a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.585 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.586 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.586 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.586 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.586 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.587 256736 INFO nova.compute.manager [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Terminating instance
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.588 256736 DEBUG nova.compute.manager [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:47:57 compute-0 kernel: tapcf01fba8-1c (unregistering): left promiscuous mode
Nov 29 07:47:57 compute-0 NetworkManager[48962]: <info>  [1764402477.6304] device (tapcf01fba8-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00034|binding|INFO|Releasing lport cf01fba8-1ce4-4048-8b70-76060249d02d from this chassis (sb_readonly=0)
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00035|binding|INFO|Setting lport cf01fba8-1ce4-4048-8b70-76060249d02d down in Southbound
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.638 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00036|binding|INFO|Removing iface tapcf01fba8-1c ovn-installed in OVS
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.640 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.646 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:90:9b 10.100.0.11'], port_security=['fa:16:3e:08:90:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '470f20d7-0c57-4067-a7ff-7f6b0971ad23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7b8fec760c4dfeabbf878615dc25ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '154de6d7-629c-4424-ada2-df33289aad97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14e16434-16e3-4beb-afc5-e4129659b07b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=cf01fba8-1ce4-4048-8b70-76060249d02d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.647 163655 INFO neutron.agent.ovn.metadata.agent [-] Port cf01fba8-1ce4-4048-8b70-76060249d02d in datapath a027e4c7-144b-44ef-882c-4c6ddedeae6f unbound from our chassis
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.648 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a027e4c7-144b-44ef-882c-4c6ddedeae6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.649 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b2da242f-acea-4290-9916-0d70d777d049]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.650 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f namespace which is not needed anymore
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.660 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 ceph-mon[75050]: pgmap v1216: 305 pgs: 305 active+clean; 121 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 614 B/s wr, 9 op/s
Nov 29 07:47:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3429483934' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:47:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3429483934' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:47:57 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 29 07:47:57 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 18.438s CPU time.
Nov 29 07:47:57 compute-0 systemd-machined[217781]: Machine qemu-1-instance-00000001 terminated.
Nov 29 07:47:57 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [NOTICE]   (266588) : haproxy version is 2.8.14-c23fe91
Nov 29 07:47:57 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [NOTICE]   (266588) : path to executable is /usr/sbin/haproxy
Nov 29 07:47:57 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [WARNING]  (266588) : Exiting Master process...
Nov 29 07:47:57 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [WARNING]  (266588) : Exiting Master process...
Nov 29 07:47:57 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [ALERT]    (266588) : Current worker (266590) exited with code 143 (Terminated)
Nov 29 07:47:57 compute-0 neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f[266584]: [WARNING]  (266588) : All workers exited. Exiting... (0)
Nov 29 07:47:57 compute-0 systemd[1]: libpod-b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f.scope: Deactivated successfully.
Nov 29 07:47:57 compute-0 podman[266887]: 2025-11-29 07:47:57.801181307 +0000 UTC m=+0.048716469 container died b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 07:47:57 compute-0 kernel: tapcf01fba8-1c: entered promiscuous mode
Nov 29 07:47:57 compute-0 kernel: tapcf01fba8-1c (unregistering): left promiscuous mode
Nov 29 07:47:57 compute-0 NetworkManager[48962]: <info>  [1764402477.8105] manager: (tapcf01fba8-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.811 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00037|binding|INFO|Claiming lport cf01fba8-1ce4-4048-8b70-76060249d02d for this chassis.
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00038|binding|INFO|cf01fba8-1ce4-4048-8b70-76060249d02d: Claiming fa:16:3e:08:90:9b 10.100.0.11
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.823 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:90:9b 10.100.0.11'], port_security=['fa:16:3e:08:90:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '470f20d7-0c57-4067-a7ff-7f6b0971ad23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7b8fec760c4dfeabbf878615dc25ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '154de6d7-629c-4424-ada2-df33289aad97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14e16434-16e3-4beb-afc5-e4129659b07b, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=cf01fba8-1ce4-4048-8b70-76060249d02d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f-userdata-shm.mount: Deactivated successfully.
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00039|binding|INFO|Setting lport cf01fba8-1ce4-4048-8b70-76060249d02d ovn-installed in OVS
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00040|binding|INFO|Setting lport cf01fba8-1ce4-4048-8b70-76060249d02d up in Southbound
Nov 29 07:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-222c00220a79864af282d15534f5b17945b4a92f766ab93ce1153c4619605736-merged.mount: Deactivated successfully.
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.878 256736 DEBUG nova.compute.manager [req-f24aec0d-c097-4abe-b54f-0f667cd14ac3 req-b1b7ea60-5e8e-4266-bad0-2342b6717424 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-unplugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.878 256736 DEBUG oslo_concurrency.lockutils [req-f24aec0d-c097-4abe-b54f-0f667cd14ac3 req-b1b7ea60-5e8e-4266-bad0-2342b6717424 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.878 256736 DEBUG oslo_concurrency.lockutils [req-f24aec0d-c097-4abe-b54f-0f667cd14ac3 req-b1b7ea60-5e8e-4266-bad0-2342b6717424 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.879 256736 DEBUG oslo_concurrency.lockutils [req-f24aec0d-c097-4abe-b54f-0f667cd14ac3 req-b1b7ea60-5e8e-4266-bad0-2342b6717424 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.879 256736 DEBUG nova.compute.manager [req-f24aec0d-c097-4abe-b54f-0f667cd14ac3 req-b1b7ea60-5e8e-4266-bad0-2342b6717424 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] No waiting events found dispatching network-vif-unplugged-cf01fba8-1ce4-4048-8b70-76060249d02d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.879 256736 DEBUG nova.compute.manager [req-f24aec0d-c097-4abe-b54f-0f667cd14ac3 req-b1b7ea60-5e8e-4266-bad0-2342b6717424 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-unplugged-cf01fba8-1ce4-4048-8b70-76060249d02d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.886 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00041|binding|INFO|Releasing lport cf01fba8-1ce4-4048-8b70-76060249d02d from this chassis (sb_readonly=0)
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00042|binding|INFO|Setting lport cf01fba8-1ce4-4048-8b70-76060249d02d down in Southbound
Nov 29 07:47:57 compute-0 ovn_controller[153383]: 2025-11-29T07:47:57Z|00043|binding|INFO|Removing iface tapcf01fba8-1c ovn-installed in OVS
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.889 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.890 256736 INFO nova.virt.libvirt.driver [-] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Instance destroyed successfully.
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.890 256736 DEBUG nova.objects.instance [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lazy-loading 'resources' on Instance uuid 470f20d7-0c57-4067-a7ff-7f6b0971ad23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.896 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:90:9b 10.100.0.11'], port_security=['fa:16:3e:08:90:9b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '470f20d7-0c57-4067-a7ff-7f6b0971ad23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed7b8fec760c4dfeabbf878615dc25ec', 'neutron:revision_number': '5', 'neutron:security_group_ids': '154de6d7-629c-4424-ada2-df33289aad97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14e16434-16e3-4beb-afc5-e4129659b07b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=cf01fba8-1ce4-4048-8b70-76060249d02d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:47:57 compute-0 podman[266887]: 2025-11-29 07:47:57.899909718 +0000 UTC m=+0.147444900 container cleanup b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.904 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 systemd[1]: libpod-conmon-b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f.scope: Deactivated successfully.
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.911 256736 DEBUG nova.virt.libvirt.vif [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:46:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1543412800',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1543412800',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1543412800',id=1,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPWpd+BDpiEvsb+/Y7B4qemwFzbHqOHZXcqLb3Lc82301t4mUHmYZZ6kFaiNduZ2VKKfDBVWcULnlQXy+O4iuVoSPVyYZy38PgEwdp/PE9meJZz5C2NLzf3taJFY/Vnc4A==',key_name='tempest-keypair-913396319',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:47:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed7b8fec760c4dfeabbf878615dc25ec',ramdisk_id='',reservation_id='r-bj5gu0hr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-1640910800',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-1640910800-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:47:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a53522f9f2b14db5b3b2ead64c730558',uuid=470f20d7-0c57-4067-a7ff-7f6b0971ad23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.911 256736 DEBUG nova.network.os_vif_util [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Converting VIF {"id": "cf01fba8-1ce4-4048-8b70-76060249d02d", "address": "fa:16:3e:08:90:9b", "network": {"id": "a027e4c7-144b-44ef-882c-4c6ddedeae6f", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-551807572-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed7b8fec760c4dfeabbf878615dc25ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf01fba8-1c", "ovs_interfaceid": "cf01fba8-1ce4-4048-8b70-76060249d02d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.912 256736 DEBUG nova.network.os_vif_util [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:90:9b,bridge_name='br-int',has_traffic_filtering=True,id=cf01fba8-1ce4-4048-8b70-76060249d02d,network=Network(a027e4c7-144b-44ef-882c-4c6ddedeae6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf01fba8-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.913 256736 DEBUG os_vif [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:90:9b,bridge_name='br-int',has_traffic_filtering=True,id=cf01fba8-1ce4-4048-8b70-76060249d02d,network=Network(a027e4c7-144b-44ef-882c-4c6ddedeae6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf01fba8-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.915 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.915 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf01fba8-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.917 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.918 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.921 256736 INFO os_vif [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:90:9b,bridge_name='br-int',has_traffic_filtering=True,id=cf01fba8-1ce4-4048-8b70-76060249d02d,network=Network(a027e4c7-144b-44ef-882c-4c6ddedeae6f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf01fba8-1c')
Nov 29 07:47:57 compute-0 podman[266923]: 2025-11-29 07:47:57.970735228 +0000 UTC m=+0.047866006 container remove b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.976 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[eb84e6ce-f44e-4080-b36b-680ea8190d53]: (4, ('Sat Nov 29 07:47:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f (b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f)\nb4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f\nSat Nov 29 07:47:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f (b4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f)\nb4e42d7d2fae1ec94ebac01cc8a57bfd0455c2e2a14e380cc738223743a2517f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.978 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0057e9a9-b5cd-4dcb-9aa3-a726627638a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.979 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa027e4c7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.980 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 kernel: tapa027e4c7-10: left promiscuous mode
Nov 29 07:47:57 compute-0 nova_compute[256729]: 2025-11-29 07:47:57.994 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:47:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:57.996 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[78e7eec3-e745-484e-bf35-76a20f40acdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.012 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[60051d6f-9a3a-4d3c-86d4-139a81915663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.014 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[594f6f1f-5a64-49db-a24b-1657d8a4e088]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.028 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[49ce597a-9337-43cf-850a-8e4084820a8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473859, 'reachable_time': 42008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266957, 'error': None, 'target': 'ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:58 compute-0 systemd[1]: run-netns-ovnmeta\x2da027e4c7\x2d144b\x2d44ef\x2d882c\x2d4c6ddedeae6f.mount: Deactivated successfully.
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.041 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a027e4c7-144b-44ef-882c-4c6ddedeae6f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.043 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[7641d674-b820-467d-ad53-3cac3fbf1bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.044 163655 INFO neutron.agent.ovn.metadata.agent [-] Port cf01fba8-1ce4-4048-8b70-76060249d02d in datapath a027e4c7-144b-44ef-882c-4c6ddedeae6f unbound from our chassis
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.045 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a027e4c7-144b-44ef-882c-4c6ddedeae6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.046 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7def6a2b-08e3-463c-b7d2-f5390bbe1dc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.047 163655 INFO neutron.agent.ovn.metadata.agent [-] Port cf01fba8-1ce4-4048-8b70-76060249d02d in datapath a027e4c7-144b-44ef-882c-4c6ddedeae6f unbound from our chassis
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.047 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a027e4c7-144b-44ef-882c-4c6ddedeae6f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:47:58 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:58.048 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e52218a5-093b-4918-a45f-a0b63c299c91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.260 256736 INFO nova.virt.libvirt.driver [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Deleting instance files /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23_del
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.261 256736 INFO nova.virt.libvirt.driver [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Deletion of /var/lib/nova/instances/470f20d7-0c57-4067-a7ff-7f6b0971ad23_del complete
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.355 256736 DEBUG nova.virt.libvirt.host [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.355 256736 INFO nova.virt.libvirt.host [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] UEFI support detected
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.357 256736 INFO nova.compute.manager [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Took 0.77 seconds to destroy the instance on the hypervisor.
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.358 256736 DEBUG oslo.service.loopingcall [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.358 256736 DEBUG nova.compute.manager [-] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:47:58 compute-0 nova_compute[256729]: 2025-11-29 07:47:58.359 256736 DEBUG nova.network.neutron [-] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:47:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 87 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Nov 29 07:47:59 compute-0 sshd-session[266858]: Connection closed by authenticating user root 143.14.121.41 port 50004 [preauth]
Nov 29 07:47:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:47:59 compute-0 ceph-mon[75050]: pgmap v1217: 305 pgs: 305 active+clean; 87 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Nov 29 07:47:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:59.766 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:59.766 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:47:59.766 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.985 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.986 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.987 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.987 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.987 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] No waiting events found dispatching network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.987 256736 WARNING nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received unexpected event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d for instance with vm_state active and task_state deleting.
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.988 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.988 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.988 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.988 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.988 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] No waiting events found dispatching network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.989 256736 WARNING nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received unexpected event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d for instance with vm_state active and task_state deleting.
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.989 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.989 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.989 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.989 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.990 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] No waiting events found dispatching network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.990 256736 WARNING nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received unexpected event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d for instance with vm_state active and task_state deleting.
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.990 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-unplugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.990 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.990 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.991 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.991 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] No waiting events found dispatching network-vif-unplugged-cf01fba8-1ce4-4048-8b70-76060249d02d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.991 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-unplugged-cf01fba8-1ce4-4048-8b70-76060249d02d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.991 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.991 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.992 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.992 256736 DEBUG oslo_concurrency.lockutils [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.992 256736 DEBUG nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] No waiting events found dispatching network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:47:59 compute-0 nova_compute[256729]: 2025-11-29 07:47:59.992 256736 WARNING nova.compute.manager [req-5cbc4222-dfc6-45dd-8625-319ab32d922a req-8062cd46-25e5-496e-89de-7f6c7606f470 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received unexpected event network-vif-plugged-cf01fba8-1ce4-4048-8b70-76060249d02d for instance with vm_state active and task_state deleting.
Nov 29 07:48:00 compute-0 nova_compute[256729]: 2025-11-29 07:48:00.498 256736 DEBUG nova.network.neutron [-] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:48:00 compute-0 nova_compute[256729]: 2025-11-29 07:48:00.516 256736 INFO nova.compute.manager [-] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Took 2.16 seconds to deallocate network for instance.
Nov 29 07:48:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 87 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 4.3 KiB/s wr, 25 op/s
Nov 29 07:48:00 compute-0 nova_compute[256729]: 2025-11-29 07:48:00.709 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:00 compute-0 nova_compute[256729]: 2025-11-29 07:48:00.709 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:00 compute-0 nova_compute[256729]: 2025-11-29 07:48:00.781 256736 DEBUG oslo_concurrency.processutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.047 256736 DEBUG nova.compute.manager [req-04299f85-d0f0-4677-a25b-9bfd5f20d7b6 req-aeb7770d-2f46-44cf-80db-d3975c3e3cd6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Received event network-vif-deleted-cf01fba8-1ce4-4048-8b70-76060249d02d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:48:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017029291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.207 256736 DEBUG oslo_concurrency.processutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.217 256736 DEBUG nova.compute.provider_tree [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.337 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.422 256736 DEBUG nova.scheduler.client.report [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.456 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:48:01.464 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.464 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:48:01.465 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.505 256736 INFO nova.scheduler.client.report [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Deleted allocations for instance 470f20d7-0c57-4067-a7ff-7f6b0971ad23
Nov 29 07:48:01 compute-0 nova_compute[256729]: 2025-11-29 07:48:01.584 256736 DEBUG oslo_concurrency.lockutils [None req-36b0824a-3123-4c81-ad5e-138e9cd0c409 a53522f9f2b14db5b3b2ead64c730558 ed7b8fec760c4dfeabbf878615dc25ec - - default default] Lock "470f20d7-0c57-4067-a7ff-7f6b0971ad23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:01 compute-0 ceph-mon[75050]: pgmap v1218: 305 pgs: 305 active+clean; 87 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 4.3 KiB/s wr, 25 op/s
Nov 29 07:48:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1017029291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:48:02.468 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:48:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 42 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.3 KiB/s wr, 47 op/s
Nov 29 07:48:02 compute-0 nova_compute[256729]: 2025-11-29 07:48:02.918 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:03 compute-0 sshd-session[266960]: Connection closed by authenticating user root 143.14.121.41 port 50008 [preauth]
Nov 29 07:48:04 compute-0 sudo[266986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:04 compute-0 sudo[266986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:04 compute-0 sudo[266986]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:04 compute-0 sudo[267011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:04 compute-0 sudo[267011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:04 compute-0 sudo[267011]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:04 compute-0 sudo[267036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:04 compute-0 sudo[267036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:04 compute-0 sudo[267036]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:04 compute-0 sudo[267061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:48:04 compute-0 sudo[267061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:04 compute-0 ceph-mon[75050]: pgmap v1219: 305 pgs: 305 active+clean; 42 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.3 KiB/s wr, 47 op/s
Nov 29 07:48:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 42 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.2 KiB/s wr, 46 op/s
Nov 29 07:48:05 compute-0 podman[267157]: 2025-11-29 07:48:05.06019184 +0000 UTC m=+0.204108094 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:05 compute-0 podman[267157]: 2025-11-29 07:48:05.180405777 +0000 UTC m=+0.324321941 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:05 compute-0 ceph-mon[75050]: pgmap v1220: 305 pgs: 305 active+clean; 42 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.2 KiB/s wr, 46 op/s
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:48:05
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta']
Nov 29 07:48:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:48:05 compute-0 sshd-session[266984]: Connection closed by authenticating user root 143.14.121.41 port 45786 [preauth]
Nov 29 07:48:05 compute-0 sudo[267061]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:06 compute-0 sudo[267318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:06 compute-0 sudo[267318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:06 compute-0 sudo[267318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:06 compute-0 sudo[267343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:06 compute-0 sudo[267343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:06 compute-0 sudo[267343]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:06 compute-0 sudo[267369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:06 compute-0 sudo[267369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:06 compute-0 sudo[267369]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:06 compute-0 sudo[267394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:48:06 compute-0 sudo[267394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:06 compute-0 nova_compute[256729]: 2025-11-29 07:48:06.338 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446161914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1446161914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.0 KiB/s wr, 42 op/s
Nov 29 07:48:06 compute-0 sudo[267394]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f49f4f31-d9a1-45ad-8412-cecec82de6b6 does not exist
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 629da16a-1954-4156-af18-7b054815245b does not exist
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev bcff26bb-9ebc-49fc-8ae3-d552c5c7c021 does not exist
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:48:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:48:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:06 compute-0 sudo[267450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:48:06 compute-0 sudo[267450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:48:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:48:06 compute-0 sudo[267450]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:06 compute-0 sudo[267475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:07 compute-0 sudo[267475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:07 compute-0 sudo[267475]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1446161914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1446161914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:07 compute-0 ceph-mon[75050]: pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 228 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 4.0 KiB/s wr, 42 op/s
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:48:07 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:48:07 compute-0 sudo[267500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:07 compute-0 sudo[267500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:07 compute-0 sudo[267500]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:07 compute-0 sudo[267525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:48:07 compute-0 sudo[267525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:07 compute-0 podman[267589]: 2025-11-29 07:48:07.480451744 +0000 UTC m=+0.023368158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:07 compute-0 podman[267589]: 2025-11-29 07:48:07.899644419 +0000 UTC m=+0.442560833 container create c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:48:07 compute-0 nova_compute[256729]: 2025-11-29 07:48:07.921 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:08 compute-0 systemd[1]: Started libpod-conmon-c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3.scope.
Nov 29 07:48:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:08 compute-0 podman[267589]: 2025-11-29 07:48:08.105840308 +0000 UTC m=+0.648756782 container init c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:08 compute-0 podman[267589]: 2025-11-29 07:48:08.118773861 +0000 UTC m=+0.661690245 container start c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:48:08 compute-0 mystifying_saha[267606]: 167 167
Nov 29 07:48:08 compute-0 systemd[1]: libpod-c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3.scope: Deactivated successfully.
Nov 29 07:48:08 compute-0 conmon[267606]: conmon c504badea912e67bf14f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3.scope/container/memory.events
Nov 29 07:48:08 compute-0 sshd-session[267317]: Connection closed by authenticating user root 143.14.121.41 port 45794 [preauth]
Nov 29 07:48:08 compute-0 podman[267589]: 2025-11-29 07:48:08.253476792 +0000 UTC m=+0.796393236 container attach c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:08 compute-0 podman[267589]: 2025-11-29 07:48:08.255332133 +0000 UTC m=+0.798248557 container died c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2300e4779514834c6fe69adcbd6c4430edf85978e83e16311e593b57960f6e16-merged.mount: Deactivated successfully.
Nov 29 07:48:08 compute-0 podman[267589]: 2025-11-29 07:48:08.315180154 +0000 UTC m=+0.858096548 container remove c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_saha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:48:08 compute-0 systemd[1]: libpod-conmon-c504badea912e67bf14f596958dfda4aa175c20fb4220fa0454f61becaa20fb3.scope: Deactivated successfully.
Nov 29 07:48:08 compute-0 podman[267630]: 2025-11-29 07:48:08.495327273 +0000 UTC m=+0.050335803 container create aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:48:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3190331928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3190331928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:08 compute-0 systemd[1]: Started libpod-conmon-aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9.scope.
Nov 29 07:48:08 compute-0 podman[267630]: 2025-11-29 07:48:08.469259503 +0000 UTC m=+0.024268053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.8 KiB/s wr, 62 op/s
Nov 29 07:48:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94ba815d91a5f8e11229a9da691b5697beea84d8e58e48db76d25b13a109d64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94ba815d91a5f8e11229a9da691b5697beea84d8e58e48db76d25b13a109d64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94ba815d91a5f8e11229a9da691b5697beea84d8e58e48db76d25b13a109d64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94ba815d91a5f8e11229a9da691b5697beea84d8e58e48db76d25b13a109d64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d94ba815d91a5f8e11229a9da691b5697beea84d8e58e48db76d25b13a109d64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3190331928' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3190331928' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:08 compute-0 podman[267630]: 2025-11-29 07:48:08.628872173 +0000 UTC m=+0.183880703 container init aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:48:08 compute-0 podman[267630]: 2025-11-29 07:48:08.641668382 +0000 UTC m=+0.196676902 container start aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:48:08 compute-0 podman[267630]: 2025-11-29 07:48:08.647706757 +0000 UTC m=+0.202715297 container attach aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 07:48:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3254561979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3254561979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:09 compute-0 nova_compute[256729]: 2025-11-29 07:48:09.653 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:09 compute-0 silly_chatterjee[267648]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:48:09 compute-0 silly_chatterjee[267648]: --> relative data size: 1.0
Nov 29 07:48:09 compute-0 silly_chatterjee[267648]: --> All data devices are unavailable
Nov 29 07:48:09 compute-0 systemd[1]: libpod-aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9.scope: Deactivated successfully.
Nov 29 07:48:09 compute-0 systemd[1]: libpod-aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9.scope: Consumed 1.009s CPU time.
Nov 29 07:48:09 compute-0 podman[267630]: 2025-11-29 07:48:09.705596659 +0000 UTC m=+1.260605179 container died aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:48:09 compute-0 nova_compute[256729]: 2025-11-29 07:48:09.818 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:09 compute-0 ceph-mon[75050]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.8 KiB/s wr, 62 op/s
Nov 29 07:48:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3254561979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3254561979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d94ba815d91a5f8e11229a9da691b5697beea84d8e58e48db76d25b13a109d64-merged.mount: Deactivated successfully.
Nov 29 07:48:10 compute-0 podman[267630]: 2025-11-29 07:48:10.214052577 +0000 UTC m=+1.769061137 container remove aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:48:10 compute-0 sudo[267525]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:10 compute-0 systemd[1]: libpod-conmon-aacbcd7088ea31a173ca911a190a7757f019b4027024efabb6e4b4744ab7c2f9.scope: Deactivated successfully.
Nov 29 07:48:10 compute-0 sudo[267692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:10 compute-0 sudo[267692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:10 compute-0 sudo[267692]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:10 compute-0 sudo[267717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:10 compute-0 sudo[267717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:10 compute-0 sudo[267717]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:10 compute-0 sudo[267742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:10 compute-0 sudo[267742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:10 compute-0 sudo[267742]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:10 compute-0 sudo[267767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:48:10 compute-0 sudo[267767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 29 07:48:10 compute-0 podman[267832]: 2025-11-29 07:48:10.965109816 +0000 UTC m=+0.066755620 container create e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:48:11 compute-0 systemd[1]: Started libpod-conmon-e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3.scope.
Nov 29 07:48:11 compute-0 podman[267832]: 2025-11-29 07:48:10.937631997 +0000 UTC m=+0.039277881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:11 compute-0 podman[267832]: 2025-11-29 07:48:11.048296154 +0000 UTC m=+0.149941988 container init e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:48:11 compute-0 podman[267832]: 2025-11-29 07:48:11.055257063 +0000 UTC m=+0.156902867 container start e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:48:11 compute-0 systemd[1]: libpod-e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3.scope: Deactivated successfully.
Nov 29 07:48:11 compute-0 conmon[267862]: conmon e6847b13960d01b4ddf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3.scope/container/memory.events
Nov 29 07:48:11 compute-0 elastic_albattani[267862]: 167 167
Nov 29 07:48:11 compute-0 podman[267832]: 2025-11-29 07:48:11.061414811 +0000 UTC m=+0.163060645 container attach e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:48:11 compute-0 podman[267832]: 2025-11-29 07:48:11.062406598 +0000 UTC m=+0.164052412 container died e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:48:11 compute-0 podman[267849]: 2025-11-29 07:48:11.07164306 +0000 UTC m=+0.068113107 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 07:48:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7882ba1474d5181b258403b7c97743df62f20962e7f9def4bf54b7861bec0a55-merged.mount: Deactivated successfully.
Nov 29 07:48:11 compute-0 podman[267832]: 2025-11-29 07:48:11.104122276 +0000 UTC m=+0.205768080 container remove e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:48:11 compute-0 podman[267850]: 2025-11-29 07:48:11.110973312 +0000 UTC m=+0.103442781 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:48:11 compute-0 podman[267846]: 2025-11-29 07:48:11.113239444 +0000 UTC m=+0.109777603 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:48:11 compute-0 systemd[1]: libpod-conmon-e6847b13960d01b4ddf967772488ecd50f06bd057f0ce71b06143f14610bf4a3.scope: Deactivated successfully.
Nov 29 07:48:11 compute-0 nova_compute[256729]: 2025-11-29 07:48:11.340 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:11 compute-0 podman[267933]: 2025-11-29 07:48:11.253540288 +0000 UTC m=+0.020657265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:11 compute-0 podman[267933]: 2025-11-29 07:48:11.460772846 +0000 UTC m=+0.227889843 container create 3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_easley, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:48:11 compute-0 systemd[1]: Started libpod-conmon-3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926.scope.
Nov 29 07:48:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2fd978dc8a2256088564b9354dcf05dc779f87524d3a7bc7b0a6550649b3a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2fd978dc8a2256088564b9354dcf05dc779f87524d3a7bc7b0a6550649b3a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2fd978dc8a2256088564b9354dcf05dc779f87524d3a7bc7b0a6550649b3a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2fd978dc8a2256088564b9354dcf05dc779f87524d3a7bc7b0a6550649b3a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:11 compute-0 podman[267933]: 2025-11-29 07:48:11.576603702 +0000 UTC m=+0.343720669 container init 3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:11 compute-0 podman[267933]: 2025-11-29 07:48:11.583028317 +0000 UTC m=+0.350145284 container start 3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:48:11 compute-0 podman[267933]: 2025-11-29 07:48:11.65431463 +0000 UTC m=+0.421431587 container attach 3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_easley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:11 compute-0 sshd-session[267632]: Connection closed by authenticating user root 143.14.121.41 port 45804 [preauth]
Nov 29 07:48:11 compute-0 ceph-mon[75050]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Nov 29 07:48:12 compute-0 exciting_easley[267949]: {
Nov 29 07:48:12 compute-0 exciting_easley[267949]:     "0": [
Nov 29 07:48:12 compute-0 exciting_easley[267949]:         {
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "devices": [
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "/dev/loop3"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             ],
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_name": "ceph_lv0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_size": "21470642176",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "name": "ceph_lv0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "tags": {
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cluster_name": "ceph",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.crush_device_class": "",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.encrypted": "0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osd_id": "0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.type": "block",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.vdo": "0"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             },
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "type": "block",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "vg_name": "ceph_vg0"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:         }
Nov 29 07:48:12 compute-0 exciting_easley[267949]:     ],
Nov 29 07:48:12 compute-0 exciting_easley[267949]:     "1": [
Nov 29 07:48:12 compute-0 exciting_easley[267949]:         {
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "devices": [
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "/dev/loop4"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             ],
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_name": "ceph_lv1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_size": "21470642176",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "name": "ceph_lv1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "tags": {
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cluster_name": "ceph",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.crush_device_class": "",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.encrypted": "0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osd_id": "1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.type": "block",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.vdo": "0"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             },
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "type": "block",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "vg_name": "ceph_vg1"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:         }
Nov 29 07:48:12 compute-0 exciting_easley[267949]:     ],
Nov 29 07:48:12 compute-0 exciting_easley[267949]:     "2": [
Nov 29 07:48:12 compute-0 exciting_easley[267949]:         {
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "devices": [
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "/dev/loop5"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             ],
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_name": "ceph_lv2",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_size": "21470642176",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "name": "ceph_lv2",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "tags": {
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.cluster_name": "ceph",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.crush_device_class": "",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.encrypted": "0",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osd_id": "2",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.type": "block",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:                 "ceph.vdo": "0"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             },
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "type": "block",
Nov 29 07:48:12 compute-0 exciting_easley[267949]:             "vg_name": "ceph_vg2"
Nov 29 07:48:12 compute-0 exciting_easley[267949]:         }
Nov 29 07:48:12 compute-0 exciting_easley[267949]:     ]
Nov 29 07:48:12 compute-0 exciting_easley[267949]: }
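The JSON blob that the closing brace above completes is keyed by OSD id and is consistent with `ceph-volume lvm list --format json` output: each entry carries the backing device, the LV path, and the `ceph.*` LVM tags that tie the LV to its cluster (`ceph.cluster_fsid`) and OSD (`ceph.osd_fsid`, `ceph.osd_id`). A minimal parsing sketch in Python, assuming the blob has been saved to `lvm_list.json` (a hypothetical capture file, not something present on the host):

    import json

    # lvm_list.json is a hypothetical capture of the JSON blob logged above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    # Keys are OSD ids as strings ("0", "1", "2"); each maps to a list of LVs.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"cluster_fsid={tags['ceph.cluster_fsid']}")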
Nov 29 07:48:12 compute-0 systemd[1]: libpod-3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926.scope: Deactivated successfully.
Nov 29 07:48:12 compute-0 podman[267933]: 2025-11-29 07:48:12.405080222 +0000 UTC m=+1.172197179 container died 3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.5 KiB/s wr, 47 op/s
Nov 29 07:48:12 compute-0 nova_compute[256729]: 2025-11-29 07:48:12.841 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402477.8397548, 470f20d7-0c57-4067-a7ff-7f6b0971ad23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:48:12 compute-0 nova_compute[256729]: 2025-11-29 07:48:12.843 256736 INFO nova.compute.manager [-] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] VM Stopped (Lifecycle Event)
Nov 29 07:48:12 compute-0 nova_compute[256729]: 2025-11-29 07:48:12.889 256736 DEBUG nova.compute.manager [None req-a205945d-4f57-4dda-bd8a-38523182d362 - - - - - -] [instance: 470f20d7-0c57-4067-a7ff-7f6b0971ad23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:48:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd2fd978dc8a2256088564b9354dcf05dc779f87524d3a7bc7b0a6550649b3a9-merged.mount: Deactivated successfully.
Nov 29 07:48:12 compute-0 nova_compute[256729]: 2025-11-29 07:48:12.955 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:13 compute-0 podman[267933]: 2025-11-29 07:48:13.281444087 +0000 UTC m=+2.048561074 container remove 3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:48:13 compute-0 ceph-mon[75050]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.5 KiB/s wr, 47 op/s
Nov 29 07:48:13 compute-0 systemd[1]: libpod-conmon-3414b7b349c9d896ecf24242f89bdeba97b7d0d9707e4488f6dbe47ed3062926.scope: Deactivated successfully.
Nov 29 07:48:13 compute-0 sudo[267767]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:13 compute-0 sudo[267974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:13 compute-0 sudo[267974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:13 compute-0 sudo[267974]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:13 compute-0 sudo[267999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:48:13 compute-0 sudo[267999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:13 compute-0 sudo[267999]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:13 compute-0 sudo[268024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:13 compute-0 sudo[268024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:13 compute-0 sudo[268024]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:13 compute-0 sudo[268049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:48:13 compute-0 sudo[268049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
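The sudo COMMAND two lines up shows how cephadm runs ceph-volume: a digest-named shim under /var/lib/ceph/<fsid>/ is invoked with --image and --timeout, and everything after `--` is handed to ceph-volume inside a throwaway container (the podman create/start/remove events that follow). A sketch that replays the logged invocation via subprocess; the shim path, image digest, and arguments are copied verbatim from the log line and are host-specific:

    import json
    import subprocess

    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Digest-named shim copied from the sudo COMMAND above; specific to this host.
    SHIM = (f"/var/lib/ceph/{FSID}/cephadm."
            "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(
        ["sudo", "python3", SHIM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(sorted(json.loads(out)))  # the three osd_uuid keys seen further below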
Nov 29 07:48:14 compute-0 podman[268111]: 2025-11-29 07:48:13.950125341 +0000 UTC m=+0.020051937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:14 compute-0 podman[268111]: 2025-11-29 07:48:14.143401009 +0000 UTC m=+0.213327575 container create ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_villani, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 07:48:14 compute-0 systemd[1]: Started libpod-conmon-ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7.scope.
Nov 29 07:48:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:14 compute-0 podman[268111]: 2025-11-29 07:48:14.300049678 +0000 UTC m=+0.369976285 container init ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:48:14 compute-0 podman[268111]: 2025-11-29 07:48:14.306422253 +0000 UTC m=+0.376348829 container start ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:48:14 compute-0 hopeful_villani[268128]: 167 167
Nov 29 07:48:14 compute-0 systemd[1]: libpod-ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7.scope: Deactivated successfully.
Nov 29 07:48:14 compute-0 podman[268111]: 2025-11-29 07:48:14.320826255 +0000 UTC m=+0.390752841 container attach ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:48:14 compute-0 podman[268111]: 2025-11-29 07:48:14.322410088 +0000 UTC m=+0.392336684 container died ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-af32cba93f03e959ec7413f12cee92fc35574ba9053b65c51e07548636cb0ecc-merged.mount: Deactivated successfully.
Nov 29 07:48:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 29 07:48:14 compute-0 podman[268111]: 2025-11-29 07:48:14.755011149 +0000 UTC m=+0.824937735 container remove ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:14 compute-0 systemd[1]: libpod-conmon-ca2a60648cdfb7c7d43bd195501f81d9d9f5a208006bbd6cdf576bc48523a7f7.scope: Deactivated successfully.
Nov 29 07:48:15 compute-0 podman[268152]: 2025-11-29 07:48:15.047423088 +0000 UTC m=+0.116781144 container create 3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:48:15 compute-0 podman[268152]: 2025-11-29 07:48:14.954697991 +0000 UTC m=+0.024056047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:48:15 compute-0 sshd-session[267954]: Connection closed by authenticating user root 143.14.121.41 port 44848 [preauth]
Nov 29 07:48:15 compute-0 systemd[1]: Started libpod-conmon-3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc.scope.
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:48:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
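Every "pg target X quantized to Y" line above follows from pg_target = capacity_ratio x bias x (OSD count x mon_target_pg_per_osd). With the 3 OSDs this cluster has and the Ceph default mon_target_pg_per_osd = 100, the multiplier is 300, which reproduces the logged figures exactly: 7.185749983720779e-06 x 300 = 0.0021557249951162337 for '.mgr', and the 4.0 bias on cephfs.cephfs.meta gives 5.087256625643029e-07 x 4.0 x 300 = 0.0006104707950771635. The quantized value then snaps to a power-of-two pg_num and, for targets this small, simply stays at the current value. A check in Python, with the 3 x 100 multiplier stated as an assumption consistent with the log:

    # Reproduce the pg_autoscaler targets logged above.
    # Assumption: 3 OSDs x mon_target_pg_per_osd=100 (the Ceph default),
    # i.e. the 300x multiplier implied by every logged line.
    OSDS, TARGET_PG_PER_OSD = 3, 100

    pools = {            # name: (capacity_ratio, bias) taken from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        target = ratio * bias * OSDS * TARGET_PG_PER_OSD
        print(f"{name}: pg target {target}")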
Nov 29 07:48:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886a4768f54db2ab6305b2009f22c122a584ad1e9b3043cf3a408265a10549c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886a4768f54db2ab6305b2009f22c122a584ad1e9b3043cf3a408265a10549c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886a4768f54db2ab6305b2009f22c122a584ad1e9b3043cf3a408265a10549c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886a4768f54db2ab6305b2009f22c122a584ad1e9b3043cf3a408265a10549c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
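The four kernel lines above mean the xfs filesystem backing the container overlay was created without the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff). Harmless for now, but easy to check. A sketch using xfs_info, which reports a bigtime=0/1 flag on reasonably recent xfsprogs; the mount point is an example, not taken from the log:

    import subprocess

    # Report whether an xfs filesystem supports post-2038 timestamps.
    # Point this at the relevant xfs mount; /var/lib/containers is an example.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          check=True, capture_output=True, text=True).stdout
    print("bigtime enabled:", "bigtime=1" in info)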
Nov 29 07:48:15 compute-0 podman[268152]: 2025-11-29 07:48:15.194087576 +0000 UTC m=+0.263445682 container init 3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mcclintock, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:48:15 compute-0 podman[268152]: 2025-11-29 07:48:15.20267435 +0000 UTC m=+0.272032416 container start 3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 07:48:15 compute-0 podman[268152]: 2025-11-29 07:48:15.211350337 +0000 UTC m=+0.280708393 container attach 3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:48:15 compute-0 ceph-mon[75050]: pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]: {
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "osd_id": 2,
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "type": "bluestore"
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:     },
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "osd_id": 1,
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "type": "bluestore"
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:     },
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "osd_id": 0,
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:         "type": "bluestore"
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]:     }
Nov 29 07:48:16 compute-0 magical_mcclintock[268168]: }
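Unlike the earlier lvm list, this `raw list` blob is keyed by osd_uuid and reports the device-mapper path plus the bluestore type; the two inventories should agree on osd_id and cluster fsid for every OSD. A small consistency check, assuming both blobs were captured to files (hypothetical names, as before):

    import json

    # Cross-check the two inventories logged above; lvm_list.json and
    # raw_list.json are hypothetical captures of the two JSON blobs.
    lvm = json.load(open("lvm_list.json"))
    raw = json.load(open("raw_list.json"))

    for osd_uuid, entry in raw.items():
        osd_id = str(entry["osd_id"])
        tags = lvm[osd_id][0]["tags"]
        assert tags["ceph.osd_fsid"] == osd_uuid
        assert tags["ceph.cluster_fsid"] == entry["ceph_fsid"]
        print(f"osd.{osd_id}: {entry['device']} ok")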
Nov 29 07:48:16 compute-0 systemd[1]: libpod-3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc.scope: Deactivated successfully.
Nov 29 07:48:16 compute-0 systemd[1]: libpod-3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc.scope: Consumed 1.019s CPU time.
Nov 29 07:48:16 compute-0 podman[268152]: 2025-11-29 07:48:16.217765536 +0000 UTC m=+1.287123562 container died 3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-886a4768f54db2ab6305b2009f22c122a584ad1e9b3043cf3a408265a10549c7-merged.mount: Deactivated successfully.
Nov 29 07:48:16 compute-0 podman[268152]: 2025-11-29 07:48:16.29131856 +0000 UTC m=+1.360676586 container remove 3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 07:48:16 compute-0 systemd[1]: libpod-conmon-3b298c4193c042623a2393eb6844b616c25d3c82f8cb9199944c158d9447e0cc.scope: Deactivated successfully.
Nov 29 07:48:16 compute-0 sudo[268049]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:48:16 compute-0 nova_compute[256729]: 2025-11-29 07:48:16.342 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:48:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 38d042c1-8f13-4d2d-9b29-abb4ed289cb8 does not exist
Nov 29 07:48:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b9d0d87f-34b0-4912-aa25-0b63b919aee2 does not exist
Nov 29 07:48:16 compute-0 sudo[268217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:48:16 compute-0 sudo[268217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:16 compute-0 sudo[268217]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:16 compute-0 sudo[268242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:48:16 compute-0 sudo[268242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:48:16 compute-0 sudo[268242]: pam_unix(sudo:session): session closed for user root
Nov 29 07:48:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 29 07:48:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:48:17 compute-0 ceph-mon[75050]: pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Nov 29 07:48:17 compute-0 nova_compute[256729]: 2025-11-29 07:48:17.958 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:18 compute-0 sshd-session[268173]: Connection closed by authenticating user root 143.14.121.41 port 44864 [preauth]
Nov 29 07:48:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Nov 29 07:48:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:19 compute-0 ceph-mon[75050]: pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Nov 29 07:48:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 596 B/s wr, 9 op/s
Nov 29 07:48:21 compute-0 nova_compute[256729]: 2025-11-29 07:48:21.344 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:21 compute-0 ceph-mon[75050]: pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 596 B/s wr, 9 op/s
Nov 29 07:48:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4193382646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4193382646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:22 compute-0 sshd-session[268267]: Connection closed by authenticating user root 143.14.121.41 port 44878 [preauth]
Nov 29 07:48:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 852 B/s wr, 10 op/s
Nov 29 07:48:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4193382646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4193382646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:22 compute-0 nova_compute[256729]: 2025-11-29 07:48:22.961 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:23 compute-0 ceph-mon[75050]: pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 852 B/s wr, 10 op/s
Nov 29 07:48:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 596 B/s wr, 20 op/s
Nov 29 07:48:26 compute-0 ceph-mon[75050]: pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 596 B/s wr, 20 op/s
Nov 29 07:48:26 compute-0 nova_compute[256729]: 2025-11-29 07:48:26.386 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Nov 29 07:48:27 compute-0 sshd-session[268269]: Connection closed by authenticating user root 143.14.121.41 port 59638 [preauth]
Nov 29 07:48:27 compute-0 nova_compute[256729]: 2025-11-29 07:48:27.964 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:28 compute-0 ceph-mon[75050]: pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Nov 29 07:48:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 937 B/s wr, 15 op/s
Nov 29 07:48:29 compute-0 ceph-mon[75050]: pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 937 B/s wr, 15 op/s
Nov 29 07:48:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 07:48:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1222642821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1222642821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:31 compute-0 nova_compute[256729]: 2025-11-29 07:48:31.389 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:31 compute-0 ceph-mon[75050]: pgmap v1233: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s
Nov 29 07:48:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1222642821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1222642821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:32 compute-0 sshd-session[268271]: Connection closed by authenticating user root 143.14.121.41 port 59648 [preauth]
Nov 29 07:48:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 852 B/s wr, 26 op/s
Nov 29 07:48:33 compute-0 nova_compute[256729]: 2025-11-29 07:48:33.013 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:33 compute-0 ceph-mon[75050]: pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 852 B/s wr, 26 op/s
Nov 29 07:48:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 596 B/s wr, 26 op/s
Nov 29 07:48:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:48:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:48:35 compute-0 ceph-mon[75050]: pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 596 B/s wr, 26 op/s
Nov 29 07:48:35 compute-0 sshd-session[268273]: Connection closed by authenticating user root 143.14.121.41 port 57128 [preauth]
Nov 29 07:48:36 compute-0 nova_compute[256729]: 2025-11-29 07:48:36.443 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Nov 29 07:48:38 compute-0 nova_compute[256729]: 2025-11-29 07:48:38.016 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 14 op/s
Nov 29 07:48:38 compute-0 ceph-mon[75050]: pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Nov 29 07:48:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:40 compute-0 ceph-mon[75050]: pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 14 op/s
Nov 29 07:48:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Nov 29 07:48:41 compute-0 sshd-session[268275]: Connection closed by authenticating user root 143.14.121.41 port 57144 [preauth]
Nov 29 07:48:41 compute-0 nova_compute[256729]: 2025-11-29 07:48:41.446 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:41 compute-0 ceph-mon[75050]: pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 426 B/s wr, 13 op/s
Nov 29 07:48:41 compute-0 podman[268279]: 2025-11-29 07:48:41.716549855 +0000 UTC m=+0.080155164 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 29 07:48:41 compute-0 podman[268280]: 2025-11-29 07:48:41.741642799 +0000 UTC m=+0.100274614 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:48:41 compute-0 podman[268278]: 2025-11-29 07:48:41.768896352 +0000 UTC m=+0.140397437 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
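The three health_status=healthy events above come from podman's timer-driven healthchecks executing the configured '/openstack/healthcheck' test inside each container (visible in the config_data of each event). The same check can be triggered by hand; a sketch via subprocess, with the container names taken from the log:

    import subprocess

    # Run each container's configured healthcheck once, outside the timer.
    # `podman healthcheck run` exits 0 when the check passes.
    for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")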
Nov 29 07:48:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296903406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Nov 29 07:48:43 compute-0 nova_compute[256729]: 2025-11-29 07:48:43.019 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:43 compute-0 nova_compute[256729]: 2025-11-29 07:48:43.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:43 compute-0 nova_compute[256729]: 2025-11-29 07:48:43.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:48:43 compute-0 nova_compute[256729]: 2025-11-29 07:48:43.183 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:48:43 compute-0 nova_compute[256729]: 2025-11-29 07:48:43.183 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:43 compute-0 nova_compute[256729]: 2025-11-29 07:48:43.184 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
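The nova_compute burst above is oslo.service's periodic-task runner stepping through ComputeManager's decorated methods (_run_pending_deletes, _cleanup_incomplete_migrations, and so on). The mechanism is just methods registered with the @periodic_task.periodic_task decorator on a PeriodicTasks subclass. A minimal standalone sketch; the class and print are illustrative stand-ins, not Nova's code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        """Toy stand-in for nova.compute.manager.ComputeManager."""

        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task  # default spacing: run on every tick
        def _run_pending_deletes(self, context):
            print("Cleaning up deleted instances")  # Nova logs this line

    mgr = Manager()
    mgr.run_periodic_tasks(None)  # one tick; Nova drives this from a timer loop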
Nov 29 07:48:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 29 07:48:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 07:48:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/296903406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:43 compute-0 ceph-mon[75050]: pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Nov 29 07:48:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 07:48:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 614 B/s wr, 2 op/s
Nov 29 07:48:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:44 compute-0 sshd-session[268277]: Connection closed by authenticating user root 143.14.121.41 port 57146 [preauth]
Nov 29 07:48:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 29 07:48:45 compute-0 ceph-mon[75050]: osdmap e151: 3 total, 3 up, 3 in
Nov 29 07:48:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 07:48:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 29 07:48:45 compute-0 nova_compute[256729]: 2025-11-29 07:48:45.194 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:45 compute-0 nova_compute[256729]: 2025-11-29 07:48:45.194 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:46 compute-0 nova_compute[256729]: 2025-11-29 07:48:46.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:46 compute-0 nova_compute[256729]: 2025-11-29 07:48:46.173 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:46 compute-0 ceph-mon[75050]: pgmap v1241: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 614 B/s wr, 2 op/s
Nov 29 07:48:46 compute-0 ceph-mon[75050]: osdmap e152: 3 total, 3 up, 3 in
Nov 29 07:48:46 compute-0 nova_compute[256729]: 2025-11-29 07:48:46.448 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 767 B/s wr, 5 op/s
Nov 29 07:48:47 compute-0 sshd-session[268344]: Invalid user huawei from 143.14.121.41 port 49212
Nov 29 07:48:47 compute-0 nova_compute[256729]: 2025-11-29 07:48:47.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:47 compute-0 nova_compute[256729]: 2025-11-29 07:48:47.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:47 compute-0 nova_compute[256729]: 2025-11-29 07:48:47.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:48:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 29 07:48:47 compute-0 sshd-session[268344]: Connection closed by invalid user huawei 143.14.121.41 port 49212 [preauth]
Nov 29 07:48:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 07:48:47 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 29 07:48:47 compute-0 ceph-mon[75050]: pgmap v1243: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 767 B/s wr, 5 op/s
Nov 29 07:48:48 compute-0 nova_compute[256729]: 2025-11-29 07:48:48.022 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:48 compute-0 nova_compute[256729]: 2025-11-29 07:48:48.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 KiB/s wr, 21 op/s
Nov 29 07:48:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 29 07:48:48 compute-0 ceph-mon[75050]: osdmap e153: 3 total, 3 up, 3 in
Nov 29 07:48:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 07:48:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 29 07:48:48 compute-0 ovn_controller[153383]: 2025-11-29T07:48:48Z|00044|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 07:48:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 29 07:48:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 07:48:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 29 07:48:49 compute-0 ceph-mon[75050]: pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 KiB/s wr, 21 op/s
Nov 29 07:48:49 compute-0 ceph-mon[75050]: osdmap e154: 3 total, 3 up, 3 in
Nov 29 07:48:49 compute-0 ceph-mon[75050]: osdmap e155: 3 total, 3 up, 3 in
Nov 29 07:48:50 compute-0 nova_compute[256729]: 2025-11-29 07:48:50.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:50 compute-0 nova_compute[256729]: 2025-11-29 07:48:50.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:48:50 compute-0 nova_compute[256729]: 2025-11-29 07:48:50.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:48:50 compute-0 sshd-session[268346]: Invalid user sysadmin from 143.14.121.41 port 49214
Nov 29 07:48:50 compute-0 nova_compute[256729]: 2025-11-29 07:48:50.235 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:48:50 compute-0 nova_compute[256729]: 2025-11-29 07:48:50.236 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.8 KiB/s wr, 21 op/s
Nov 29 07:48:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 29 07:48:50 compute-0 sshd-session[268346]: Connection closed by invalid user sysadmin 143.14.121.41 port 49214 [preauth]
Nov 29 07:48:51 compute-0 nova_compute[256729]: 2025-11-29 07:48:51.160 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 07:48:51 compute-0 nova_compute[256729]: 2025-11-29 07:48:51.205 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:51 compute-0 nova_compute[256729]: 2025-11-29 07:48:51.205 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:51 compute-0 nova_compute[256729]: 2025-11-29 07:48:51.205 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:51 compute-0 nova_compute[256729]: 2025-11-29 07:48:51.205 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:48:51 compute-0 nova_compute[256729]: 2025-11-29 07:48:51.206 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:51 compute-0 nova_compute[256729]: 2025-11-29 07:48:51.449 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 29 07:48:52 compute-0 ceph-mon[75050]: pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.8 KiB/s wr, 21 op/s
Nov 29 07:48:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:52 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680449855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:52 compute-0 nova_compute[256729]: 2025-11-29 07:48:52.375 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:52 compute-0 nova_compute[256729]: 2025-11-29 07:48:52.548 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:48:52 compute-0 nova_compute[256729]: 2025-11-29 07:48:52.550 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4768MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:48:52 compute-0 nova_compute[256729]: 2025-11-29 07:48:52.550 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:52 compute-0 nova_compute[256729]: 2025-11-29 07:48:52.550 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 42 op/s
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.000 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.000 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.024 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.109 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:48:53 compute-0 ceph-mon[75050]: osdmap e156: 3 total, 3 up, 3 in
Nov 29 07:48:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/680449855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:53 compute-0 ceph-mon[75050]: pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 42 op/s
Nov 29 07:48:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:48:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040519230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.517 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.525 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.543 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.592 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:48:53 compute-0 nova_compute[256729]: 2025-11-29 07:48:53.592 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:48:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/218967165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260760958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260760958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3040519230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:48:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/218967165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:48:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4260760958' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4260760958' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:54 compute-0 nova_compute[256729]: 2025-11-29 07:48:54.581 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:48:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 56 op/s
Nov 29 07:48:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:54 compute-0 sshd-session[268348]: Connection closed by authenticating user root 143.14.121.41 port 49222 [preauth]
Nov 29 07:48:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 29 07:48:55 compute-0 ceph-mon[75050]: pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.0 KiB/s wr, 56 op/s
Nov 29 07:48:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 07:48:55 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 29 07:48:56 compute-0 ceph-mon[75050]: osdmap e157: 3 total, 3 up, 3 in
Nov 29 07:48:56 compute-0 nova_compute[256729]: 2025-11-29 07:48:56.452 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Nov 29 07:48:57 compute-0 ceph-mon[75050]: pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Nov 29 07:48:58 compute-0 nova_compute[256729]: 2025-11-29 07:48:58.026 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:48:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 29 07:48:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 07:48:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 29 07:48:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.8 KiB/s wr, 84 op/s
Nov 29 07:48:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:48:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960506351' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:48:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1960506351' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:59 compute-0 sshd-session[268395]: Connection closed by authenticating user root 143.14.121.41 port 34904 [preauth]
Nov 29 07:48:59 compute-0 ceph-mon[75050]: osdmap e158: 3 total, 3 up, 3 in
Nov 29 07:48:59 compute-0 ceph-mon[75050]: pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.8 KiB/s wr, 84 op/s
Nov 29 07:48:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1960506351' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:48:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1960506351' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.289432) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402539289517, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1177, "num_deletes": 255, "total_data_size": 1643803, "memory_usage": 1671024, "flush_reason": "Manual Compaction"}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402539479848, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1614965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20441, "largest_seqno": 21617, "table_properties": {"data_size": 1609199, "index_size": 3097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13014, "raw_average_key_size": 20, "raw_value_size": 1597412, "raw_average_value_size": 2527, "num_data_blocks": 139, "num_entries": 632, "num_filter_entries": 632, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402449, "oldest_key_time": 1764402449, "file_creation_time": 1764402539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 190723 microseconds, and 8722 cpu microseconds.
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:48:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:48:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.480162) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1614965 bytes OK
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.480186) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.665840) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.665890) EVENT_LOG_v1 {"time_micros": 1764402539665877, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.665922) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1638225, prev total WAL file size 1645660, number of live WAL files 2.
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.694002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1577KB)], [47(8651KB)]
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402539694065, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 10474386, "oldest_snapshot_seqno": -1}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 07:48:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 29 07:48:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:48:59.767 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:48:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:48:59.768 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:48:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:48:59.768 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4764 keys, 8643102 bytes, temperature: kUnknown
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402539827949, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8643102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8608838, "index_size": 21226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119114, "raw_average_key_size": 25, "raw_value_size": 8520223, "raw_average_value_size": 1788, "num_data_blocks": 883, "num_entries": 4764, "num_filter_entries": 4764, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.828242) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8643102 bytes
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.831115) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.2 rd, 64.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.4 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 5289, records dropped: 525 output_compression: NoCompression
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.831174) EVENT_LOG_v1 {"time_micros": 1764402539831148, "job": 24, "event": "compaction_finished", "compaction_time_micros": 134002, "compaction_time_cpu_micros": 49016, "output_level": 6, "num_output_files": 1, "total_output_size": 8643102, "num_input_records": 5289, "num_output_records": 4764, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402539831855, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402539834864, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.693849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.834931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.834939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.834942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.834945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:48:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:48:59.834949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:49:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1.8 KiB/s wr, 62 op/s
Nov 29 07:49:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 29 07:49:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 07:49:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 29 07:49:01 compute-0 ceph-mon[75050]: osdmap e159: 3 total, 3 up, 3 in
Nov 29 07:49:01 compute-0 nova_compute[256729]: 2025-11-29 07:49:01.491 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:02.033 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:49:02 compute-0 nova_compute[256729]: 2025-11-29 07:49:02.033 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:02.034 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:49:02 compute-0 ceph-mon[75050]: pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1.8 KiB/s wr, 62 op/s
Nov 29 07:49:02 compute-0 ceph-mon[75050]: osdmap e160: 3 total, 3 up, 3 in
Nov 29 07:49:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 2.8 KiB/s wr, 84 op/s
Nov 29 07:49:02 compute-0 sshd-session[268397]: Connection closed by authenticating user root 143.14.121.41 port 34920 [preauth]
Nov 29 07:49:03 compute-0 nova_compute[256729]: 2025-11-29 07:49:03.029 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:03 compute-0 ceph-mon[75050]: pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 2.8 KiB/s wr, 84 op/s
Nov 29 07:49:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.3 KiB/s wr, 49 op/s
Nov 29 07:49:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:05 compute-0 ceph-mon[75050]: pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.3 KiB/s wr, 49 op/s
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:49:05
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr']
Nov 29 07:49:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:49:06 compute-0 nova_compute[256729]: 2025-11-29 07:49:06.535 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1023 B/s wr, 44 op/s
Nov 29 07:49:06 compute-0 sshd-session[268399]: Connection closed by authenticating user root 143.14.121.41 port 53392 [preauth]
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:49:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:49:08 compute-0 nova_compute[256729]: 2025-11-29 07:49:08.032 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:08 compute-0 ceph-mon[75050]: pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1023 B/s wr, 44 op/s
Nov 29 07:49:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.2 KiB/s wr, 53 op/s
Nov 29 07:49:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1844730806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1844730806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 29 07:49:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 07:49:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 29 07:49:09 compute-0 ceph-mon[75050]: pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.2 KiB/s wr, 53 op/s
Nov 29 07:49:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1844730806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1844730806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:09 compute-0 ceph-mon[75050]: osdmap e161: 3 total, 3 up, 3 in
Nov 29 07:49:09 compute-0 sshd-session[268401]: Connection closed by authenticating user root 143.14.121.41 port 53406 [preauth]
Nov 29 07:49:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:10.036 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 KiB/s wr, 48 op/s
Nov 29 07:49:10 compute-0 nova_compute[256729]: 2025-11-29 07:49:10.772 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:11 compute-0 nova_compute[256729]: 2025-11-29 07:49:11.537 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 818 B/s wr, 34 op/s
Nov 29 07:49:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 29 07:49:12 compute-0 podman[268406]: 2025-11-29 07:49:12.730947474 +0000 UTC m=+0.094008323 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:49:12 compute-0 podman[268407]: 2025-11-29 07:49:12.750424225 +0000 UTC m=+0.112226090 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 07:49:12 compute-0 podman[268405]: 2025-11-29 07:49:12.771591002 +0000 UTC m=+0.135776482 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:49:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 07:49:12 compute-0 ceph-mon[75050]: pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.1 KiB/s wr, 48 op/s
Nov 29 07:49:13 compute-0 nova_compute[256729]: 2025-11-29 07:49:13.034 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 29 07:49:14 compute-0 sshd-session[268403]: Connection closed by authenticating user root 143.14.121.41 port 53410 [preauth]
Nov 29 07:49:14 compute-0 ceph-mon[75050]: pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 818 B/s wr, 34 op/s
Nov 29 07:49:14 compute-0 ceph-mon[75050]: osdmap e162: 3 total, 3 up, 3 in
Nov 29 07:49:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:49:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:49:15 compute-0 ceph-mon[75050]: pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Nov 29 07:49:16 compute-0 nova_compute[256729]: 2025-11-29 07:49:16.540 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:16 compute-0 sudo[268474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 13 op/s
Nov 29 07:49:16 compute-0 sudo[268474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:16 compute-0 sudo[268474]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:16 compute-0 sudo[268499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:16 compute-0 sudo[268499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:16 compute-0 sudo[268499]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:16 compute-0 sudo[268524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:16 compute-0 sudo[268524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:16 compute-0 sudo[268524]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:16 compute-0 sudo[268549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:49:16 compute-0 sudo[268549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:17 compute-0 sshd-session[268470]: Connection closed by authenticating user root 143.14.121.41 port 58796 [preauth]
Nov 29 07:49:17 compute-0 sudo[268549]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:49:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:49:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:49:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:49:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 29 07:49:18 compute-0 nova_compute[256729]: 2025-11-29 07:49:18.036 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:49:18 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev cae421d9-c60b-4285-ab03-25a37a9e5adf does not exist
Nov 29 07:49:18 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a01afc9c-df44-4e7c-9cad-ce980c224a09 does not exist
Nov 29 07:49:18 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b07c1051-b73e-4cdc-8194-8c032db3bafb does not exist
Nov 29 07:49:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:49:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:49:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:49:18 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:49:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:49:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 994 B/s wr, 22 op/s
Nov 29 07:49:18 compute-0 sudo[268608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:18 compute-0 sudo[268608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:18 compute-0 sudo[268608]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:18 compute-0 sudo[268633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:18 compute-0 sudo[268633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:18 compute-0 sudo[268633]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:18 compute-0 sudo[268658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:18 compute-0 sudo[268658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:18 compute-0 sudo[268658]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:18 compute-0 sudo[268683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:49:18 compute-0 sudo[268683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 07:49:19 compute-0 ceph-mon[75050]: pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 13 op/s
Nov 29 07:49:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:49:19 compute-0 podman[268750]: 2025-11-29 07:49:19.296798755 +0000 UTC m=+0.035862089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 639 B/s wr, 23 op/s
Nov 29 07:49:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 29 07:49:20 compute-0 podman[268750]: 2025-11-29 07:49:20.906360393 +0000 UTC m=+1.645423667 container create 40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:49:21 compute-0 nova_compute[256729]: 2025-11-29 07:49:21.546 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:21 compute-0 sshd-session[268606]: Connection closed by authenticating user root 143.14.121.41 port 58804 [preauth]
Nov 29 07:49:21 compute-0 systemd[1]: Started libpod-conmon-40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d.scope.
Nov 29 07:49:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 738 B/s wr, 25 op/s
Nov 29 07:49:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:22 compute-0 podman[268750]: 2025-11-29 07:49:22.802149371 +0000 UTC m=+3.541212725 container init 40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:49:22 compute-0 podman[268750]: 2025-11-29 07:49:22.818164328 +0000 UTC m=+3.557227612 container start 40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:49:22 compute-0 brave_varahamihira[268766]: 167 167
Nov 29 07:49:22 compute-0 systemd[1]: libpod-40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d.scope: Deactivated successfully.
Nov 29 07:49:23 compute-0 nova_compute[256729]: 2025-11-29 07:49:23.039 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:49:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:49:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:49:23 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:49:23 compute-0 ceph-mon[75050]: pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 994 B/s wr, 22 op/s
Nov 29 07:49:23 compute-0 podman[268750]: 2025-11-29 07:49:23.31686116 +0000 UTC m=+4.055924454 container attach 40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:49:23 compute-0 podman[268750]: 2025-11-29 07:49:23.319114561 +0000 UTC m=+4.058177835 container died 40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc7ee19bbbbe08de06de6cb2586043877e6b9a1c5d096a681e5b36af03a440f2-merged.mount: Deactivated successfully.
Nov 29 07:49:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 511 B/s wr, 24 op/s
Nov 29 07:49:24 compute-0 ceph-mon[75050]: pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 639 B/s wr, 23 op/s
Nov 29 07:49:24 compute-0 ceph-mon[75050]: osdmap e163: 3 total, 3 up, 3 in
Nov 29 07:49:24 compute-0 ceph-mon[75050]: pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 738 B/s wr, 25 op/s
Nov 29 07:49:24 compute-0 podman[268750]: 2025-11-29 07:49:24.695859614 +0000 UTC m=+5.434922868 container remove 40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:49:24 compute-0 systemd[1]: libpod-conmon-40857ee5f1873dacf0b244cfbdf0a4badf7c38e1281ea1e40ef2d5c5579a7c5d.scope: Deactivated successfully.
Nov 29 07:49:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1161966079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1161966079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:24 compute-0 podman[268792]: 2025-11-29 07:49:24.914086611 +0000 UTC m=+0.096330056 container create e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:49:24 compute-0 podman[268792]: 2025-11-29 07:49:24.845688378 +0000 UTC m=+0.027931853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:24 compute-0 systemd[1]: Started libpod-conmon-e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd.scope.
Nov 29 07:49:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a2dd20b35bdfdcb4921e0bd858e6500691f2ca194b5ac430200cb669ae9100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a2dd20b35bdfdcb4921e0bd858e6500691f2ca194b5ac430200cb669ae9100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a2dd20b35bdfdcb4921e0bd858e6500691f2ca194b5ac430200cb669ae9100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a2dd20b35bdfdcb4921e0bd858e6500691f2ca194b5ac430200cb669ae9100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a2dd20b35bdfdcb4921e0bd858e6500691f2ca194b5ac430200cb669ae9100/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/382026975' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/382026975' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:25 compute-0 sshd-session[268769]: Connection closed by authenticating user root 143.14.121.41 port 41114 [preauth]
Nov 29 07:49:25 compute-0 podman[268792]: 2025-11-29 07:49:25.431497104 +0000 UTC m=+0.613740659 container init e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 07:49:25 compute-0 podman[268792]: 2025-11-29 07:49:25.44346891 +0000 UTC m=+0.625712395 container start e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:25 compute-0 podman[268792]: 2025-11-29 07:49:25.518017042 +0000 UTC m=+0.700260597 container attach e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:49:26 compute-0 nova_compute[256729]: 2025-11-29 07:49:26.588 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 511 B/s wr, 17 op/s
Nov 29 07:49:26 compute-0 zealous_mccarthy[268809]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:49:26 compute-0 zealous_mccarthy[268809]: --> relative data size: 1.0
Nov 29 07:49:26 compute-0 zealous_mccarthy[268809]: --> All data devices are unavailable
Nov 29 07:49:26 compute-0 systemd[1]: libpod-e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd.scope: Deactivated successfully.
Nov 29 07:49:26 compute-0 systemd[1]: libpod-e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd.scope: Consumed 1.180s CPU time.
Nov 29 07:49:26 compute-0 podman[268792]: 2025-11-29 07:49:26.703524743 +0000 UTC m=+1.885768248 container died e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:49:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:27 compute-0 ceph-mon[75050]: pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 511 B/s wr, 24 op/s
Nov 29 07:49:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1161966079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1161966079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/382026975' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/382026975' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:28 compute-0 sshd-session[268814]: Connection closed by authenticating user root 143.14.121.41 port 41130 [preauth]
Nov 29 07:49:28 compute-0 nova_compute[256729]: 2025-11-29 07:49:28.042 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Nov 29 07:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a2dd20b35bdfdcb4921e0bd858e6500691f2ca194b5ac430200cb669ae9100-merged.mount: Deactivated successfully.
Nov 29 07:49:29 compute-0 ceph-mon[75050]: pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 511 B/s wr, 17 op/s
Nov 29 07:49:29 compute-0 ceph-mon[75050]: pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Nov 29 07:49:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 446 B/s wr, 12 op/s
Nov 29 07:49:31 compute-0 sshd-session[268852]: Connection closed by authenticating user root 143.14.121.41 port 41134 [preauth]
Nov 29 07:49:31 compute-0 nova_compute[256729]: 2025-11-29 07:49:31.592 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 511 B/s wr, 12 op/s
Nov 29 07:49:33 compute-0 nova_compute[256729]: 2025-11-29 07:49:33.045 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 511 B/s wr, 7 op/s
Nov 29 07:49:34 compute-0 sshd-session[268855]: Connection closed by authenticating user root 143.14.121.41 port 52384 [preauth]
Nov 29 07:49:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:49:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.367 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.368 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.385 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.467 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.468 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.481 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.482 256736 INFO nova.compute.claims [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.593 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 597 B/s wr, 7 op/s
Nov 29 07:49:36 compute-0 nova_compute[256729]: 2025-11-29 07:49:36.771 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:49:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1325278089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156839381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.265 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.272 256736 DEBUG nova.compute.provider_tree [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.292 256736 DEBUG nova.scheduler.client.report [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.348 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.349 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.403 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.403 256736 DEBUG nova.network.neutron [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.668 256736 INFO nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.700 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.788 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.790 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.790 256736 INFO nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Creating image(s)
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.818 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.845 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.872 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.878 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.909 256736 DEBUG nova.policy [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96ced15eddb64f1eaf8ec309ebc98411', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f8e71647ebd44d9095d7adf146571b99', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.989 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.991 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.992 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:37 compute-0 nova_compute[256729]: 2025-11-29 07:49:37.993 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:38 compute-0 nova_compute[256729]: 2025-11-29 07:49:38.031 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:38 compute-0 nova_compute[256729]: 2025-11-29 07:49:38.036 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:38 compute-0 nova_compute[256729]: 2025-11-29 07:49:38.068 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 597 B/s wr, 8 op/s
Nov 29 07:49:38 compute-0 podman[268792]: 2025-11-29 07:49:38.719354581 +0000 UTC m=+13.901598066 container remove e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mccarthy, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:49:38 compute-0 sudo[268683]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:38 compute-0 systemd[1]: libpod-conmon-e5f163df8dcf12a8fe99aadeac6e46c4d4925efab957d62f2c172f80bdde37dd.scope: Deactivated successfully.
Nov 29 07:49:38 compute-0 ceph-mon[75050]: pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 446 B/s wr, 12 op/s
Nov 29 07:49:38 compute-0 ceph-mon[75050]: pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 511 B/s wr, 12 op/s
Nov 29 07:49:38 compute-0 ceph-mon[75050]: pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 511 B/s wr, 7 op/s
Nov 29 07:49:38 compute-0 sudo[268972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:38 compute-0 sudo[268972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:38 compute-0 sudo[268972]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:38 compute-0 sudo[268998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:38 compute-0 sudo[268998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:38 compute-0 sudo[268998]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:38 compute-0 sshd-session[268857]: Connection closed by authenticating user root 143.14.121.41 port 52386 [preauth]
Nov 29 07:49:39 compute-0 sudo[269023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:39 compute-0 sudo[269023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:39 compute-0 sudo[269023]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.040 256736 DEBUG nova.network.neutron [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Successfully created port: f2578e55-dacb-433e-8662-059abbd23682 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:49:39 compute-0 sudo[269051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:49:39 compute-0 sudo[269051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:39 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 07:49:39 compute-0 podman[269116]: 2025-11-29 07:49:39.559448567 +0000 UTC m=+0.115656533 container create c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:49:39 compute-0 podman[269116]: 2025-11-29 07:49:39.48472904 +0000 UTC m=+0.040937026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.610 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:39 compute-0 systemd[1]: Started libpod-conmon-c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53.scope.
Nov 29 07:49:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:39 compute-0 podman[269116]: 2025-11-29 07:49:39.675271764 +0000 UTC m=+0.231479730 container init c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 07:49:39 compute-0 podman[269116]: 2025-11-29 07:49:39.683625201 +0000 UTC m=+0.239833177 container start c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:49:39 compute-0 dazzling_mclaren[269134]: 167 167
Nov 29 07:49:39 compute-0 systemd[1]: libpod-c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53.scope: Deactivated successfully.
Nov 29 07:49:39 compute-0 conmon[269134]: conmon c7099a4229a01084e020 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53.scope/container/memory.events
Nov 29 07:49:39 compute-0 podman[269116]: 2025-11-29 07:49:39.694359884 +0000 UTC m=+0.250567890 container attach c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:49:39 compute-0 podman[269116]: 2025-11-29 07:49:39.695476925 +0000 UTC m=+0.251684891 container died c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.701 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] resizing rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a5626312f32d33a51413c459fce23d3e3db9343937ea7d1d50dcc1e35ec095d-merged.mount: Deactivated successfully.
Nov 29 07:49:39 compute-0 podman[269116]: 2025-11-29 07:49:39.774453677 +0000 UTC m=+0.330661643 container remove c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mclaren, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:49:39 compute-0 systemd[1]: libpod-conmon-c7099a4229a01084e020da1f8db4e038f20fd36ec77af71a5c7d54605075af53.scope: Deactivated successfully.
Nov 29 07:49:39 compute-0 ceph-mon[75050]: pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 597 B/s wr, 7 op/s
Nov 29 07:49:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1325278089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2156839381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:39 compute-0 ceph-mon[75050]: pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 597 B/s wr, 8 op/s
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.842 256736 DEBUG nova.objects.instance [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lazy-loading 'migration_context' on Instance uuid ddc42af7-1541-4037-acd0-cdeb260a8cc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.858 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.858 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Ensure instance console log exists: /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.859 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.859 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:39 compute-0 nova_compute[256729]: 2025-11-29 07:49:39.859 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:39 compute-0 podman[269228]: 2025-11-29 07:49:39.948369367 +0000 UTC m=+0.043870407 container create b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:49:39 compute-0 systemd[1]: Started libpod-conmon-b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63.scope.
Nov 29 07:49:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2825cdefc197973930ebe327b856b3d08bc39615149e0dbf7c3eedeb24fc9761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2825cdefc197973930ebe327b856b3d08bc39615149e0dbf7c3eedeb24fc9761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2825cdefc197973930ebe327b856b3d08bc39615149e0dbf7c3eedeb24fc9761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2825cdefc197973930ebe327b856b3d08bc39615149e0dbf7c3eedeb24fc9761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:40 compute-0 podman[269228]: 2025-11-29 07:49:39.93048294 +0000 UTC m=+0.025984010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:40 compute-0 podman[269228]: 2025-11-29 07:49:40.037388884 +0000 UTC m=+0.132889924 container init b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:49:40 compute-0 podman[269228]: 2025-11-29 07:49:40.043735066 +0000 UTC m=+0.139236116 container start b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_sinoussi, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:40 compute-0 podman[269228]: 2025-11-29 07:49:40.047800057 +0000 UTC m=+0.143301097 container attach b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.222 256736 DEBUG nova.network.neutron [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Successfully updated port: f2578e55-dacb-433e-8662-059abbd23682 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.248 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.248 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquired lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.248 256736 DEBUG nova.network.neutron [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:49:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 29 07:49:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 07:49:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.343 256736 DEBUG nova.compute.manager [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received event network-changed-f2578e55-dacb-433e-8662-059abbd23682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.343 256736 DEBUG nova.compute.manager [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Refreshing instance network info cache due to event network-changed-f2578e55-dacb-433e-8662-059abbd23682. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.343 256736 DEBUG oslo_concurrency.lockutils [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:49:40 compute-0 nova_compute[256729]: 2025-11-29 07:49:40.439 256736 DEBUG nova.network.neutron [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:49:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 409 B/s wr, 3 op/s
Nov 29 07:49:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570546490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570546490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]: {
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:     "0": [
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:         {
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "devices": [
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "/dev/loop3"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             ],
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_name": "ceph_lv0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_size": "21470642176",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "name": "ceph_lv0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "tags": {
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cluster_name": "ceph",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.crush_device_class": "",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.encrypted": "0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osd_id": "0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.type": "block",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.vdo": "0"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             },
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "type": "block",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "vg_name": "ceph_vg0"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:         }
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:     ],
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:     "1": [
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:         {
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "devices": [
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "/dev/loop4"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             ],
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_name": "ceph_lv1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_size": "21470642176",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "name": "ceph_lv1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "tags": {
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cluster_name": "ceph",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.crush_device_class": "",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.encrypted": "0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osd_id": "1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.type": "block",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.vdo": "0"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             },
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "type": "block",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "vg_name": "ceph_vg1"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:         }
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:     ],
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:     "2": [
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:         {
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "devices": [
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "/dev/loop5"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             ],
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_name": "ceph_lv2",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_size": "21470642176",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "name": "ceph_lv2",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "tags": {
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.cluster_name": "ceph",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.crush_device_class": "",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.encrypted": "0",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osd_id": "2",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.type": "block",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:                 "ceph.vdo": "0"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             },
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "type": "block",
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:             "vg_name": "ceph_vg2"
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:         }
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]:     ]
Nov 29 07:49:40 compute-0 interesting_sinoussi[269244]: }
Nov 29 07:49:40 compute-0 systemd[1]: libpod-b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63.scope: Deactivated successfully.
Nov 29 07:49:40 compute-0 podman[269228]: 2025-11-29 07:49:40.851472911 +0000 UTC m=+0.946973971 container died b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2825cdefc197973930ebe327b856b3d08bc39615149e0dbf7c3eedeb24fc9761-merged.mount: Deactivated successfully.
Nov 29 07:49:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.403 256736 DEBUG nova.network.neutron [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Updating instance_info_cache with network_info: [{"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:41 compute-0 podman[269228]: 2025-11-29 07:49:41.408346658 +0000 UTC m=+1.503847708 container remove b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:49:41 compute-0 systemd[1]: libpod-conmon-b79958bfb33ee8f02624a5fd8dd40167b12a6e07b68970736d3ec1f3f3f11e63.scope: Deactivated successfully.
Nov 29 07:49:41 compute-0 sudo[269051]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 07:49:41 compute-0 sudo[269267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.517 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Releasing lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.518 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Instance network_info: |[{"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.518 256736 DEBUG oslo_concurrency.lockutils [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.518 256736 DEBUG nova.network.neutron [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Refreshing network info cache for port f2578e55-dacb-433e-8662-059abbd23682 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:49:41 compute-0 sudo[269267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.521 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Start _get_guest_xml network_info=[{"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:49:41 compute-0 sudo[269267]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.529 256736 WARNING nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.536 256736 DEBUG nova.virt.libvirt.host [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.536 256736 DEBUG nova.virt.libvirt.host [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.540 256736 DEBUG nova.virt.libvirt.host [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.541 256736 DEBUG nova.virt.libvirt.host [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.541 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.541 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.542 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.542 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.542 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.542 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.542 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.542 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.543 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.543 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.543 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.543 256736 DEBUG nova.virt.hardware [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.546 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:41 compute-0 nova_compute[256729]: 2025-11-29 07:49:41.597 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:41 compute-0 sudo[269292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:49:41 compute-0 sudo[269292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:41 compute-0 sudo[269292]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:41 compute-0 sudo[269318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:41 compute-0 sudo[269318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:41 compute-0 sudo[269318]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:41 compute-0 sudo[269353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:49:41 compute-0 sudo[269353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:41 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 29 07:49:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:41 compute-0 ceph-mon[75050]: osdmap e164: 3 total, 3 up, 3 in
Nov 29 07:49:41 compute-0 ceph-mon[75050]: pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 409 B/s wr, 3 op/s
Nov 29 07:49:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3570546490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3570546490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:42 compute-0 podman[269426]: 2025-11-29 07:49:42.199684836 +0000 UTC m=+0.047897927 container create 5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dirac, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:49:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:49:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651434923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:42 compute-0 systemd[1]: Started libpod-conmon-5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea.scope.
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.259 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.714s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:42 compute-0 podman[269426]: 2025-11-29 07:49:42.181914502 +0000 UTC m=+0.030127613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:42 compute-0 podman[269426]: 2025-11-29 07:49:42.277779994 +0000 UTC m=+0.125993105 container init 5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.287 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:42 compute-0 podman[269426]: 2025-11-29 07:49:42.289167165 +0000 UTC m=+0.137380256 container start 5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.291 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:42 compute-0 podman[269426]: 2025-11-29 07:49:42.292913677 +0000 UTC m=+0.141126768 container attach 5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:49:42 compute-0 pedantic_dirac[269443]: 167 167
Nov 29 07:49:42 compute-0 systemd[1]: libpod-5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea.scope: Deactivated successfully.
Nov 29 07:49:42 compute-0 podman[269426]: 2025-11-29 07:49:42.297241674 +0000 UTC m=+0.145454765 container died 5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dirac, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-63271931524e0c48ff507aeec802e1a811cc5ae6074e193a52e70499ae58c174-merged.mount: Deactivated successfully.
Nov 29 07:49:42 compute-0 podman[269426]: 2025-11-29 07:49:42.343583248 +0000 UTC m=+0.191796349 container remove 5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dirac, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:49:42 compute-0 systemd[1]: libpod-conmon-5176bb761dd2ad5abadc7d51ae425dcb2ac9202db6ff061c08e38c3e6ae7e4ea.scope: Deactivated successfully.
Nov 29 07:49:42 compute-0 podman[269506]: 2025-11-29 07:49:42.490731788 +0000 UTC m=+0.022212115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:49:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 80 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.3 MiB/s wr, 20 op/s
Nov 29 07:49:42 compute-0 sshd-session[269083]: Connection closed by authenticating user root 143.14.121.41 port 52402 [preauth]
Nov 29 07:49:42 compute-0 podman[269506]: 2025-11-29 07:49:42.619532008 +0000 UTC m=+0.151012305 container create f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 07:49:42 compute-0 systemd[1]: Started libpod-conmon-f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c.scope.
Nov 29 07:49:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7a1e421bdc70d9ba7e4089e0d1c2d3a8bceda67e19cc93a854a518945445e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7a1e421bdc70d9ba7e4089e0d1c2d3a8bceda67e19cc93a854a518945445e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7a1e421bdc70d9ba7e4089e0d1c2d3a8bceda67e19cc93a854a518945445e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a7a1e421bdc70d9ba7e4089e0d1c2d3a8bceda67e19cc93a854a518945445e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:49:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148250904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:42 compute-0 podman[269506]: 2025-11-29 07:49:42.74688084 +0000 UTC m=+0.278361157 container init f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.753 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.755 256736 DEBUG nova.virt.libvirt.vif [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1036557567',display_name='tempest-VolumesActionsTest-instance-1036557567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1036557567',id=2,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8e71647ebd44d9095d7adf146571b99',ramdisk_id='',reservation_id='r-wgcrr5lk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1797033739',owner_user_name='tempest-VolumesActionsTest-1797033739-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:37Z,user_data=None,user_id='96ced15eddb64f1eaf8ec309ebc98411',uuid=ddc42af7-1541-4037-acd0-cdeb260a8cc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.755 256736 DEBUG nova.network.os_vif_util [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Converting VIF {"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:49:42 compute-0 podman[269506]: 2025-11-29 07:49:42.756280196 +0000 UTC m=+0.287760493 container start f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.756 256736 DEBUG nova.network.os_vif_util [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:c2:51,bridge_name='br-int',has_traffic_filtering=True,id=f2578e55-dacb-433e-8662-059abbd23682,network=Network(cd2f5137-c41d-4aad-8f2e-12cb0d494722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2578e55-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.758 256736 DEBUG nova.objects.instance [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddc42af7-1541-4037-acd0-cdeb260a8cc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:42 compute-0 podman[269506]: 2025-11-29 07:49:42.77659824 +0000 UTC m=+0.308078637 container attach f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.778 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <uuid>ddc42af7-1541-4037-acd0-cdeb260a8cc8</uuid>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <name>instance-00000002</name>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesActionsTest-instance-1036557567</nova:name>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:49:41</nova:creationTime>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:user uuid="96ced15eddb64f1eaf8ec309ebc98411">tempest-VolumesActionsTest-1797033739-project-member</nova:user>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:project uuid="f8e71647ebd44d9095d7adf146571b99">tempest-VolumesActionsTest-1797033739</nova:project>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <nova:port uuid="f2578e55-dacb-433e-8662-059abbd23682">
Nov 29 07:49:42 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <system>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <entry name="serial">ddc42af7-1541-4037-acd0-cdeb260a8cc8</entry>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <entry name="uuid">ddc42af7-1541-4037-acd0-cdeb260a8cc8</entry>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </system>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <os>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   </os>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <features>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   </features>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk">
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       </source>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk.config">
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       </source>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:49:42 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:27:c2:51"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <target dev="tapf2578e55-da"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/console.log" append="off"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <video>
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </video>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:49:42 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:49:42 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:49:42 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:49:42 compute-0 nova_compute[256729]: </domain>
Nov 29 07:49:42 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.779 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Preparing to wait for external event network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.779 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.779 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.779 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.780 256736 DEBUG nova.virt.libvirt.vif [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:49:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1036557567',display_name='tempest-VolumesActionsTest-instance-1036557567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1036557567',id=2,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f8e71647ebd44d9095d7adf146571b99',ramdisk_id='',reservation_id='r-wgcrr5lk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1797033739',owner_user_name='tempest-VolumesActionsTest-1797033739-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:49:37Z,user_data=None,user_id='96ced15eddb64f1eaf8ec309ebc98411',uuid=ddc42af7-1541-4037-acd0-cdeb260a8cc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.781 256736 DEBUG nova.network.os_vif_util [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Converting VIF {"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.782 256736 DEBUG nova.network.os_vif_util [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:c2:51,bridge_name='br-int',has_traffic_filtering=True,id=f2578e55-dacb-433e-8662-059abbd23682,network=Network(cd2f5137-c41d-4aad-8f2e-12cb0d494722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2578e55-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.783 256736 DEBUG os_vif [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:c2:51,bridge_name='br-int',has_traffic_filtering=True,id=f2578e55-dacb-433e-8662-059abbd23682,network=Network(cd2f5137-c41d-4aad-8f2e-12cb0d494722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2578e55-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.784 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.785 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.785 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.792 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.793 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2578e55-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.793 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2578e55-da, col_values=(('external_ids', {'iface-id': 'f2578e55-dacb-433e-8662-059abbd23682', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:c2:51', 'vm-uuid': 'ddc42af7-1541-4037-acd0-cdeb260a8cc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.797 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:49:42 compute-0 NetworkManager[48962]: <info>  [1764402582.7980] manager: (tapf2578e55-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.805 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.807 256736 INFO os_vif [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:c2:51,bridge_name='br-int',has_traffic_filtering=True,id=f2578e55-dacb-433e-8662-059abbd23682,network=Network(cd2f5137-c41d-4aad-8f2e-12cb0d494722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2578e55-da')
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.876 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.876 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.876 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] No VIF found with MAC fa:16:3e:27:c2:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.877 256736 INFO nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Using config drive
Nov 29 07:49:42 compute-0 nova_compute[256729]: 2025-11-29 07:49:42.899 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:42 compute-0 ceph-mon[75050]: osdmap e165: 3 total, 3 up, 3 in
Nov 29 07:49:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1651434923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2148250904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:49:43 compute-0 podman[269561]: 2025-11-29 07:49:43.700019917 +0000 UTC m=+0.061877247 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 07:49:43 compute-0 podman[269560]: 2025-11-29 07:49:43.730678123 +0000 UTC m=+0.096210293 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:49:43 compute-0 podman[269556]: 2025-11-29 07:49:43.731412382 +0000 UTC m=+0.096957552 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:49:43 compute-0 hopeful_gould[269523]: {
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "osd_id": 2,
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "type": "bluestore"
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:     },
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "osd_id": 1,
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "type": "bluestore"
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:     },
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "osd_id": 0,
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:         "type": "bluestore"
Nov 29 07:49:43 compute-0 hopeful_gould[269523]:     }
Nov 29 07:49:43 compute-0 hopeful_gould[269523]: }
Nov 29 07:49:43 compute-0 systemd[1]: libpod-f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c.scope: Deactivated successfully.
Nov 29 07:49:43 compute-0 podman[269506]: 2025-11-29 07:49:43.833556087 +0000 UTC m=+1.365036384 container died f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:49:43 compute-0 systemd[1]: libpod-f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c.scope: Consumed 1.076s CPU time.
Nov 29 07:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-31a7a1e421bdc70d9ba7e4089e0d1c2d3a8bceda67e19cc93a854a518945445e-merged.mount: Deactivated successfully.
Nov 29 07:49:43 compute-0 podman[269506]: 2025-11-29 07:49:43.880320912 +0000 UTC m=+1.411801219 container remove f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:49:43 compute-0 systemd[1]: libpod-conmon-f8d1b260af514bb4b4b57699bc5b7725f9783c979b15e6e2218d08609534840c.scope: Deactivated successfully.
Nov 29 07:49:43 compute-0 sudo[269353]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:49:43 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:49:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:49:43 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:49:43 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 55107b78-468d-44d8-a187-574d25b6ed18 does not exist
Nov 29 07:49:43 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c96adcd2-d86c-4389-a535-494e24787f8a does not exist
Nov 29 07:49:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 29 07:49:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 07:49:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 29 07:49:43 compute-0 ceph-mon[75050]: pgmap v1284: 305 pgs: 305 active+clean; 80 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.3 MiB/s wr, 20 op/s
Nov 29 07:49:43 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:49:43 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:49:43 compute-0 sudo[269653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:49:43 compute-0 sudo[269653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:43 compute-0 sudo[269653]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.015 256736 INFO nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Creating config drive at /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/disk.config
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.020 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx6t0e4md execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:44 compute-0 sudo[269679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:49:44 compute-0 sudo[269679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.048 256736 DEBUG nova.network.neutron [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Updated VIF entry in instance network info cache for port f2578e55-dacb-433e-8662-059abbd23682. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.049 256736 DEBUG nova.network.neutron [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Updating instance_info_cache with network_info: [{"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:44 compute-0 sudo[269679]: pam_unix(sudo:session): session closed for user root
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.070 256736 DEBUG oslo_concurrency.lockutils [req-247be86b-ebc5-4b27-aff2-39820b6bbe59 req-bc3ca515-d0b4-4625-bfc3-03d4f41f9b91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.157 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx6t0e4md" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.188 256736 DEBUG nova.storage.rbd_utils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] rbd image ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.195 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/disk.config ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.353 256736 DEBUG oslo_concurrency.processutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/disk.config ddc42af7-1541-4037-acd0-cdeb260a8cc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.355 256736 INFO nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Deleting local config drive /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8/disk.config because it was imported into RBD.
Nov 29 07:49:44 compute-0 kernel: tapf2578e55-da: entered promiscuous mode
Nov 29 07:49:44 compute-0 NetworkManager[48962]: <info>  [1764402584.4048] manager: (tapf2578e55-da): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 07:49:44 compute-0 ovn_controller[153383]: 2025-11-29T07:49:44Z|00045|binding|INFO|Claiming lport f2578e55-dacb-433e-8662-059abbd23682 for this chassis.
Nov 29 07:49:44 compute-0 ovn_controller[153383]: 2025-11-29T07:49:44Z|00046|binding|INFO|f2578e55-dacb-433e-8662-059abbd23682: Claiming fa:16:3e:27:c2:51 10.100.0.11
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.405 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.408 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.410 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.424 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:c2:51 10.100.0.11'], port_security=['fa:16:3e:27:c2:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ddc42af7-1541-4037-acd0-cdeb260a8cc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8e71647ebd44d9095d7adf146571b99', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26857cc1-f609-4235-a949-1736183bbfe1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf82e7ee-d147-471f-9987-9f76a6284b72, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=f2578e55-dacb-433e-8662-059abbd23682) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.426 163655 INFO neutron.agent.ovn.metadata.agent [-] Port f2578e55-dacb-433e-8662-059abbd23682 in datapath cd2f5137-c41d-4aad-8f2e-12cb0d494722 bound to our chassis
Nov 29 07:49:44 compute-0 systemd-machined[217781]: New machine qemu-2-instance-00000002.
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.428 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd2f5137-c41d-4aad-8f2e-12cb0d494722
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.440 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[67147c84-a9f2-426b-b874-cc1ec7063f7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.441 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd2f5137-c1 in ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.443 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd2f5137-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.443 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[31de5274-51f3-4014-b496-80ae685d08c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.443 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a9324eb1-2575-4e58-8f27-4a5291d0f752]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 29 07:49:44 compute-0 systemd-udevd[269759]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.462 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[a52b0f62-23fb-4b30-8e8e-86e302b3e861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 NetworkManager[48962]: <info>  [1764402584.4701] device (tapf2578e55-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:49:44 compute-0 NetworkManager[48962]: <info>  [1764402584.4715] device (tapf2578e55-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.474 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 ovn_controller[153383]: 2025-11-29T07:49:44Z|00047|binding|INFO|Setting lport f2578e55-dacb-433e-8662-059abbd23682 ovn-installed in OVS
Nov 29 07:49:44 compute-0 ovn_controller[153383]: 2025-11-29T07:49:44Z|00048|binding|INFO|Setting lport f2578e55-dacb-433e-8662-059abbd23682 up in Southbound
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.480 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.486 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f73599-2b38-45ea-aa8b-1fd616be5fba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.518 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[aadec08d-8cd3-4760-94f6-87007e64f9df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.523 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9cee0586-ce1b-463f-9db8-f015b49f1f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 NetworkManager[48962]: <info>  [1764402584.5248] manager: (tapcd2f5137-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.556 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[48762ae9-73b3-46d6-ac87-d683e43e1ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.559 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[6a613280-3863-4071-86aa-56da557c00e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 NetworkManager[48962]: <info>  [1764402584.5816] device (tapcd2f5137-c0): carrier: link connected
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.582 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[883d251e-6f19-4bf7-95b5-1027b753e149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.598 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b235f15f-9c29-4eea-8a1a-0927f64d6634]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd2f5137-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5d:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489964, 'reachable_time': 23305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269791, 'error': None, 'target': 'ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.615 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dd035617-454d-4edd-b443-3a7239d75e56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:5d2c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489964, 'tstamp': 489964}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269793, 'error': None, 'target': 'ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.633 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[64b77b30-748f-4dc2-9012-d9b12122c2c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd2f5137-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5d:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489964, 'reachable_time': 23305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269794, 'error': None, 'target': 'ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.660 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[453b42c0-88eb-429f-b004-55b372dda833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.705 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[50a778b9-598b-44f5-b08d-71259e2fe63d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.706 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd2f5137-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.706 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.707 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd2f5137-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.708 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 NetworkManager[48962]: <info>  [1764402584.7094] manager: (tapcd2f5137-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 07:49:44 compute-0 kernel: tapcd2f5137-c0: entered promiscuous mode
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.712 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.713 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd2f5137-c0, col_values=(('external_ids', {'iface-id': '5acbb3be-fc60-4bbd-a11c-23b8823f80e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.713 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 ovn_controller[153383]: 2025-11-29T07:49:44Z|00049|binding|INFO|Releasing lport 5acbb3be-fc60-4bbd-a11c-23b8823f80e1 from this chassis (sb_readonly=0)
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.727 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.728 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd2f5137-c41d-4aad-8f2e-12cb0d494722.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd2f5137-c41d-4aad-8f2e-12cb0d494722.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.728 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[77cd477b-5e63-4e42-9f8c-15c995f34bfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.729 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-cd2f5137-c41d-4aad-8f2e-12cb0d494722
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/cd2f5137-c41d-4aad-8f2e-12cb0d494722.pid.haproxy
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID cd2f5137-c41d-4aad-8f2e-12cb0d494722
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:49:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:44.730 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'env', 'PROCESS_TAG=haproxy-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd2f5137-c41d-4aad-8f2e-12cb0d494722.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.874 256736 DEBUG nova.compute.manager [req-5b99ffdd-6fe9-4db0-8542-8857f4fc9339 req-032ac24f-206e-4c62-95bb-b7c81d7e24ac ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received event network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.875 256736 DEBUG oslo_concurrency.lockutils [req-5b99ffdd-6fe9-4db0-8542-8857f4fc9339 req-032ac24f-206e-4c62-95bb-b7c81d7e24ac ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.876 256736 DEBUG oslo_concurrency.lockutils [req-5b99ffdd-6fe9-4db0-8542-8857f4fc9339 req-032ac24f-206e-4c62-95bb-b7c81d7e24ac ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.878 256736 DEBUG oslo_concurrency.lockutils [req-5b99ffdd-6fe9-4db0-8542-8857f4fc9339 req-032ac24f-206e-4c62-95bb-b7c81d7e24ac ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.880 256736 DEBUG nova.compute.manager [req-5b99ffdd-6fe9-4db0-8542-8857f4fc9339 req-032ac24f-206e-4c62-95bb-b7c81d7e24ac ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Processing event network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.915 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402584.9149332, ddc42af7-1541-4037-acd0-cdeb260a8cc8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.916 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] VM Started (Lifecycle Event)
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.919 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.923 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.926 256736 INFO nova.virt.libvirt.driver [-] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Instance spawned successfully.
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.927 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.949 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:44 compute-0 ceph-mon[75050]: osdmap e166: 3 total, 3 up, 3 in
Nov 29 07:49:44 compute-0 ceph-mon[75050]: pgmap v1286: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.5 MiB/s wr, 104 op/s
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.959 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.962 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.963 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.964 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.964 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.965 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.965 256736 DEBUG nova.virt.libvirt.driver [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.995 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.995 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402584.9160361, ddc42af7-1541-4037-acd0-cdeb260a8cc8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:44 compute-0 nova_compute[256729]: 2025-11-29 07:49:44.996 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] VM Paused (Lifecycle Event)
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.029 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.034 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402584.9215949, ddc42af7-1541-4037-acd0-cdeb260a8cc8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.034 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] VM Resumed (Lifecycle Event)
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.037 256736 INFO nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Took 7.25 seconds to spawn the instance on the hypervisor.
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.038 256736 DEBUG nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.065 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.068 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.095 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.107 256736 INFO nova.compute.manager [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Took 8.68 seconds to build instance.
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.124 256736 DEBUG oslo_concurrency.lockutils [None req-9b3a8b20-51f1-4ff0-b941-434dd554b7ab 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:45 compute-0 podman[269868]: 2025-11-29 07:49:45.14462396 +0000 UTC m=+0.064245412 container create 0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 07:49:45 compute-0 nova_compute[256729]: 2025-11-29 07:49:45.180 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:45 compute-0 systemd[1]: Started libpod-conmon-0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d.scope.
Nov 29 07:49:45 compute-0 podman[269868]: 2025-11-29 07:49:45.11238539 +0000 UTC m=+0.032006872 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:49:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97e4c1be194c1995d3188f356ed768325efd853365872e74d16fb0420ac3ae52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:49:45 compute-0 podman[269868]: 2025-11-29 07:49:45.242934089 +0000 UTC m=+0.162555541 container init 0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 07:49:45 compute-0 podman[269868]: 2025-11-29 07:49:45.253490957 +0000 UTC m=+0.173112399 container start 0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:49:45 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [NOTICE]   (269887) : New worker (269889) forked
Nov 29 07:49:45 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [NOTICE]   (269887) : Loading success.
Nov 29 07:49:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 29 07:49:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 07:49:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 29 07:49:46 compute-0 nova_compute[256729]: 2025-11-29 07:49:46.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:46 compute-0 nova_compute[256729]: 2025-11-29 07:49:46.599 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 486 KiB/s rd, 3.6 MiB/s wr, 163 op/s
Nov 29 07:49:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2072197467' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2072197467' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:47 compute-0 ceph-mon[75050]: osdmap e167: 3 total, 3 up, 3 in
Nov 29 07:49:47 compute-0 ceph-mon[75050]: pgmap v1288: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 486 KiB/s rd, 3.6 MiB/s wr, 163 op/s
Nov 29 07:49:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2072197467' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2072197467' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.235 256736 DEBUG nova.compute.manager [req-e81118f2-ff45-44a2-bd7d-d7e688148ab7 req-0e34abfe-2682-485b-9698-135c63aed72d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received event network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.235 256736 DEBUG oslo_concurrency.lockutils [req-e81118f2-ff45-44a2-bd7d-d7e688148ab7 req-0e34abfe-2682-485b-9698-135c63aed72d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.238 256736 DEBUG oslo_concurrency.lockutils [req-e81118f2-ff45-44a2-bd7d-d7e688148ab7 req-0e34abfe-2682-485b-9698-135c63aed72d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.238 256736 DEBUG oslo_concurrency.lockutils [req-e81118f2-ff45-44a2-bd7d-d7e688148ab7 req-0e34abfe-2682-485b-9698-135c63aed72d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.238 256736 DEBUG nova.compute.manager [req-e81118f2-ff45-44a2-bd7d-d7e688148ab7 req-0e34abfe-2682-485b-9698-135c63aed72d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] No waiting events found dispatching network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.239 256736 WARNING nova.compute.manager [req-e81118f2-ff45-44a2-bd7d-d7e688148ab7 req-0e34abfe-2682-485b-9698-135c63aed72d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received unexpected event network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 for instance with vm_state active and task_state None.
Nov 29 07:49:47 compute-0 nova_compute[256729]: 2025-11-29 07:49:47.797 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:48 compute-0 sshd-session[269532]: Connection closed by authenticating user root 143.14.121.41 port 39940 [preauth]
Nov 29 07:49:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 504 KiB/s wr, 269 op/s
Nov 29 07:49:49 compute-0 nova_compute[256729]: 2025-11-29 07:49:49.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:49 compute-0 nova_compute[256729]: 2025-11-29 07:49:49.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:49 compute-0 nova_compute[256729]: 2025-11-29 07:49:49.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:49:49 compute-0 ceph-mon[75050]: pgmap v1289: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 504 KiB/s wr, 269 op/s
Nov 29 07:49:50 compute-0 nova_compute[256729]: 2025-11-29 07:49:50.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 425 KiB/s wr, 227 op/s
Nov 29 07:49:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411846065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411846065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.600 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:51 compute-0 ceph-mon[75050]: pgmap v1290: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 425 KiB/s wr, 227 op/s
Nov 29 07:49:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3411846065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3411846065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.779 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.780 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.780 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.781 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddc42af7-1541-4037-acd0-cdeb260a8cc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.984 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.984 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.985 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.985 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.986 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.988 256736 INFO nova.compute.manager [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Terminating instance
Nov 29 07:49:51 compute-0 nova_compute[256729]: 2025-11-29 07:49:51.990 256736 DEBUG nova.compute.manager [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:49:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 07:49:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 29 07:49:52 compute-0 kernel: tapf2578e55-da (unregistering): left promiscuous mode
Nov 29 07:49:52 compute-0 NetworkManager[48962]: <info>  [1764402592.2017] device (tapf2578e55-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:49:52 compute-0 ovn_controller[153383]: 2025-11-29T07:49:52Z|00050|binding|INFO|Releasing lport f2578e55-dacb-433e-8662-059abbd23682 from this chassis (sb_readonly=0)
Nov 29 07:49:52 compute-0 ovn_controller[153383]: 2025-11-29T07:49:52Z|00051|binding|INFO|Setting lport f2578e55-dacb-433e-8662-059abbd23682 down in Southbound
Nov 29 07:49:52 compute-0 ovn_controller[153383]: 2025-11-29T07:49:52Z|00052|binding|INFO|Removing iface tapf2578e55-da ovn-installed in OVS
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.214 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.221 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:c2:51 10.100.0.11'], port_security=['fa:16:3e:27:c2:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ddc42af7-1541-4037-acd0-cdeb260a8cc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f8e71647ebd44d9095d7adf146571b99', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26857cc1-f609-4235-a949-1736183bbfe1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf82e7ee-d147-471f-9987-9f76a6284b72, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=f2578e55-dacb-433e-8662-059abbd23682) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.223 163655 INFO neutron.agent.ovn.metadata.agent [-] Port f2578e55-dacb-433e-8662-059abbd23682 in datapath cd2f5137-c41d-4aad-8f2e-12cb0d494722 unbound from our chassis
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.224 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd2f5137-c41d-4aad-8f2e-12cb0d494722, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.225 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2d1617-1753-4ba2-a2c3-e41676f6df3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.225 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722 namespace which is not needed anymore
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.254 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:52 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 29 07:49:52 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7.740s CPU time.
Nov 29 07:49:52 compute-0 systemd-machined[217781]: Machine qemu-2-instance-00000002 terminated.
Nov 29 07:49:52 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [NOTICE]   (269887) : haproxy version is 2.8.14-c23fe91
Nov 29 07:49:52 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [NOTICE]   (269887) : path to executable is /usr/sbin/haproxy
Nov 29 07:49:52 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [WARNING]  (269887) : Exiting Master process...
Nov 29 07:49:52 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [WARNING]  (269887) : Exiting Master process...
Nov 29 07:49:52 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [ALERT]    (269887) : Current worker (269889) exited with code 143 (Terminated)
Nov 29 07:49:52 compute-0 neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722[269883]: [WARNING]  (269887) : All workers exited. Exiting... (0)
Nov 29 07:49:52 compute-0 systemd[1]: libpod-0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d.scope: Deactivated successfully.
Nov 29 07:49:52 compute-0 podman[269923]: 2025-11-29 07:49:52.408213207 +0000 UTC m=+0.063872252 container died 0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.424 256736 INFO nova.virt.libvirt.driver [-] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Instance destroyed successfully.
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.424 256736 DEBUG nova.objects.instance [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lazy-loading 'resources' on Instance uuid ddc42af7-1541-4037-acd0-cdeb260a8cc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d-userdata-shm.mount: Deactivated successfully.
Nov 29 07:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-97e4c1be194c1995d3188f356ed768325efd853365872e74d16fb0420ac3ae52-merged.mount: Deactivated successfully.
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.451 256736 DEBUG nova.virt.libvirt.vif [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1036557567',display_name='tempest-VolumesActionsTest-instance-1036557567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1036557567',id=2,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:49:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f8e71647ebd44d9095d7adf146571b99',ramdisk_id='',reservation_id='r-wgcrr5lk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1797033739',owner_user_name='tempest-VolumesActionsTest-1797033739-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:49:45Z,user_data=None,user_id='96ced15eddb64f1eaf8ec309ebc98411',uuid=ddc42af7-1541-4037-acd0-cdeb260a8cc8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:49:52 compute-0 podman[269923]: 2025-11-29 07:49:52.452157444 +0000 UTC m=+0.107816469 container cleanup 0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.452 256736 DEBUG nova.network.os_vif_util [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Converting VIF {"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.453 256736 DEBUG nova.network.os_vif_util [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:c2:51,bridge_name='br-int',has_traffic_filtering=True,id=f2578e55-dacb-433e-8662-059abbd23682,network=Network(cd2f5137-c41d-4aad-8f2e-12cb0d494722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2578e55-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.454 256736 DEBUG os_vif [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:c2:51,bridge_name='br-int',has_traffic_filtering=True,id=f2578e55-dacb-433e-8662-059abbd23682,network=Network(cd2f5137-c41d-4aad-8f2e-12cb0d494722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2578e55-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.458 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.459 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2578e55-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:52 compute-0 systemd[1]: libpod-conmon-0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d.scope: Deactivated successfully.
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.462 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.463 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.465 256736 INFO os_vif [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:c2:51,bridge_name='br-int',has_traffic_filtering=True,id=f2578e55-dacb-433e-8662-059abbd23682,network=Network(cd2f5137-c41d-4aad-8f2e-12cb0d494722),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2578e55-da')
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.482 256736 DEBUG nova.compute.manager [req-2af5261b-eadc-4c78-818a-3547bb2d48ff req-3ebd1cca-a7eb-4d9a-a659-c64908721cd6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received event network-vif-unplugged-f2578e55-dacb-433e-8662-059abbd23682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.482 256736 DEBUG oslo_concurrency.lockutils [req-2af5261b-eadc-4c78-818a-3547bb2d48ff req-3ebd1cca-a7eb-4d9a-a659-c64908721cd6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.482 256736 DEBUG oslo_concurrency.lockutils [req-2af5261b-eadc-4c78-818a-3547bb2d48ff req-3ebd1cca-a7eb-4d9a-a659-c64908721cd6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.483 256736 DEBUG oslo_concurrency.lockutils [req-2af5261b-eadc-4c78-818a-3547bb2d48ff req-3ebd1cca-a7eb-4d9a-a659-c64908721cd6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.483 256736 DEBUG nova.compute.manager [req-2af5261b-eadc-4c78-818a-3547bb2d48ff req-3ebd1cca-a7eb-4d9a-a659-c64908721cd6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] No waiting events found dispatching network-vif-unplugged-f2578e55-dacb-433e-8662-059abbd23682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.483 256736 DEBUG nova.compute.manager [req-2af5261b-eadc-4c78-818a-3547bb2d48ff req-3ebd1cca-a7eb-4d9a-a659-c64908721cd6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received event network-vif-unplugged-f2578e55-dacb-433e-8662-059abbd23682 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:49:52 compute-0 podman[269961]: 2025-11-29 07:49:52.525108023 +0000 UTC m=+0.048712769 container remove 0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.530 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6e705e71-5cbb-46d4-9da1-6aee634f217c]: (4, ('Sat Nov 29 07:49:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722 (0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d)\n0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d\nSat Nov 29 07:49:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722 (0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d)\n0736c33096e8257946d98e282857bd20083e1c5af0fe8ebe9f28d2ef6a30ca4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.532 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[841b6f77-9f2b-4832-af0a-888056b39ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.533 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd2f5137-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.534 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:52 compute-0 kernel: tapcd2f5137-c0: left promiscuous mode
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.558 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.561 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c05c0d90-a1c8-4f80-92ea-788acd682dba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.575 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[72df8a9b-236d-494e-bb0b-127220c1e5ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.576 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cea34f4c-7a34-40d0-ab42-fdb12276a922]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.590 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[054b35e0-b0ff-4057-bab1-de6bd476ce4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489957, 'reachable_time': 23001, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269994, 'error': None, 'target': 'ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.592 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd2f5137-c41d-4aad-8f2e-12cb0d494722 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:49:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:52.593 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[fbdbe1da-d84f-4605-8435-55a366851ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:49:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dcd2f5137\x2dc41d\x2d4aad\x2d8f2e\x2d12cb0d494722.mount: Deactivated successfully.
Nov 29 07:49:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 185 op/s
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.813 256736 INFO nova.virt.libvirt.driver [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Deleting instance files /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8_del
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.813 256736 INFO nova.virt.libvirt.driver [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Deletion of /var/lib/nova/instances/ddc42af7-1541-4037-acd0-cdeb260a8cc8_del complete
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.883 256736 INFO nova.compute.manager [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Took 0.89 seconds to destroy the instance on the hypervisor.
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.885 256736 DEBUG oslo.service.loopingcall [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.885 256736 DEBUG nova.compute.manager [-] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:49:52 compute-0 nova_compute[256729]: 2025-11-29 07:49:52.886 256736 DEBUG nova.network.neutron [-] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:49:53 compute-0 ceph-mon[75050]: osdmap e168: 3 total, 3 up, 3 in
Nov 29 07:49:53 compute-0 ceph-mon[75050]: pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 185 op/s
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.319 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Updating instance_info_cache with network_info: [{"id": "f2578e55-dacb-433e-8662-059abbd23682", "address": "fa:16:3e:27:c2:51", "network": {"id": "cd2f5137-c41d-4aad-8f2e-12cb0d494722", "bridge": "br-int", "label": "tempest-VolumesActionsTest-368296531-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f8e71647ebd44d9095d7adf146571b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2578e55-da", "ovs_interfaceid": "f2578e55-dacb-433e-8662-059abbd23682", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.455 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-ddc42af7-1541-4037-acd0-cdeb260a8cc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.455 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.456 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.457 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.488 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.489 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.489 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.490 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.490 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.566 256736 DEBUG nova.network.neutron [-] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.589 256736 INFO nova.compute.manager [-] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Took 0.70 seconds to deallocate network for instance.
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.651 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.651 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.719 256736 DEBUG nova.compute.manager [req-f2732a32-83c9-4192-adbc-6ccec3d5b20c req-00091105-901a-48bb-a963-5bd1ad0c3e05 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received event network-vif-deleted-f2578e55-dacb-433e-8662-059abbd23682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.726 256736 DEBUG oslo_concurrency.processutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465290629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465290629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384055840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:53 compute-0 nova_compute[256729]: 2025-11-29 07:49:53.969 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/465290629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/465290629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/384055840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.149 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.151 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4677MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.151 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516649485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:54 compute-0 sshd-session[269898]: Connection closed by authenticating user root 143.14.121.41 port 39942 [preauth]
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.220 256736 DEBUG oslo_concurrency.processutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.227 256736 DEBUG nova.compute.provider_tree [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.280 256736 DEBUG nova.scheduler.client.report [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.502 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.507 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.550 256736 INFO nova.scheduler.client.report [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Deleted allocations for instance ddc42af7-1541-4037-acd0-cdeb260a8cc8
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.589 256736 DEBUG nova.compute.manager [req-17cf219c-ecbc-448b-b099-bc6f0af0dd5f req-fd879f41-c297-4de9-b154-d0bf6bed7537 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received event network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.590 256736 DEBUG oslo_concurrency.lockutils [req-17cf219c-ecbc-448b-b099-bc6f0af0dd5f req-fd879f41-c297-4de9-b154-d0bf6bed7537 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.590 256736 DEBUG oslo_concurrency.lockutils [req-17cf219c-ecbc-448b-b099-bc6f0af0dd5f req-fd879f41-c297-4de9-b154-d0bf6bed7537 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.590 256736 DEBUG oslo_concurrency.lockutils [req-17cf219c-ecbc-448b-b099-bc6f0af0dd5f req-fd879f41-c297-4de9-b154-d0bf6bed7537 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.591 256736 DEBUG nova.compute.manager [req-17cf219c-ecbc-448b-b099-bc6f0af0dd5f req-fd879f41-c297-4de9-b154-d0bf6bed7537 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] No waiting events found dispatching network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.591 256736 WARNING nova.compute.manager [req-17cf219c-ecbc-448b-b099-bc6f0af0dd5f req-fd879f41-c297-4de9-b154-d0bf6bed7537 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Received unexpected event network-vif-plugged-f2578e55-dacb-433e-8662-059abbd23682 for instance with vm_state deleted and task_state None.
Nov 29 07:49:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 80 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 KiB/s wr, 153 op/s
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.617 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.618 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.630 256736 DEBUG oslo_concurrency.lockutils [None req-1dc4ac5c-221f-4333-b66d-6b776f1a532c 96ced15eddb64f1eaf8ec309ebc98411 f8e71647ebd44d9095d7adf146571b99 - - default default] Lock "ddc42af7-1541-4037-acd0-cdeb260a8cc8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:54 compute-0 nova_compute[256729]: 2025-11-29 07:49:54.639 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:49:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1791734343' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1791734343' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:49:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613670613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3516649485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75050]: pgmap v1293: 305 pgs: 305 active+clean; 80 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 KiB/s wr, 153 op/s
Nov 29 07:49:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1791734343' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1791734343' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:55 compute-0 nova_compute[256729]: 2025-11-29 07:49:55.109 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:49:55 compute-0 nova_compute[256729]: 2025-11-29 07:49:55.115 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:49:55 compute-0 nova_compute[256729]: 2025-11-29 07:49:55.136 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:49:55 compute-0 nova_compute[256729]: 2025-11-29 07:49:55.168 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:49:55 compute-0 nova_compute[256729]: 2025-11-29 07:49:55.169 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:49:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3613670613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:49:56 compute-0 nova_compute[256729]: 2025-11-29 07:49:56.601 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 69 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 KiB/s wr, 137 op/s
Nov 29 07:49:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:49:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 29 07:49:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 07:49:57 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 29 07:49:57 compute-0 ceph-mon[75050]: pgmap v1294: 305 pgs: 305 active+clean; 69 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 KiB/s wr, 137 op/s
Nov 29 07:49:57 compute-0 nova_compute[256729]: 2025-11-29 07:49:57.466 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:49:57 compute-0 sshd-session[270041]: Connection closed by authenticating user root 143.14.121.41 port 56542 [preauth]
Nov 29 07:49:58 compute-0 ceph-mon[75050]: osdmap e169: 3 total, 3 up, 3 in
Nov 29 07:49:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 KiB/s wr, 87 op/s
Nov 29 07:49:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:49:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298208123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:49:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298208123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:59 compute-0 ceph-mon[75050]: pgmap v1296: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.6 KiB/s wr, 87 op/s
Nov 29 07:49:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3298208123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:49:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3298208123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:49:59 compute-0 sshd-session[270066]: Invalid user mysql from 143.14.121.41 port 56558
Nov 29 07:49:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:59.768 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:49:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:59.769 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:49:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:49:59.769 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:00 compute-0 sshd-session[270066]: Connection closed by invalid user mysql 143.14.121.41 port 56558 [preauth]
Nov 29 07:50:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.8 KiB/s wr, 63 op/s
Nov 29 07:50:01 compute-0 nova_compute[256729]: 2025-11-29 07:50:01.603 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:01 compute-0 ceph-mon[75050]: pgmap v1297: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.8 KiB/s wr, 63 op/s
Nov 29 07:50:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:02 compute-0 nova_compute[256729]: 2025-11-29 07:50:02.470 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 KiB/s wr, 65 op/s
Nov 29 07:50:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/284706827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/284706827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/284706827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/284706827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:02 compute-0 sshd-session[270068]: Invalid user max from 143.14.121.41 port 56566
Nov 29 07:50:03 compute-0 sshd-session[270068]: Connection closed by invalid user max 143.14.121.41 port 56566 [preauth]
Nov 29 07:50:03 compute-0 ceph-mon[75050]: pgmap v1298: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 KiB/s wr, 65 op/s
Nov 29 07:50:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.4 KiB/s wr, 62 op/s
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:50:05
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data']
Nov 29 07:50:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:50:05 compute-0 ceph-mon[75050]: pgmap v1299: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.4 KiB/s wr, 62 op/s
Nov 29 07:50:06 compute-0 sshd-session[270070]: Invalid user master from 143.14.121.41 port 57308
Nov 29 07:50:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3942759853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3942759853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 sshd-session[270070]: Connection closed by invalid user master 143.14.121.41 port 57308 [preauth]
Nov 29 07:50:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1248793982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1248793982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 nova_compute[256729]: 2025-11-29 07:50:06.605 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 51 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 600 KiB/s wr, 62 op/s
Nov 29 07:50:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3942759853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3942759853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1248793982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1248793982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:50:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:50:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:07 compute-0 nova_compute[256729]: 2025-11-29 07:50:07.422 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402592.4215364, ddc42af7-1541-4037-acd0-cdeb260a8cc8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:07 compute-0 nova_compute[256729]: 2025-11-29 07:50:07.423 256736 INFO nova.compute.manager [-] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] VM Stopped (Lifecycle Event)
Nov 29 07:50:07 compute-0 nova_compute[256729]: 2025-11-29 07:50:07.448 256736 DEBUG nova.compute.manager [None req-dd5d2e4f-83d8-4330-9440-bd9859074733 - - - - - -] [instance: ddc42af7-1541-4037-acd0-cdeb260a8cc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:07 compute-0 nova_compute[256729]: 2025-11-29 07:50:07.517 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:07.576 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:50:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:07.578 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:50:07 compute-0 nova_compute[256729]: 2025-11-29 07:50:07.577 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:07 compute-0 ceph-mon[75050]: pgmap v1300: 305 pgs: 305 active+clean; 51 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 600 KiB/s wr, 62 op/s
Nov 29 07:50:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1216779657' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1216779657' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 51 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 117 op/s
Nov 29 07:50:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1216779657' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1216779657' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:08 compute-0 sshd-session[270072]: Invalid user ftpadmin from 143.14.121.41 port 57316
Nov 29 07:50:09 compute-0 sshd-session[270072]: Connection closed by invalid user ftpadmin 143.14.121.41 port 57316 [preauth]
Nov 29 07:50:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:09.581 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:09 compute-0 ceph-mon[75050]: pgmap v1301: 305 pgs: 305 active+clean; 51 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 117 op/s
Nov 29 07:50:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3793585046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3793585046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 51 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 751 KiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 07:50:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3793585046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3793585046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:10 compute-0 ceph-mon[75050]: pgmap v1302: 305 pgs: 305 active+clean; 51 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 751 KiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 07:50:11 compute-0 nova_compute[256729]: 2025-11-29 07:50:11.607 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:12 compute-0 sshd-session[270074]: Invalid user dspace from 143.14.121.41 port 57326
Nov 29 07:50:12 compute-0 nova_compute[256729]: 2025-11-29 07:50:12.520 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 760 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 29 07:50:12 compute-0 sshd-session[270074]: Connection closed by invalid user dspace 143.14.121.41 port 57326 [preauth]
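The sshd-session lines from 143.14.121.41 (ftpadmin, dspace, then admin and repeated root attempts below, each disconnecting at preauth) are a low-rate username-guessing sweep. A small, self-contained parser for tallying such probes per source address in a saved journal, keyed on the exact message format above:

    import re
    from collections import Counter

    PROBE = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+")

    def count_probes(lines):
        # Tally sshd "Invalid user" pre-auth attempts per source IP.
        hits = Counter()
        for line in lines:
            m = PROBE.search(line)
            if m:
                hits[m.group(2)] += 1
        return hits

    sample = [
        "Nov 29 07:50:08 compute-0 sshd-session[270072]: "
        "Invalid user ftpadmin from 143.14.121.41 port 57316",
    ]
    print(count_probes(sample))  # Counter({'143.14.121.41': 1})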
Nov 29 07:50:13 compute-0 ceph-mon[75050]: pgmap v1303: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 760 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 29 07:50:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413531710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2413531710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 29 07:50:14 compute-0 podman[270080]: 2025-11-29 07:50:14.696654889 +0000 UTC m=+0.056503171 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 07:50:14 compute-0 podman[270079]: 2025-11-29 07:50:14.733207585 +0000 UTC m=+0.096904242 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 07:50:14 compute-0 podman[270078]: 2025-11-29 07:50:14.744683008 +0000 UTC m=+0.105941639 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
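The three podman lines above are periodic healthcheck events for ovn_metadata_agent, multipathd and ovn_controller, all reporting health_status=healthy with a failing streak of 0. The same status can be read back with "podman inspect"; a sketch follows, with the caveat that the JSON key moved between podman releases (State.Health versus the older State.Healthcheck), so both are tried:

    import json, subprocess

    def health(name):
        data = json.loads(subprocess.check_output(["podman", "inspect", name]))[0]
        state = data.get("State", {})
        h = state.get("Health") or state.get("Healthcheck") or {}
        return h.get("Status", "unknown")

    for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
        print(name, health(name))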
Nov 29 07:50:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2413531710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2413531710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:50:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
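The pg_autoscaler block above is reproducible arithmetic: each pool's raw "pg target" is its share of used space times its bias times the cluster-wide PG budget. With 3 OSDs (the osdmap lines below report "3 total, 3 up, 3 in") and the default mon_target_pg_per_osd of 100, that budget is 300; the default of 100 is an assumption, but the multiplication matches the logged values before they are quantized to a power of two:

    # Raw pg target = usage_ratio * bias * (OSDs * target PGs per OSD).
    TARGET_PGS = 3 * 100  # assumed: default mon_target_pg_per_osd=100

    for pool, usage, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, usage * bias * TARGET_PGS)
    # -> ~0.0021557249951 and ~0.00061047079508, matching the
    #    "pg target" values logged above for those two pools.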
Nov 29 07:50:16 compute-0 sshd-session[270076]: Invalid user admin from 143.14.121.41 port 38464
Nov 29 07:50:16 compute-0 sshd-session[270076]: Connection closed by invalid user admin 143.14.121.41 port 38464 [preauth]
Nov 29 07:50:16 compute-0 nova_compute[256729]: 2025-11-29 07:50:16.611 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 29 07:50:16 compute-0 nova_compute[256729]: 2025-11-29 07:50:16.812 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:17 compute-0 ceph-mon[75050]: pgmap v1304: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 29 07:50:17 compute-0 nova_compute[256729]: 2025-11-29 07:50:17.522 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:18 compute-0 ceph-mon[75050]: pgmap v1305: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 104 op/s
Nov 29 07:50:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/734942247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/734942247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 1.3 MiB/s wr, 114 op/s
Nov 29 07:50:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1073124063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1073124063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/734942247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/734942247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:19 compute-0 ceph-mon[75050]: pgmap v1306: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 1.3 MiB/s wr, 114 op/s
Nov 29 07:50:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Nov 29 07:50:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1073124063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1073124063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:21 compute-0 nova_compute[256729]: 2025-11-29 07:50:21.614 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:21 compute-0 ceph-mon[75050]: pgmap v1307: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Nov 29 07:50:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:22 compute-0 nova_compute[256729]: 2025-11-29 07:50:22.524 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Nov 29 07:50:23 compute-0 sshd-session[270142]: Connection closed by authenticating user root 143.14.121.41 port 38468 [preauth]
Nov 29 07:50:23 compute-0 ceph-mon[75050]: pgmap v1308: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Nov 29 07:50:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 45 op/s
Nov 29 07:50:26 compute-0 ceph-mon[75050]: pgmap v1309: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 45 op/s
Nov 29 07:50:26 compute-0 nova_compute[256729]: 2025-11-29 07:50:26.617 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 42 op/s
Nov 29 07:50:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:27 compute-0 ceph-mon[75050]: pgmap v1310: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 42 op/s
Nov 29 07:50:27 compute-0 nova_compute[256729]: 2025-11-29 07:50:27.527 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 29 07:50:29 compute-0 sshd-session[270144]: Connection closed by authenticating user root 143.14.121.41 port 49630 [preauth]
Nov 29 07:50:29 compute-0 ceph-mon[75050]: pgmap v1311: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 29 07:50:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 341 B/s wr, 15 op/s
Nov 29 07:50:31 compute-0 nova_compute[256729]: 2025-11-29 07:50:31.619 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:31 compute-0 ceph-mon[75050]: pgmap v1312: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 341 B/s wr, 15 op/s
Nov 29 07:50:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:32 compute-0 nova_compute[256729]: 2025-11-29 07:50:32.529 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 511 B/s wr, 16 op/s
Nov 29 07:50:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 29 07:50:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 07:50:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
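The osdmap epoch ticks from e169 toward e174 over these seconds as do_prune and pool changes commit new maps, while the summary stays "3 total, 3 up, 3 in". The same summary is available programmatically via "ceph osd stat"; the JSON field names used here (epoch, num_osds, num_up_osds, num_in_osds) are my assumption about its output layout:

    import json, subprocess

    s = json.loads(subprocess.check_output(
        ["ceph", "osd", "stat", "--format", "json"]))
    print(s["epoch"], s["num_osds"], s["num_up_osds"], s["num_in_osds"])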
Nov 29 07:50:33 compute-0 ceph-mon[75050]: pgmap v1313: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 511 B/s wr, 16 op/s
Nov 29 07:50:33 compute-0 sshd-session[270146]: Connection closed by authenticating user root 143.14.121.41 port 49642 [preauth]
Nov 29 07:50:34 compute-0 ceph-mon[75050]: osdmap e170: 3 total, 3 up, 3 in
Nov 29 07:50:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 409 B/s wr, 1 op/s
Nov 29 07:50:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 29 07:50:35 compute-0 ceph-mon[75050]: pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 409 B/s wr, 1 op/s
Nov 29 07:50:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 07:50:35 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 29 07:50:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:50:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:50:36 compute-0 ceph-mon[75050]: osdmap e171: 3 total, 3 up, 3 in
Nov 29 07:50:36 compute-0 nova_compute[256729]: 2025-11-29 07:50:36.621 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Nov 29 07:50:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 29 07:50:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 07:50:37 compute-0 ceph-mon[75050]: pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Nov 29 07:50:37 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 29 07:50:37 compute-0 nova_compute[256729]: 2025-11-29 07:50:37.531 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:37 compute-0 sshd-session[270148]: Connection closed by authenticating user root 143.14.121.41 port 34206 [preauth]
Nov 29 07:50:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 29 07:50:38 compute-0 ceph-mon[75050]: osdmap e172: 3 total, 3 up, 3 in
Nov 29 07:50:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 07:50:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 29 07:50:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 41 op/s
Nov 29 07:50:39 compute-0 ceph-mon[75050]: osdmap e173: 3 total, 3 up, 3 in
Nov 29 07:50:39 compute-0 ceph-mon[75050]: pgmap v1320: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 41 op/s
Nov 29 07:50:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/772700945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/772700945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/772700945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/772700945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 KiB/s wr, 35 op/s
Nov 29 07:50:41 compute-0 ceph-mon[75050]: pgmap v1321: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 KiB/s wr, 35 op/s
Nov 29 07:50:41 compute-0 nova_compute[256729]: 2025-11-29 07:50:41.623 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-0 sshd-session[270150]: Connection closed by authenticating user root 143.14.121.41 port 34212 [preauth]
Nov 29 07:50:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:42 compute-0 nova_compute[256729]: 2025-11-29 07:50:42.533 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 29 07:50:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 07:50:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 KiB/s wr, 77 op/s
Nov 29 07:50:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 29 07:50:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/994508952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/994508952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1986603414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1986603414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:43 compute-0 ceph-mon[75050]: pgmap v1322: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 5.3 KiB/s wr, 77 op/s
Nov 29 07:50:43 compute-0 ceph-mon[75050]: osdmap e174: 3 total, 3 up, 3 in
Nov 29 07:50:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/994508952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/994508952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1986603414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1986603414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2297887712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2297887712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:44 compute-0 sudo[270155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:44 compute-0 sudo[270155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:44 compute-0 sudo[270155]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:44 compute-0 sudo[270180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:50:44 compute-0 sudo[270180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:44 compute-0 sudo[270180]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:44 compute-0 sudo[270205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:44 compute-0 sudo[270205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:44 compute-0 sudo[270205]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:44 compute-0 sudo[270230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:50:44 compute-0 sudo[270230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.4 KiB/s wr, 92 op/s
Nov 29 07:50:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2297887712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2297887712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:44 compute-0 sudo[270230]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:50:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:50:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:50:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:50:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:50:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:50:44 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a6865ffc-6f98-4025-a663-fb55afa010e5 does not exist
Nov 29 07:50:44 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 825a6958-3f9a-4b3c-ab90-6ba71768fcc6 does not exist
Nov 29 07:50:44 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 6231dadd-f9fb-4abe-bbb4-d55c30198271 does not exist
Nov 29 07:50:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:50:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:50:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:50:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:50:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:50:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:50:44 compute-0 sudo[270286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:44 compute-0 sudo[270286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:44 compute-0 sudo[270286]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:44 compute-0 nova_compute[256729]: 2025-11-29 07:50:44.978 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:44 compute-0 nova_compute[256729]: 2025-11-29 07:50:44.978 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
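The paired "Acquiring lock" / "Lock ... acquired" lines are oslo.concurrency's lockutils guarding the instance build under a lock named after the instance UUID. A minimal sketch of the same primitive (requires oslo.concurrency; the function body is a placeholder, not nova's code, and the lock name simply reuses the UUID from the log as an illustration):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("82592acd-eff0-47b3-9bba-391f395f4bab")
    def locked_build():
        # Everything here runs under the per-instance lock, so two
        # concurrent operations on the same instance serialize.
        pass

    locked_build()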
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.009 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:50:45 compute-0 sudo[270330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:50:45 compute-0 sudo[270330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:45 compute-0 podman[270312]: 2025-11-29 07:50:45.023027052 +0000 UTC m=+0.065916698 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 07:50:45 compute-0 sudo[270330]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:45 compute-0 podman[270311]: 2025-11-29 07:50:45.027271148 +0000 UTC m=+0.071505401 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 07:50:45 compute-0 podman[270310]: 2025-11-29 07:50:45.055519067 +0000 UTC m=+0.098292849 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:50:45 compute-0 sudo[270396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:45 compute-0 sudo[270396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:45 compute-0 sudo[270396]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.116 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.116 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.128 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.129 256736 INFO nova.compute.claims [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:50:45 compute-0 sudo[270426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:50:45 compute-0 sudo[270426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.267 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:45 compute-0 podman[270507]: 2025-11-29 07:50:45.58544388 +0000 UTC m=+0.089034838 container create 6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lovelace, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 07:50:45 compute-0 podman[270507]: 2025-11-29 07:50:45.528131829 +0000 UTC m=+0.031722807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:45 compute-0 systemd[1]: Started libpod-conmon-6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc.scope.
Nov 29 07:50:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3541485092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:45 compute-0 podman[270507]: 2025-11-29 07:50:45.710453668 +0000 UTC m=+0.214044666 container init 6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:50:45 compute-0 podman[270507]: 2025-11-29 07:50:45.720000428 +0000 UTC m=+0.223591386 container start 6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lovelace, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:50:45 compute-0 podman[270507]: 2025-11-29 07:50:45.725426676 +0000 UTC m=+0.229017654 container attach 6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lovelace, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:50:45 compute-0 sweet_lovelace[270523]: 167 167
Nov 29 07:50:45 compute-0 systemd[1]: libpod-6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc.scope: Deactivated successfully.
Nov 29 07:50:45 compute-0 podman[270507]: 2025-11-29 07:50:45.732438207 +0000 UTC m=+0.236029205 container died 6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.731 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.740 256736 DEBUG nova.compute.provider_tree [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bebb6691fba04529be6ebf90afb1c0c7ce82cf992784905cffbc71fae7a4eea-merged.mount: Deactivated successfully.
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.764 256736 DEBUG nova.scheduler.client.report [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
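The inventory dict in the line above determines schedulable capacity per resource class via placement's usage rule, used + requested <= (total - reserved) * allocation_ratio, which is how 8 physical vCPUs become 32 schedulable ones here:

    inv = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        # Effective capacity placement will schedule against.
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 52.2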
Nov 29 07:50:45 compute-0 podman[270507]: 2025-11-29 07:50:45.778386779 +0000 UTC m=+0.281977747 container remove 6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:50:45 compute-0 ceph-mon[75050]: pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.4 KiB/s wr, 92 op/s
Nov 29 07:50:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:50:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:50:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:50:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:50:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:50:45 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:50:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3541485092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.792 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:45 compute-0 systemd[1]: libpod-conmon-6272ac9b315772646226d6b71dea7a92645cdeb74bac750a3e2a9b1537515efc.scope: Deactivated successfully.
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.794 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:50:45 compute-0 sshd-session[270153]: Connection closed by authenticating user root 143.14.121.41 port 45878 [preauth]
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.851 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.851 256736 DEBUG nova.network.neutron [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.873 256736 INFO nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.890 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:50:45 compute-0 podman[270549]: 2025-11-29 07:50:45.942983095 +0000 UTC m=+0.043442355 container create 2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:50:45 compute-0 systemd[1]: Started libpod-conmon-2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52.scope.
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.992 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.993 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:50:45 compute-0 nova_compute[256729]: 2025-11-29 07:50:45.994 256736 INFO nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Creating image(s)
Nov 29 07:50:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520321aa6db574c964b3744ca3a0b9492aab10bcaa7d601fd762313c2e539b3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520321aa6db574c964b3744ca3a0b9492aab10bcaa7d601fd762313c2e539b3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520321aa6db574c964b3744ca3a0b9492aab10bcaa7d601fd762313c2e539b3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520321aa6db574c964b3744ca3a0b9492aab10bcaa7d601fd762313c2e539b3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520321aa6db574c964b3744ca3a0b9492aab10bcaa7d601fd762313c2e539b3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:46 compute-0 podman[270549]: 2025-11-29 07:50:45.927241936 +0000 UTC m=+0.027701216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.025 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:46 compute-0 podman[270549]: 2025-11-29 07:50:46.029935435 +0000 UTC m=+0.130394715 container init 2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:50:46 compute-0 podman[270549]: 2025-11-29 07:50:46.038640172 +0000 UTC m=+0.139099432 container start 2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:50:46 compute-0 podman[270549]: 2025-11-29 07:50:46.042361893 +0000 UTC m=+0.142821163 container attach 2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.052 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855194947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855194947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.085 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.091 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.117 256736 DEBUG nova.policy [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6bef1230e3de4a87aa01df74ec671a23', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8117debb786c4549812cc6e7571f6d4d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.156 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.157 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.158 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.158 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.177 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.181 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 82592acd-eff0-47b3-9bba-391f395f4bab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.625 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.9 KiB/s wr, 100 op/s
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.637 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 82592acd-eff0-47b3-9bba-391f395f4bab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:46 compute-0 nova_compute[256729]: 2025-11-29 07:50:46.711 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] resizing rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:50:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 29 07:50:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3855194947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3855194947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 07:50:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 29 07:50:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 29 07:50:47 compute-0 fervent_carson[270566]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:50:47 compute-0 fervent_carson[270566]: --> relative data size: 1.0
Nov 29 07:50:47 compute-0 fervent_carson[270566]: --> All data devices are unavailable
Nov 29 07:50:47 compute-0 systemd[1]: libpod-2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52.scope: Deactivated successfully.
Nov 29 07:50:47 compute-0 podman[270549]: 2025-11-29 07:50:47.186420155 +0000 UTC m=+1.286879455 container died 2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:50:47 compute-0 systemd[1]: libpod-2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52.scope: Consumed 1.042s CPU time.
Nov 29 07:50:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 07:50:47 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 29 07:50:47 compute-0 nova_compute[256729]: 2025-11-29 07:50:47.297 256736 DEBUG nova.objects.instance [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'migration_context' on Instance uuid 82592acd-eff0-47b3-9bba-391f395f4bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:47 compute-0 nova_compute[256729]: 2025-11-29 07:50:47.318 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:50:47 compute-0 nova_compute[256729]: 2025-11-29 07:50:47.319 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Ensure instance console log exists: /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:50:47 compute-0 nova_compute[256729]: 2025-11-29 07:50:47.320 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:47 compute-0 nova_compute[256729]: 2025-11-29 07:50:47.320 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:47 compute-0 nova_compute[256729]: 2025-11-29 07:50:47.321 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:47 compute-0 nova_compute[256729]: 2025-11-29 07:50:47.558 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-520321aa6db574c964b3744ca3a0b9492aab10bcaa7d601fd762313c2e539b3e-merged.mount: Deactivated successfully.
Nov 29 07:50:47 compute-0 ceph-mon[75050]: pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 4.9 KiB/s wr, 100 op/s
Nov 29 07:50:47 compute-0 ceph-mon[75050]: osdmap e175: 3 total, 3 up, 3 in
Nov 29 07:50:47 compute-0 ceph-mon[75050]: osdmap e176: 3 total, 3 up, 3 in
Nov 29 07:50:48 compute-0 nova_compute[256729]: 2025-11-29 07:50:48.043 256736 DEBUG nova.network.neutron [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Successfully created port: 8d79a1b9-de43-4d80-994a-adff27ccf2a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:50:48 compute-0 podman[270549]: 2025-11-29 07:50:48.082221588 +0000 UTC m=+2.182680858 container remove 2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:50:48 compute-0 systemd[1]: libpod-conmon-2f567124e3eaf6b908dd1e1c0a18956842936e58b516ec1768424bec79ca7e52.scope: Deactivated successfully.
Nov 29 07:50:48 compute-0 sudo[270426]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:48 compute-0 sudo[270775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:48 compute-0 sudo[270775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:48 compute-0 sudo[270775]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:48 compute-0 sudo[270800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:50:48 compute-0 sudo[270800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:48 compute-0 sudo[270800]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:48 compute-0 sudo[270825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:48 compute-0 sudo[270825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:48 compute-0 sudo[270825]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:48 compute-0 sshd-session[270626]: Connection closed by authenticating user root 143.14.121.41 port 45890 [preauth]
Nov 29 07:50:48 compute-0 sudo[270850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:50:48 compute-0 sudo[270850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 72 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 2.1 MiB/s wr, 258 op/s
Nov 29 07:50:49 compute-0 podman[270915]: 2025-11-29 07:50:48.688734939 +0000 UTC m=+0.026107602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.269 256736 DEBUG nova.network.neutron [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Successfully updated port: 8d79a1b9-de43-4d80-994a-adff27ccf2a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.286 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.286 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquired lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.287 256736 DEBUG nova.network.neutron [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.383 256736 DEBUG nova.compute.manager [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-changed-8d79a1b9-de43-4d80-994a-adff27ccf2a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.384 256736 DEBUG nova.compute.manager [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Refreshing instance network info cache due to event network-changed-8d79a1b9-de43-4d80-994a-adff27ccf2a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.384 256736 DEBUG oslo_concurrency.lockutils [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:50:49 compute-0 podman[270915]: 2025-11-29 07:50:49.43729437 +0000 UTC m=+0.774666983 container create 1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.464 256736 DEBUG nova.network.neutron [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.861 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.862 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.883 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:49 compute-0 nova_compute[256729]: 2025-11-29 07:50:49.883 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:50:50 compute-0 systemd[1]: Started libpod-conmon-1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13.scope.
Nov 29 07:50:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:50 compute-0 ceph-mon[75050]: pgmap v1328: 305 pgs: 305 active+clean; 72 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 2.1 MiB/s wr, 258 op/s
Nov 29 07:50:50 compute-0 podman[270915]: 2025-11-29 07:50:50.459560831 +0000 UTC m=+1.796933424 container init 1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cray, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:50:50 compute-0 podman[270915]: 2025-11-29 07:50:50.474400556 +0000 UTC m=+1.811773149 container start 1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:50:50 compute-0 zen_cray[270932]: 167 167
Nov 29 07:50:50 compute-0 systemd[1]: libpod-1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13.scope: Deactivated successfully.
Nov 29 07:50:50 compute-0 podman[270915]: 2025-11-29 07:50:50.544674141 +0000 UTC m=+1.882046744 container attach 1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cray, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 07:50:50 compute-0 podman[270915]: 2025-11-29 07:50:50.546817819 +0000 UTC m=+1.884190442 container died 1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.617 256736 DEBUG nova.network.neutron [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Updating instance_info_cache with network_info: [{"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:50:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 72 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 1.5 MiB/s wr, 194 op/s
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.642 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Releasing lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.643 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Instance network_info: |[{"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.643 256736 DEBUG oslo_concurrency.lockutils [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.643 256736 DEBUG nova.network.neutron [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Refreshing network info cache for port 8d79a1b9-de43-4d80-994a-adff27ccf2a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.646 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Start _get_guest_xml network_info=[{"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.654 256736 WARNING nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.666 256736 DEBUG nova.virt.libvirt.host [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.667 256736 DEBUG nova.virt.libvirt.host [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.671 256736 DEBUG nova.virt.libvirt.host [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.672 256736 DEBUG nova.virt.libvirt.host [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.672 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.673 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.674 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.674 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.674 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.675 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.675 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.676 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.676 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.676 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.677 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.677 256736 DEBUG nova.virt.hardware [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:50:50 compute-0 nova_compute[256729]: 2025-11-29 07:50:50.682 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea5251f7d0562bc03d8cade0a0f947b1cf3dade86d1bdbf1f7746ad9cf067c8a-merged.mount: Deactivated successfully.
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757725188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.182 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.215 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.219 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:51 compute-0 podman[270915]: 2025-11-29 07:50:51.276352683 +0000 UTC m=+2.613725266 container remove 1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:50:51 compute-0 systemd[1]: libpod-conmon-1942387dded84b779ba2b0a8e46233db75c686926ba1788c60bf9fbb87137e13.scope: Deactivated successfully.
Nov 29 07:50:51 compute-0 sshd-session[270898]: Connection closed by authenticating user root 143.14.121.41 port 45892 [preauth]
Nov 29 07:50:51 compute-0 ceph-mon[75050]: pgmap v1329: 305 pgs: 305 active+clean; 72 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 1.5 MiB/s wr, 194 op/s
Nov 29 07:50:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3757725188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2345285295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2345285295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:51 compute-0 podman[271017]: 2025-11-29 07:50:51.492280658 +0000 UTC m=+0.061753384 container create 96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:50:51 compute-0 systemd[1]: Started libpod-conmon-96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a.scope.
Nov 29 07:50:51 compute-0 podman[271017]: 2025-11-29 07:50:51.453980264 +0000 UTC m=+0.023452910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc011fa1462e12274352bda578e7ce01950b58b3645c284859e37238b3208968/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc011fa1462e12274352bda578e7ce01950b58b3645c284859e37238b3208968/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc011fa1462e12274352bda578e7ce01950b58b3645c284859e37238b3208968/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc011fa1462e12274352bda578e7ce01950b58b3645c284859e37238b3208968/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:51 compute-0 podman[271017]: 2025-11-29 07:50:51.589295942 +0000 UTC m=+0.158768588 container init 96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 07:50:51 compute-0 podman[271017]: 2025-11-29 07:50:51.598725539 +0000 UTC m=+0.168198165 container start 96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:50:51 compute-0 podman[271017]: 2025-11-29 07:50:51.614082477 +0000 UTC m=+0.183555103 container attach 96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.626 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:50:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408625864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.755 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.757 256736 DEBUG nova.virt.libvirt.vif [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:50:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-320687856',display_name='tempest-VolumesBackupsTest-instance-320687856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-320687856',id=3,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRag5qkeBRJ62eeHZa7MgR0OvNArfyVXRybex9P+uyBJZ3MeRwnc6HFbAkK1pZNSk2IxUsZ4AocbOGxJy1pZKtNwpHQBf2BhZaRNGwk2OxDmpm0U3hlzSnLIoG3HAZLWQ==',key_name='tempest-keypair-2073514007',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8117debb786c4549812cc6e7571f6d4d',ramdisk_id='',reservation_id='r-42ibu52o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-12225578',owner_user_name='tempest-VolumesBackupsTest-12225578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:50:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6bef1230e3de4a87aa01df74ec671a23',uuid=82592acd-eff0-47b3-9bba-391f395f4bab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.758 256736 DEBUG nova.network.os_vif_util [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converting VIF {"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.759 256736 DEBUG nova.network.os_vif_util [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:a8:a9,bridge_name='br-int',has_traffic_filtering=True,id=8d79a1b9-de43-4d80-994a-adff27ccf2a7,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d79a1b9-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.761 256736 DEBUG nova.objects.instance [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'pci_devices' on Instance uuid 82592acd-eff0-47b3-9bba-391f395f4bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.783 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <uuid>82592acd-eff0-47b3-9bba-391f395f4bab</uuid>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <name>instance-00000003</name>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesBackupsTest-instance-320687856</nova:name>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:50:50</nova:creationTime>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:user uuid="6bef1230e3de4a87aa01df74ec671a23">tempest-VolumesBackupsTest-12225578-project-member</nova:user>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:project uuid="8117debb786c4549812cc6e7571f6d4d">tempest-VolumesBackupsTest-12225578</nova:project>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <nova:port uuid="8d79a1b9-de43-4d80-994a-adff27ccf2a7">
Nov 29 07:50:51 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <system>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <entry name="serial">82592acd-eff0-47b3-9bba-391f395f4bab</entry>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <entry name="uuid">82592acd-eff0-47b3-9bba-391f395f4bab</entry>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </system>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <os>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   </os>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <features>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   </features>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/82592acd-eff0-47b3-9bba-391f395f4bab_disk">
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       </source>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/82592acd-eff0-47b3-9bba-391f395f4bab_disk.config">
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       </source>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:50:51 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:55:a8:a9"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <target dev="tap8d79a1b9-de"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/console.log" append="off"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <video>
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </video>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:50:51 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:50:51 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:50:51 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:50:51 compute-0 nova_compute[256729]: </domain>
Nov 29 07:50:51 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.785 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Preparing to wait for external event network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.786 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.786 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.787 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.788 256736 DEBUG nova.virt.libvirt.vif [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:50:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-320687856',display_name='tempest-VolumesBackupsTest-instance-320687856',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-320687856',id=3,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRag5qkeBRJ62eeHZa7MgR0OvNArfyVXRybex9P+uyBJZ3MeRwnc6HFbAkK1pZNSk2IxUsZ4AocbOGxJy1pZKtNwpHQBf2BhZaRNGwk2OxDmpm0U3hlzSnLIoG3HAZLWQ==',key_name='tempest-keypair-2073514007',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8117debb786c4549812cc6e7571f6d4d',ramdisk_id='',reservation_id='r-42ibu52o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-12225578',owner_user_name='tempest-VolumesBackupsTest-12225578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:50:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6bef1230e3de4a87aa01df74ec671a23',uuid=82592acd-eff0-47b3-9bba-391f395f4bab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.788 256736 DEBUG nova.network.os_vif_util [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converting VIF {"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.789 256736 DEBUG nova.network.os_vif_util [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:a8:a9,bridge_name='br-int',has_traffic_filtering=True,id=8d79a1b9-de43-4d80-994a-adff27ccf2a7,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d79a1b9-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.789 256736 DEBUG os_vif [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:a8:a9,bridge_name='br-int',has_traffic_filtering=True,id=8d79a1b9-de43-4d80-994a-adff27ccf2a7,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d79a1b9-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.790 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.791 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.791 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.800 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.800 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d79a1b9-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.801 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d79a1b9-de, col_values=(('external_ids', {'iface-id': '8d79a1b9-de43-4d80-994a-adff27ccf2a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:a8:a9', 'vm-uuid': '82592acd-eff0-47b3-9bba-391f395f4bab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.802 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:51 compute-0 NetworkManager[48962]: <info>  [1764402651.8039] manager: (tap8d79a1b9-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.805 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.810 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.811 256736 INFO os_vif [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:a8:a9,bridge_name='br-int',has_traffic_filtering=True,id=8d79a1b9-de43-4d80-994a-adff27ccf2a7,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d79a1b9-de')
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.890 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.891 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.891 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No VIF found with MAC fa:16:3e:55:a8:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.892 256736 INFO nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Using config drive
Nov 29 07:50:51 compute-0 nova_compute[256729]: 2025-11-29 07:50:51.920 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:52 compute-0 nova_compute[256729]: 2025-11-29 07:50:52.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:52 compute-0 nova_compute[256729]: 2025-11-29 07:50:52.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:50:52 compute-0 nova_compute[256729]: 2025-11-29 07:50:52.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:50:52 compute-0 nova_compute[256729]: 2025-11-29 07:50:52.181 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 07:50:52 compute-0 nova_compute[256729]: 2025-11-29 07:50:52.182 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:50:52 compute-0 nova_compute[256729]: 2025-11-29 07:50:52.183 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]: {
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:     "0": [
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:         {
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "devices": [
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "/dev/loop3"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             ],
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_name": "ceph_lv0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_size": "21470642176",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "name": "ceph_lv0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "tags": {
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cluster_name": "ceph",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.crush_device_class": "",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.encrypted": "0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osd_id": "0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.type": "block",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.vdo": "0"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             },
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "type": "block",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "vg_name": "ceph_vg0"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:         }
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:     ],
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:     "1": [
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:         {
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "devices": [
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "/dev/loop4"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             ],
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_name": "ceph_lv1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_size": "21470642176",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "name": "ceph_lv1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "tags": {
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cluster_name": "ceph",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.crush_device_class": "",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.encrypted": "0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osd_id": "1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.type": "block",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.vdo": "0"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             },
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "type": "block",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "vg_name": "ceph_vg1"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:         }
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:     ],
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:     "2": [
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:         {
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "devices": [
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "/dev/loop5"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             ],
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_name": "ceph_lv2",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_size": "21470642176",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "name": "ceph_lv2",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "tags": {
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.cluster_name": "ceph",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.crush_device_class": "",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.encrypted": "0",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osd_id": "2",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.type": "block",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:                 "ceph.vdo": "0"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             },
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "type": "block",
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:             "vg_name": "ceph_vg2"
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:         }
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]:     ]
Nov 29 07:50:52 compute-0 quirky_elgamal[271034]: }
Nov 29 07:50:52 compute-0 systemd[1]: libpod-96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a.scope: Deactivated successfully.
Nov 29 07:50:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2345285295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2345285295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1408625864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:50:52 compute-0 podman[271067]: 2025-11-29 07:50:52.465892533 +0000 UTC m=+0.029372941 container died 96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc011fa1462e12274352bda578e7ce01950b58b3645c284859e37238b3208968-merged.mount: Deactivated successfully.
Nov 29 07:50:52 compute-0 podman[271067]: 2025-11-29 07:50:52.524000617 +0000 UTC m=+0.087480975 container remove 96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:50:52 compute-0 systemd[1]: libpod-conmon-96b458fa906ee7de0d3f1d1998602ddee8bf068d561b1e890893920eea1fba5a.scope: Deactivated successfully.
Nov 29 07:50:52 compute-0 sudo[270850]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:52 compute-0 sudo[271082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:52 compute-0 sudo[271082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:52 compute-0 sudo[271082]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 2.7 MiB/s wr, 176 op/s
Nov 29 07:50:52 compute-0 sudo[271107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:50:52 compute-0 sudo[271107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:52 compute-0 sudo[271107]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:52 compute-0 sudo[271132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:52 compute-0 sudo[271132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:52 compute-0 sudo[271132]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:52 compute-0 sudo[271157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:50:52 compute-0 sudo[271157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:53 compute-0 podman[271222]: 2025-11-29 07:50:53.173978342 +0000 UTC m=+0.047456325 container create 0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.180 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.180 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.181 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.181 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.181 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:53 compute-0 systemd[1]: Started libpod-conmon-0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003.scope.
Nov 29 07:50:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:53 compute-0 podman[271222]: 2025-11-29 07:50:53.152697381 +0000 UTC m=+0.026175414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:53 compute-0 podman[271222]: 2025-11-29 07:50:53.282231323 +0000 UTC m=+0.155709376 container init 0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_northcutt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:50:53 compute-0 podman[271222]: 2025-11-29 07:50:53.292651776 +0000 UTC m=+0.166129759 container start 0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:50:53 compute-0 podman[271222]: 2025-11-29 07:50:53.296680926 +0000 UTC m=+0.170159229 container attach 0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:50:53 compute-0 focused_northcutt[271239]: 167 167
Nov 29 07:50:53 compute-0 systemd[1]: libpod-0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003.scope: Deactivated successfully.
Nov 29 07:50:53 compute-0 podman[271222]: 2025-11-29 07:50:53.302220697 +0000 UTC m=+0.175698730 container died 0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.382 256736 INFO nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Creating config drive at /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/disk.config
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.388 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zpcir_b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:53 compute-0 ceph-mon[75050]: pgmap v1330: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 2.7 MiB/s wr, 176 op/s
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.521 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1zpcir_b" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-89d4b8875aa2e173380bd00fdda11c6e39bbb8ba2eaf5ae8f5e39cd64c1a6ffc-merged.mount: Deactivated successfully.
Nov 29 07:50:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309268143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.650 256736 DEBUG nova.storage.rbd_utils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 82592acd-eff0-47b3-9bba-391f395f4bab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.655 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/disk.config 82592acd-eff0-47b3-9bba-391f395f4bab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.677 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
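The `ceph df --format=json` subprocess above is how the resource tracker samples cluster capacity for the RBD-backed pools. A minimal sketch of the same probe, assuming the standard top-level "stats" object that `ceph df` emits in JSON mode (field names are upstream Ceph conventions, not taken from this log):

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command the log shows nova running as a subprocess.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        return json.loads(out)

    stats = ceph_df()["stats"]
    # total_avail_bytes is cluster-wide free space; the pgmap line above
    # reports the same figure as "60 GiB / 60 GiB avail".
    print("free GiB:", stats["total_avail_bytes"] / 1024 ** 3)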
Nov 29 07:50:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:50:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3228636635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:50:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3228636635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:53 compute-0 podman[271222]: 2025-11-29 07:50:53.85308203 +0000 UTC m=+0.726560053 container remove 0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.908 256736 DEBUG nova.network.neutron [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Updated VIF entry in instance network info cache for port 8d79a1b9-de43-4d80-994a-adff27ccf2a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.909 256736 DEBUG nova.network.neutron [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Updating instance_info_cache with network_info: [{"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
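The instance_info_cache entry above is plain JSON once extracted from the log line; a small helper, assuming exactly that structure (one dict per VIF, each with network/subnets/ips), pulls the fixed addresses out of it:

    def fixed_ips(network_info):
        # network_info: the parsed list logged above (one dict per VIF).
        ips = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        ips.append(ip["address"])
        return ips

    # For the cache entry above this yields ["10.100.0.12"].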
Nov 29 07:50:53 compute-0 systemd[1]: libpod-conmon-0534efebfcf8c16cb9b2e9c2dbbf6123c8dad5b0c9dd856edb18fd15c12ef003.scope: Deactivated successfully.
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.941 256736 DEBUG oslo_concurrency.lockutils [req-65586117-4dfd-4a39-b660-00c4088a0c6b req-3bf4d75f-7dab-41a4-af9e-7505e390d8be ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.985 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:50:53 compute-0 nova_compute[256729]: 2025-11-29 07:50:53.985 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:50:54 compute-0 sshd-session[271037]: Connection closed by authenticating user root 143.14.121.41 port 45896 [preauth]
Nov 29 07:50:54 compute-0 podman[271324]: 2025-11-29 07:50:54.131490249 +0000 UTC m=+0.113275489 container create 12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 07:50:54 compute-0 podman[271324]: 2025-11-29 07:50:54.051996412 +0000 UTC m=+0.033781672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.149 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.150 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4654MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.150 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.150 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:54 compute-0 systemd[1]: Started libpod-conmon-12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371.scope.
Nov 29 07:50:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.217 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 82592acd-eff0-47b3-9bba-391f395f4bab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.217 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.217 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a380caa5049b1df68250f3061e15bf55c0d738efcc6cbcb614e6bdc836e9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a380caa5049b1df68250f3061e15bf55c0d738efcc6cbcb614e6bdc836e9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a380caa5049b1df68250f3061e15bf55c0d738efcc6cbcb614e6bdc836e9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a380caa5049b1df68250f3061e15bf55c0d738efcc6cbcb614e6bdc836e9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.271 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:50:54 compute-0 podman[271324]: 2025-11-29 07:50:54.324794827 +0000 UTC m=+0.306580147 container init 12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:50:54 compute-0 podman[271324]: 2025-11-29 07:50:54.338442369 +0000 UTC m=+0.320227649 container start 12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.466 256736 DEBUG oslo_concurrency.processutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/disk.config 82592acd-eff0-47b3-9bba-391f395f4bab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.811s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.467 256736 INFO nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Deleting local config drive /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab/disk.config because it was imported into RBD.
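The entries between 07:50:53.382 and 07:50:54.467 trace the whole config-drive path: build the ISO with mkisofs, `rbd import` it into the vms pool, then delete the local copy. A runnable sketch of that sequence under the same paths, using a subset of the logged mkisofs flags and a hypothetical staging directory in place of the logged tmp dir (error handling omitted):

    import os
    import subprocess

    base = "/var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab"
    iso = os.path.join(base, "disk.config")

    # 1. Build the config-drive ISO (volume label config-2, Joliet + Rock Ridge).
    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/metadata_dir"])  # hypothetical stand-in for the logged tmp dir
    # 2. Import it into RBD, exactly as logged.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", iso,
         "82592acd-eff0-47b3-9bba-391f395f4bab_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    # 3. Drop the local copy, matching "Deleting local config drive ...".
    os.unlink(iso)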
Nov 29 07:50:54 compute-0 podman[271324]: 2025-11-29 07:50:54.482530156 +0000 UTC m=+0.464315396 container attach 12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_johnson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:50:54 compute-0 kernel: tap8d79a1b9-de: entered promiscuous mode
Nov 29 07:50:54 compute-0 NetworkManager[48962]: <info>  [1764402654.5232] manager: (tap8d79a1b9-de): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.559 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.561 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 ovn_controller[153383]: 2025-11-29T07:50:54Z|00053|binding|INFO|Claiming lport 8d79a1b9-de43-4d80-994a-adff27ccf2a7 for this chassis.
Nov 29 07:50:54 compute-0 ovn_controller[153383]: 2025-11-29T07:50:54Z|00054|binding|INFO|8d79a1b9-de43-4d80-994a-adff27ccf2a7: Claiming fa:16:3e:55:a8:a9 10.100.0.12
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.567 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.581 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:a8:a9 10.100.0.12'], port_security=['fa:16:3e:55:a8:a9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '82592acd-eff0-47b3-9bba-391f395f4bab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8117debb786c4549812cc6e7571f6d4d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '34247bb0-2cba-49f1-a2f6-90778dcaa45a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b45dfb6d-5934-4acb-b62b-b7104c4a665d, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=8d79a1b9-de43-4d80-994a-adff27ccf2a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.583 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 8d79a1b9-de43-4d80-994a-adff27ccf2a7 in datapath a24c1904-53b2-4346-8806-9a1bad79dd5c bound to our chassis
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.584 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a24c1904-53b2-4346-8806-9a1bad79dd5c
Nov 29 07:50:54 compute-0 systemd-udevd[271379]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.597 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3e951077-61cd-4f24-ad79-4d25901c7c2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.598 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa24c1904-51 in ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.601 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa24c1904-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:50:54 compute-0 systemd-machined[217781]: New machine qemu-3-instance-00000003.
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.602 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[146705e6-7537-4dc6-805a-7812ab262935]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.605 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[06e5bc99-e69e-4347-8db7-6af05a80610a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 NetworkManager[48962]: <info>  [1764402654.6068] device (tap8d79a1b9-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:50:54 compute-0 NetworkManager[48962]: <info>  [1764402654.6074] device (tap8d79a1b9-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.625 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[5329fd23-357c-4677-b2d2-2967b12fa797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 2.7 MiB/s wr, 169 op/s
Nov 29 07:50:54 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 29 07:50:54 compute-0 ovn_controller[153383]: 2025-11-29T07:50:54Z|00055|binding|INFO|Setting lport 8d79a1b9-de43-4d80-994a-adff27ccf2a7 ovn-installed in OVS
Nov 29 07:50:54 compute-0 ovn_controller[153383]: 2025-11-29T07:50:54Z|00056|binding|INFO|Setting lport 8d79a1b9-de43-4d80-994a-adff27ccf2a7 up in Southbound
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.649 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.648 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[14f99744-6621-4675-816f-973956129a08]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.696 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[347a35af-cb3a-4370-b750-61b9a7258d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 NetworkManager[48962]: <info>  [1764402654.7022] manager: (tapa24c1904-50): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.702 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[37a20e83-b1e3-40c8-9559-00cda294c0ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.736 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[77f6a9f1-59cd-4137-bea8-befb766ccb8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.738 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[a37359a7-7e60-4a9a-9bb9-42c73622a418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 NetworkManager[48962]: <info>  [1764402654.7590] device (tapa24c1904-50): carrier: link connected
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.763 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[a08c7349-6e44-44d4-9ae1-c36ea4f30958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.782 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab6b58d-92d3-4374-b2ee-bbf00004e927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa24c1904-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:15:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496982, 'reachable_time': 23127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271413, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:50:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165341514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2309268143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3228636635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:50:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3228636635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.797 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd1666d-5c42-4b07-bdf6-77700e2eaadc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:158a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496982, 'tstamp': 496982}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271414, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.810 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.815 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.820 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[47f6c01b-e269-4f6e-bf7c-106e03e42f68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa24c1904-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:15:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496982, 'reachable_time': 23127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271417, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.834 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
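A quick arithmetic check on that inventory: placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, which is upstream placement behaviour rather than anything shown in this log. A sketch over the logged values:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, int(cap))  # MEMORY_MB 7168, VCPU 32, DISK_GB 52

This is consistent with the earlier "Total usable vcpus: 8, total allocated vcpus: 1" audit line: 8 physical vCPUs overcommitted 4x yields 32 schedulable.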
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.852 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e21394-0a9e-483b-8e2e-ed4887e85957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.884 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.885 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.911 256736 DEBUG nova.compute.manager [req-6cb8192f-8e54-4ee0-8222-a4972c0685ae req-71921548-fb0b-4645-964a-ce512500c973 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.911 256736 DEBUG oslo_concurrency.lockutils [req-6cb8192f-8e54-4ee0-8222-a4972c0685ae req-71921548-fb0b-4645-964a-ce512500c973 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.911 256736 DEBUG oslo_concurrency.lockutils [req-6cb8192f-8e54-4ee0-8222-a4972c0685ae req-71921548-fb0b-4645-964a-ce512500c973 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.911 256736 DEBUG oslo_concurrency.lockutils [req-6cb8192f-8e54-4ee0-8222-a4972c0685ae req-71921548-fb0b-4645-964a-ce512500c973 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.911 256736 DEBUG nova.compute.manager [req-6cb8192f-8e54-4ee0-8222-a4972c0685ae req-71921548-fb0b-4645-964a-ce512500c973 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Processing event network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.925 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[398f0c01-5d0c-40ae-ac7e-331fb2425a84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.926 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa24c1904-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.927 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.927 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa24c1904-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.929 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 NetworkManager[48962]: <info>  [1764402654.9302] manager: (tapa24c1904-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 07:50:54 compute-0 kernel: tapa24c1904-50: entered promiscuous mode
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.933 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.935 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 ovn_controller[153383]: 2025-11-29T07:50:54Z|00057|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.934 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa24c1904-50, col_values=(('external_ids', {'iface-id': '11f9d079-79cd-4588-8ec9-e7d71108206b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:50:54 compute-0 nova_compute[256729]: 2025-11-29 07:50:54.966 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.967 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a24c1904-53b2-4346-8806-9a1bad79dd5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a24c1904-53b2-4346-8806-9a1bad79dd5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.969 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[25783c74-977a-430a-943d-f272742db889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.970 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-a24c1904-53b2-4346-8806-9a1bad79dd5c
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/a24c1904-53b2-4346-8806-9a1bad79dd5c.pid.haproxy
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID a24c1904-53b2-4346-8806-9a1bad79dd5c
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:50:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:54.971 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'env', 'PROCESS_TAG=haproxy-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a24c1904-53b2-4346-8806-9a1bad79dd5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
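With the config above written and haproxy spawned inside the ovnmeta- namespace, the proxy answers on the metadata address only from within that namespace. A hedged probe, where the /openstack path is the standard nova metadata API root (an assumption, not shown in the log) and the namespace name is taken from the spawn command above:

    import subprocess

    ns = "ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c"
    # curl must run inside the namespace; 169.254.169.254:80 is the bind
    # address from the generated haproxy config above.
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "curl", "-s", "http://169.254.169.254/openstack"],
        check=False)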
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.242 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402655.2413993, 82592acd-eff0-47b3-9bba-391f395f4bab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.242 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] VM Started (Lifecycle Event)
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.245 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.249 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.253 256736 INFO nova.virt.libvirt.driver [-] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Instance spawned successfully.
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.254 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]: {
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "osd_id": 2,
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "type": "bluestore"
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:     },
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "osd_id": 1,
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "type": "bluestore"
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:     },
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "osd_id": 0,
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:         "type": "bluestore"
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]:     }
Nov 29 07:50:55 compute-0 nostalgic_johnson[271340]: }
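
The JSON block above comes from a short-lived ceph container (cephadm inventorying the local OSDs) and maps each OSD's UUID to its backing LVM device. A stdlib-only sketch of how such output can be summarized, with one entry inlined as sample data:

    import json

    # Sample shaped like the inventory logged above (one of the three OSDs).
    raw = """
    {
        "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
            "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
            "type": "bluestore"
        }
    }
    """

    osds = json.loads(raw)
    for uuid, meta in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']}), "
              f"cluster {meta['ceph_fsid']}")
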
Nov 29 07:50:55 compute-0 systemd[1]: libpod-12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371.scope: Deactivated successfully.
Nov 29 07:50:55 compute-0 systemd[1]: libpod-12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371.scope: Consumed 1.057s CPU time.
Nov 29 07:50:55 compute-0 podman[271324]: 2025-11-29 07:50:55.41792042 +0000 UTC m=+1.399705690 container died 12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 07:50:55 compute-0 podman[271511]: 2025-11-29 07:50:55.366986392 +0000 UTC m=+0.025302771 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:50:55 compute-0 podman[271511]: 2025-11-29 07:50:55.636486077 +0000 UTC m=+0.294802426 container create 498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.746 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.752 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:55 compute-0 systemd[1]: Started libpod-conmon-498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9.scope.
Nov 29 07:50:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a122d52a4869b4c023d67aae666bc9f89a7d1d85c31cb273ecf621a6a31c99/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b49a380caa5049b1df68250f3061e15bf55c0d738efcc6cbcb614e6bdc836e9d-merged.mount: Deactivated successfully.
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.869 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.879 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402655.2415528, 82592acd-eff0-47b3-9bba-391f395f4bab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.880 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] VM Paused (Lifecycle Event)
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.885 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.887 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.891 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.893 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.894 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.895 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:55 compute-0 nova_compute[256729]: 2025-11-29 07:50:55.895 256736 DEBUG nova.virt.libvirt.driver [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:50:55 compute-0 ceph-mon[75050]: pgmap v1331: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 2.7 MiB/s wr, 169 op/s
Nov 29 07:50:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1165341514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:50:56 compute-0 podman[271324]: 2025-11-29 07:50:56.101940393 +0000 UTC m=+2.083725673 container remove 12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 07:50:56 compute-0 systemd[1]: libpod-conmon-12e4ba62a3a1c3f4df6dd89c976eedc97ffbdbe47367523f92de7545a09b3371.scope: Deactivated successfully.
Nov 29 07:50:56 compute-0 sudo[271157]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:50:56 compute-0 podman[271511]: 2025-11-29 07:50:56.268638336 +0000 UTC m=+0.926954765 container init 498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:50:56 compute-0 podman[271511]: 2025-11-29 07:50:56.279759719 +0000 UTC m=+0.938076078 container start 498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 07:50:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:50:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:50:56 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[271544]: [NOTICE]   (271551) : New worker (271553) forked
Nov 29 07:50:56 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[271544]: [NOTICE]   (271551) : Loading success.
Nov 29 07:50:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:50:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e619b2f6-357f-488c-af0a-3b2b8f969902 does not exist
Nov 29 07:50:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e46ac366-5b83-4dce-ab39-97763394ecff does not exist
Nov 29 07:50:56 compute-0 sudo[271562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:50:56 compute-0 sudo[271562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:56 compute-0 sudo[271562]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.492 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.498 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402655.2483084, 82592acd-eff0-47b3-9bba-391f395f4bab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.498 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] VM Resumed (Lifecycle Event)
Nov 29 07:50:56 compute-0 sudo[271587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:50:56 compute-0 sudo[271587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:50:56 compute-0 sudo[271587]: pam_unix(sudo:session): session closed for user root
Nov 29 07:50:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 1.9 MiB/s wr, 108 op/s
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.642 256736 INFO nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Took 10.65 seconds to spawn the instance on the hypervisor.
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.642 256736 DEBUG nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.645 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.662 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.670 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.740 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] During sync_power_state the instance has a pending task (spawning). Skip.
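
The sync_power_state passes above (07:50:55 and 07:50:56) apply a simple reconciliation rule: if the instance still has a pending task, do nothing; otherwise bring the DB power state in line with what the hypervisor reports. A schematic sketch of that decision, not nova's actual _sync_instance_power_state:

    # Power-state codes as they appear in the log lines above:
    # 0 = NOSTATE (DB, still building), 1 = RUNNING (hypervisor).
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:
            return f"skip: pending task ({task_state})"
        if db_power_state != vm_power_state:
            return f"update DB power_state {db_power_state} -> {vm_power_state}"
        return "in sync"

    # The logged case: task_state 'spawning', DB 0, VM 1 -> skipped.
    print(sync_power_state("spawning", NOSTATE, RUNNING))
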
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.762 256736 INFO nova.compute.manager [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Took 11.70 seconds to build instance.
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.784 256736 DEBUG oslo_concurrency.lockutils [None req-6d99591b-26ec-4bfe-a670-f7b7a76a498e 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:56 compute-0 nova_compute[256729]: 2025-11-29 07:50:56.803 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:50:57 compute-0 nova_compute[256729]: 2025-11-29 07:50:57.042 256736 DEBUG nova.compute.manager [req-ee5ac010-c15f-43f1-a06c-6bb898d1908e req-47cb8c99-348e-49fb-8186-f912cca8ce66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:50:57 compute-0 nova_compute[256729]: 2025-11-29 07:50:57.043 256736 DEBUG oslo_concurrency.lockutils [req-ee5ac010-c15f-43f1-a06c-6bb898d1908e req-47cb8c99-348e-49fb-8186-f912cca8ce66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:57 compute-0 nova_compute[256729]: 2025-11-29 07:50:57.044 256736 DEBUG oslo_concurrency.lockutils [req-ee5ac010-c15f-43f1-a06c-6bb898d1908e req-47cb8c99-348e-49fb-8186-f912cca8ce66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:57 compute-0 nova_compute[256729]: 2025-11-29 07:50:57.044 256736 DEBUG oslo_concurrency.lockutils [req-ee5ac010-c15f-43f1-a06c-6bb898d1908e req-47cb8c99-348e-49fb-8186-f912cca8ce66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:50:57 compute-0 nova_compute[256729]: 2025-11-29 07:50:57.045 256736 DEBUG nova.compute.manager [req-ee5ac010-c15f-43f1-a06c-6bb898d1908e req-47cb8c99-348e-49fb-8186-f912cca8ce66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] No waiting events found dispatching network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:50:57 compute-0 nova_compute[256729]: 2025-11-29 07:50:57.045 256736 WARNING nova.compute.manager [req-ee5ac010-c15f-43f1-a06c-6bb898d1908e req-47cb8c99-348e-49fb-8186-f912cca8ce66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received unexpected event network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 for instance with vm_state active and task_state None.
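
The five lines above show the external-event path: take the per-instance "-events" lock, pop any waiter registered for network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7, and warn when nobody is waiting (the earlier waiter completed at 07:50:55, so this second event finds the registry empty). A minimal pop-or-warn sketch of that pattern, using plain threading rather than nova's internals:

    import threading

    class InstanceEvents:
        """Schematic pop-or-warn event registry; not nova's implementation."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            ev = threading.Event()
            with self._lock:
                self._waiters[(instance_uuid, event_name)] = ev
            return ev

        def pop_and_signal(self, instance_uuid, event_name):
            with self._lock:  # the "-events" lock acquire/release seen above
                ev = self._waiters.pop((instance_uuid, event_name), None)
            if ev is None:
                print(f"WARNING: unexpected event {event_name}")  # no waiter
            else:
                ev.set()

    reg = InstanceEvents()
    reg.pop_and_signal("82592acd-eff0-47b3-9bba-391f395f4bab",
                       "network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7")
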
Nov 29 07:50:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:50:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 29 07:50:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 07:50:57 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 29 07:50:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:50:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:50:57 compute-0 ceph-mon[75050]: pgmap v1332: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 1.9 MiB/s wr, 108 op/s
Nov 29 07:50:57 compute-0 ceph-mon[75050]: osdmap e177: 3 total, 3 up, 3 in
Nov 29 07:50:57 compute-0 sshd-session[271346]: Connection closed by authenticating user root 143.14.121.41 port 36748 [preauth]
Nov 29 07:50:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 934 KiB/s wr, 94 op/s
Nov 29 07:50:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:59.770 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:50:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:59.771 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:50:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:50:59.771 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:00 compute-0 ceph-mon[75050]: pgmap v1334: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 934 KiB/s wr, 94 op/s
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.197 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:00 compute-0 NetworkManager[48962]: <info>  [1764402660.1989] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 07:51:00 compute-0 NetworkManager[48962]: <info>  [1764402660.2005] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 07:51:00 compute-0 ovn_controller[153383]: 2025-11-29T07:51:00Z|00058|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.277 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:00 compute-0 ovn_controller[153383]: 2025-11-29T07:51:00Z|00059|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.282 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 934 KiB/s wr, 94 op/s
Nov 29 07:51:00 compute-0 sshd-session[271612]: Connection closed by authenticating user root 143.14.121.41 port 36750 [preauth]
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.827 256736 DEBUG nova.compute.manager [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-changed-8d79a1b9-de43-4d80-994a-adff27ccf2a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.828 256736 DEBUG nova.compute.manager [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Refreshing instance network info cache due to event network-changed-8d79a1b9-de43-4d80-994a-adff27ccf2a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.828 256736 DEBUG oslo_concurrency.lockutils [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.829 256736 DEBUG oslo_concurrency.lockutils [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:00 compute-0 nova_compute[256729]: 2025-11-29 07:51:00.829 256736 DEBUG nova.network.neutron [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Refreshing network info cache for port 8d79a1b9-de43-4d80-994a-adff27ccf2a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:51:01 compute-0 ceph-mon[75050]: pgmap v1335: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 934 KiB/s wr, 94 op/s
Nov 29 07:51:01 compute-0 nova_compute[256729]: 2025-11-29 07:51:01.664 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:01 compute-0 nova_compute[256729]: 2025-11-29 07:51:01.805 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:02 compute-0 nova_compute[256729]: 2025-11-29 07:51:02.426 256736 DEBUG nova.network.neutron [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Updated VIF entry in instance network info cache for port 8d79a1b9-de43-4d80-994a-adff27ccf2a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:51:02 compute-0 nova_compute[256729]: 2025-11-29 07:51:02.427 256736 DEBUG nova.network.neutron [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Updating instance_info_cache with network_info: [{"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
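
The instance_info_cache update above nests addressing several levels deep (VIF -> network -> subnets -> ips -> floating_ips). A stdlib sketch that walks that exact structure, trimmed to the fields it reads:

    # Shaped like the network_info list logged above (one VIF, fields trimmed).
    network_info = [{
        "id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7",
        "address": "fa:16:3e:55:a8:a9",
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.12",
            "floating_ips": [{"address": "192.168.122.190"}],
        }]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], vif["address"], ip["address"], floats)
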
Nov 29 07:51:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 111 op/s
Nov 29 07:51:02 compute-0 nova_compute[256729]: 2025-11-29 07:51:02.685 256736 DEBUG oslo_concurrency.lockutils [req-a315a248-0a72-4e34-b2cb-7d86a3f393c3 req-929db212-6e99-424a-b5f5-7109c59e10cf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-82592acd-eff0-47b3-9bba-391f395f4bab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:04 compute-0 ceph-mon[75050]: pgmap v1336: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 111 op/s
Nov 29 07:51:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 102 op/s
Nov 29 07:51:04 compute-0 sshd-session[271615]: Connection closed by authenticating user root 143.14.121.41 port 36760 [preauth]
Nov 29 07:51:05 compute-0 ceph-mon[75050]: pgmap v1337: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 102 op/s
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:51:05
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control']
Nov 29 07:51:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:51:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 29 07:51:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 07:51:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 73 op/s
Nov 29 07:51:06 compute-0 nova_compute[256729]: 2025-11-29 07:51:06.666 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:06 compute-0 nova_compute[256729]: 2025-11-29 07:51:06.807 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:51:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:51:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:07 compute-0 ceph-mon[75050]: osdmap e178: 3 total, 3 up, 3 in
Nov 29 07:51:07 compute-0 ceph-mon[75050]: pgmap v1339: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 73 op/s
Nov 29 07:51:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39015645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39015645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:08 compute-0 sshd-session[271617]: Connection closed by authenticating user root 143.14.121.41 port 43606 [preauth]
Nov 29 07:51:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/39015645' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/39015645' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/864564327' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/864564327' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 600 KiB/s wr, 80 op/s
Nov 29 07:51:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 29 07:51:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/864564327' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/864564327' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:09 compute-0 ceph-mon[75050]: pgmap v1340: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 600 KiB/s wr, 80 op/s
Nov 29 07:51:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 07:51:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 29 07:51:10 compute-0 ovn_controller[153383]: 2025-11-29T07:51:10Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:a8:a9 10.100.0.12
Nov 29 07:51:10 compute-0 ovn_controller[153383]: 2025-11-29T07:51:10Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:a8:a9 10.100.0.12
Nov 29 07:51:10 compute-0 ceph-mon[75050]: osdmap e179: 3 total, 3 up, 3 in
Nov 29 07:51:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 750 KiB/s wr, 63 op/s
Nov 29 07:51:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093360439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093360439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:11 compute-0 ceph-mon[75050]: pgmap v1342: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 750 KiB/s wr, 63 op/s
Nov 29 07:51:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1093360439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1093360439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:11 compute-0 nova_compute[256729]: 2025-11-29 07:51:11.669 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:11 compute-0 nova_compute[256729]: 2025-11-29 07:51:11.808 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 117 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 463 KiB/s rd, 3.1 MiB/s wr, 144 op/s
Nov 29 07:51:12 compute-0 sshd-session[271619]: Connection closed by authenticating user root 143.14.121.41 port 43612 [preauth]
Nov 29 07:51:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3792427040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3792427040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:13 compute-0 ceph-mon[75050]: pgmap v1343: 305 pgs: 305 active+clean; 117 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 463 KiB/s rd, 3.1 MiB/s wr, 144 op/s
Nov 29 07:51:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3792427040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3792427040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 3.1 MiB/s wr, 164 op/s
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007571745580191443 of space, bias 1.0, pg target 0.2271523674057433 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:51:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
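
The pg_autoscaler figures above are a straight product: pg target = capacity ratio x bias x (OSD count x target PGs per OSD), where 3 OSDs at a budget of 100 PGs per OSD give the factor of 300 (the 100-per-OSD figure, mon_target_pg_per_osd, is the Ceph default and assumed here rather than logged). A quick check against the logged numbers:

    # pg_target = used_ratio * bias * (num_osds * target_pg_per_osd)
    BUDGET = 3 * 100  # 3 OSDs x mon_target_pg_per_osd (default 100; assumed)

    cases = [  # (pool, used_ratio, bias, target as logged above)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("vms",                0.0007571745580191443, 1.0, 0.2271523674057433),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for pool, ratio, bias, logged in cases:
        print(f"{pool}: computed {ratio * bias * BUDGET:.12g}, logged {logged:.12g}")
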
Nov 29 07:51:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:15.547 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:51:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:15.549 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:51:15 compute-0 nova_compute[256729]: 2025-11-29 07:51:15.549 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:15 compute-0 podman[271625]: 2025-11-29 07:51:15.737650949 +0000 UTC m=+0.087383463 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 07:51:15 compute-0 podman[271624]: 2025-11-29 07:51:15.76153256 +0000 UTC m=+0.116909577 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 07:51:15 compute-0 podman[271623]: 2025-11-29 07:51:15.762234419 +0000 UTC m=+0.122173030 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 07:51:15 compute-0 ceph-mon[75050]: pgmap v1344: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 534 KiB/s rd, 3.1 MiB/s wr, 164 op/s
Nov 29 07:51:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 2.6 MiB/s wr, 147 op/s
Nov 29 07:51:16 compute-0 nova_compute[256729]: 2025-11-29 07:51:16.671 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:16 compute-0 nova_compute[256729]: 2025-11-29 07:51:16.810 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:51:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 5009 writes, 22K keys, 5009 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 5008 writes, 5008 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1360 writes, 6215 keys, 1360 commit groups, 1.0 writes per commit group, ingest: 8.93 MB, 0.01 MB/s
                                           Interval WAL: 1359 writes, 1359 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.6      4.68              0.12        12    0.390       0      0       0.0       0.0
                                             L6      1/0    8.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     23.5     19.3      4.27              0.36        11    0.388     50K   5815       0.0       0.0
                                            Sum      1/0    8.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     11.2     12.2      8.95              0.47        23    0.389     50K   5815       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.0     14.1     14.1      2.58              0.19         8    0.323     20K   2026       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     23.5     19.3      4.27              0.36        11    0.388     50K   5815       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      4.67              0.12        11    0.425       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.026, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.11 GB write, 0.05 MB/s write, 0.10 GB read, 0.04 MB/s read, 8.9 seconds
                                           Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 2.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdb5ecb1f0#2 capacity: 304.00 MB usage: 8.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000186 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(583,8.56 MB,2.81582%) FilterBlock(24,146.11 KB,0.0469358%) IndexBlock(24,278.33 KB,0.0894095%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 07:51:17 compute-0 ceph-mon[75050]: pgmap v1345: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 2.6 MiB/s wr, 147 op/s
Nov 29 07:51:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 29 07:51:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 07:51:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.239 256736 DEBUG oslo_concurrency.lockutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.239 256736 DEBUG oslo_concurrency.lockutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.258 256736 DEBUG nova.objects.instance [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'flavor' on Instance uuid 82592acd-eff0-47b3-9bba-391f395f4bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.283 256736 INFO nova.virt.libvirt.driver [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Ignoring supplied device name: /dev/vdb
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.306 256736 DEBUG oslo_concurrency.lockutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:17 compute-0 sshd-session[271621]: Connection closed by authenticating user root 143.14.121.41 port 57384 [preauth]
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.781 256736 DEBUG oslo_concurrency.lockutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.782 256736 DEBUG oslo_concurrency.lockutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:17 compute-0 nova_compute[256729]: 2025-11-29 07:51:17.782 256736 INFO nova.compute.manager [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Attaching volume a079fbd7-4e02-4b79-a69f-6842dbcb8781 to /dev/vdb
Nov 29 07:51:18 compute-0 ceph-mon[75050]: osdmap e180: 3 total, 3 up, 3 in
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.254 256736 DEBUG os_brick.utils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.256 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.265 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.266 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[95e3ceb8-8714-44e8-94a1-54c51cf8f448]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.267 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.273 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.274 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[9d28da92-2650-487a-b495-eff323eda54e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.276 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.283 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.284 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2226e3-9bcd-42bc-bade-60907bfaceaf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.285 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[f951ef4c-5467-4375-85a0-3c3f559ef6cf]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.285 256736 DEBUG oslo_concurrency.processutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.309 256736 DEBUG oslo_concurrency.processutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.311 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.311 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.311 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.312 256736 DEBUG os_brick.utils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:51:18 compute-0 nova_compute[256729]: 2025-11-29 07:51:18.312 256736 DEBUG nova.virt.block_device [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Updating existing volume attachment record: c30c6d42-0613-45dd-a264-d717ffefe059 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:51:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 361 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Nov 29 07:51:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463109648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:19 compute-0 nova_compute[256729]: 2025-11-29 07:51:19.502 256736 DEBUG nova.objects.instance [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'flavor' on Instance uuid 82592acd-eff0-47b3-9bba-391f395f4bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:19 compute-0 nova_compute[256729]: 2025-11-29 07:51:19.548 256736 DEBUG nova.virt.libvirt.driver [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Attempting to attach volume a079fbd7-4e02-4b79-a69f-6842dbcb8781 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:51:19 compute-0 nova_compute[256729]: 2025-11-29 07:51:19.552 256736 DEBUG nova.virt.libvirt.guest [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:51:19 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:51:19 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-a079fbd7-4e02-4b79-a69f-6842dbcb8781">
Nov 29 07:51:19 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:19 compute-0 nova_compute[256729]:   </source>
Nov 29 07:51:19 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:51:19 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:51:19 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:51:19 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:51:19 compute-0 nova_compute[256729]:   <serial>a079fbd7-4e02-4b79-a69f-6842dbcb8781</serial>
Nov 29 07:51:19 compute-0 nova_compute[256729]: </disk>
Nov 29 07:51:19 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:51:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:20.551 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.0 MiB/s wr, 99 op/s
Nov 29 07:51:20 compute-0 ceph-mon[75050]: pgmap v1347: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 361 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Nov 29 07:51:21 compute-0 nova_compute[256729]: 2025-11-29 07:51:21.134 256736 DEBUG nova.virt.libvirt.driver [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:21 compute-0 nova_compute[256729]: 2025-11-29 07:51:21.135 256736 DEBUG nova.virt.libvirt.driver [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:21 compute-0 nova_compute[256729]: 2025-11-29 07:51:21.136 256736 DEBUG nova.virt.libvirt.driver [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:21 compute-0 nova_compute[256729]: 2025-11-29 07:51:21.136 256736 DEBUG nova.virt.libvirt.driver [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No VIF found with MAC fa:16:3e:55:a8:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:51:21 compute-0 nova_compute[256729]: 2025-11-29 07:51:21.513 256736 DEBUG oslo_concurrency.lockutils [None req-d7969e9c-5ab1-4bf6-b27f-d6259d64c37c 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:21 compute-0 nova_compute[256729]: 2025-11-29 07:51:21.707 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:21 compute-0 nova_compute[256729]: 2025-11-29 07:51:21.812 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2463109648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:22 compute-0 ceph-mon[75050]: pgmap v1348: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.0 MiB/s wr, 99 op/s
Nov 29 07:51:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:22 compute-0 sshd-session[271687]: Connection closed by authenticating user root 143.14.121.41 port 57392 [preauth]
Nov 29 07:51:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3158383128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 104 KiB/s wr, 35 op/s
Nov 29 07:51:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 29 07:51:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 07:51:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 29 07:51:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3158383128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:23 compute-0 ceph-mon[75050]: pgmap v1349: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 104 KiB/s wr, 35 op/s
Nov 29 07:51:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 29 07:51:24 compute-0 ceph-mon[75050]: osdmap e181: 3 total, 3 up, 3 in
Nov 29 07:51:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 25 KiB/s wr, 10 op/s
Nov 29 07:51:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 07:51:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 29 07:51:25 compute-0 sshd-session[271716]: Connection closed by authenticating user root 143.14.121.41 port 45238 [preauth]
Nov 29 07:51:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396819748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396819748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:25 compute-0 ceph-mon[75050]: pgmap v1351: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 25 KiB/s wr, 10 op/s
Nov 29 07:51:25 compute-0 ceph-mon[75050]: osdmap e182: 3 total, 3 up, 3 in
Nov 29 07:51:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/396819748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/396819748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:26 compute-0 nova_compute[256729]: 2025-11-29 07:51:26.241 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 7.1 KiB/s wr, 10 op/s
Nov 29 07:51:26 compute-0 nova_compute[256729]: 2025-11-29 07:51:26.710 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:26 compute-0 nova_compute[256729]: 2025-11-29 07:51:26.813 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 29 07:51:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 07:51:26 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 29 07:51:27 compute-0 ceph-mon[75050]: pgmap v1353: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 7.1 KiB/s wr, 10 op/s
Nov 29 07:51:27 compute-0 ceph-mon[75050]: osdmap e183: 3 total, 3 up, 3 in
Nov 29 07:51:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3406905801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3406905801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3406905801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3406905801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.498 256736 DEBUG oslo_concurrency.lockutils [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.499 256736 DEBUG oslo_concurrency.lockutils [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.515 256736 INFO nova.compute.manager [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Detaching volume a079fbd7-4e02-4b79-a69f-6842dbcb8781
Nov 29 07:51:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 14 KiB/s wr, 95 op/s
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.698 256736 INFO nova.virt.block_device [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Attempting to driver detach volume a079fbd7-4e02-4b79-a69f-6842dbcb8781 from mountpoint /dev/vdb
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.710 256736 DEBUG nova.virt.libvirt.driver [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Attempting to detach device vdb from instance 82592acd-eff0-47b3-9bba-391f395f4bab from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.711 256736 DEBUG nova.virt.libvirt.guest [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-a079fbd7-4e02-4b79-a69f-6842dbcb8781">
Nov 29 07:51:28 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   </source>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <serial>a079fbd7-4e02-4b79-a69f-6842dbcb8781</serial>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]: </disk>
Nov 29 07:51:28 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.726 256736 INFO nova.virt.libvirt.driver [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully detached device vdb from instance 82592acd-eff0-47b3-9bba-391f395f4bab from the persistent domain config.
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.727 256736 DEBUG nova.virt.libvirt.driver [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 82592acd-eff0-47b3-9bba-391f395f4bab from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.728 256736 DEBUG nova.virt.libvirt.guest [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-a079fbd7-4e02-4b79-a69f-6842dbcb8781">
Nov 29 07:51:28 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   </source>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <serial>a079fbd7-4e02-4b79-a69f-6842dbcb8781</serial>
Nov 29 07:51:28 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:51:28 compute-0 nova_compute[256729]: </disk>
Nov 29 07:51:28 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.779 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764402688.7792141, 82592acd-eff0-47b3-9bba-391f395f4bab => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.783 256736 DEBUG nova.virt.libvirt.driver [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 82592acd-eff0-47b3-9bba-391f395f4bab _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:51:28 compute-0 nova_compute[256729]: 2025-11-29 07:51:28.786 256736 INFO nova.virt.libvirt.driver [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully detached device vdb from instance 82592acd-eff0-47b3-9bba-391f395f4bab from the live domain config.
Nov 29 07:51:29 compute-0 nova_compute[256729]: 2025-11-29 07:51:29.051 256736 DEBUG nova.objects.instance [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'flavor' on Instance uuid 82592acd-eff0-47b3-9bba-391f395f4bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:29 compute-0 nova_compute[256729]: 2025-11-29 07:51:29.089 256736 DEBUG oslo_concurrency.lockutils [None req-a9e432de-d397-4080-96f7-c6aec5224260 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:29 compute-0 ceph-mon[75050]: pgmap v1355: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 14 KiB/s wr, 95 op/s
Nov 29 07:51:29 compute-0 sshd-session[271718]: Connection closed by authenticating user root 143.14.121.41 port 45250 [preauth]
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.146 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.147 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.147 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.148 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.148 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.150 256736 INFO nova.compute.manager [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Terminating instance
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.152 256736 DEBUG nova.compute.manager [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:51:30 compute-0 kernel: tap8d79a1b9-de (unregistering): left promiscuous mode
Nov 29 07:51:30 compute-0 NetworkManager[48962]: <info>  [1764402690.2042] device (tap8d79a1b9-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:51:30 compute-0 ovn_controller[153383]: 2025-11-29T07:51:30Z|00060|binding|INFO|Releasing lport 8d79a1b9-de43-4d80-994a-adff27ccf2a7 from this chassis (sb_readonly=0)
Nov 29 07:51:30 compute-0 ovn_controller[153383]: 2025-11-29T07:51:30Z|00061|binding|INFO|Setting lport 8d79a1b9-de43-4d80-994a-adff27ccf2a7 down in Southbound
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.216 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:30 compute-0 ovn_controller[153383]: 2025-11-29T07:51:30Z|00062|binding|INFO|Removing iface tap8d79a1b9-de ovn-installed in OVS
Nov 29 07:51:30 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:30.235 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:a8:a9 10.100.0.12'], port_security=['fa:16:3e:55:a8:a9 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '82592acd-eff0-47b3-9bba-391f395f4bab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8117debb786c4549812cc6e7571f6d4d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34247bb0-2cba-49f1-a2f6-90778dcaa45a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b45dfb6d-5934-4acb-b62b-b7104c4a665d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=8d79a1b9-de43-4d80-994a-adff27ccf2a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:51:30 compute-0 ovn_controller[153383]: 2025-11-29T07:51:30Z|00063|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:51:30 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:30.237 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 8d79a1b9-de43-4d80-994a-adff27ccf2a7 in datapath a24c1904-53b2-4346-8806-9a1bad79dd5c unbound from our chassis
Nov 29 07:51:30 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:30.239 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a24c1904-53b2-4346-8806-9a1bad79dd5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:51:30 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:30.241 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[278b5598-82da-4c69-a9fe-2fb3349948b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.241 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:30 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:30.242 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c namespace which is not needed anymore
Nov 29 07:51:30 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 29 07:51:30 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 14.595s CPU time.
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.257 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:30 compute-0 systemd-machined[217781]: Machine qemu-3-instance-00000003 terminated.
Nov 29 07:51:30 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[271544]: [NOTICE]   (271551) : haproxy version is 2.8.14-c23fe91
Nov 29 07:51:30 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[271544]: [NOTICE]   (271551) : path to executable is /usr/sbin/haproxy
Nov 29 07:51:30 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[271544]: [WARNING]  (271551) : Exiting Master process...
Nov 29 07:51:30 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[271544]: [ALERT]    (271551) : Current worker (271553) exited with code 143 (Terminated)
Nov 29 07:51:30 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[271544]: [WARNING]  (271551) : All workers exited. Exiting... (0)
Nov 29 07:51:30 compute-0 systemd[1]: libpod-498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9.scope: Deactivated successfully.
Nov 29 07:51:30 compute-0 ovn_controller[153383]: 2025-11-29T07:51:30Z|00064|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.394 256736 INFO nova.virt.libvirt.driver [-] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Instance destroyed successfully.
Nov 29 07:51:30 compute-0 podman[271746]: 2025-11-29 07:51:30.396083757 +0000 UTC m=+0.050886718 container died 498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.395 256736 DEBUG nova.objects.instance [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'resources' on Instance uuid 82592acd-eff0-47b3-9bba-391f395f4bab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.396 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.417 256736 DEBUG nova.virt.libvirt.vif [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:50:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-320687856',display_name='tempest-VolumesBackupsTest-instance-320687856',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-320687856',id=3,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRag5qkeBRJ62eeHZa7MgR0OvNArfyVXRybex9P+uyBJZ3MeRwnc6HFbAkK1pZNSk2IxUsZ4AocbOGxJy1pZKtNwpHQBf2BhZaRNGwk2OxDmpm0U3hlzSnLIoG3HAZLWQ==',key_name='tempest-keypair-2073514007',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8117debb786c4549812cc6e7571f6d4d',ramdisk_id='',reservation_id='r-42ibu52o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-12225578',owner_user_name='tempest-VolumesBackupsTest-12225578-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6bef1230e3de4a87aa01df74ec671a23',uuid=82592acd-eff0-47b3-9bba-391f395f4bab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.418 256736 DEBUG nova.network.os_vif_util [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converting VIF {"id": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "address": "fa:16:3e:55:a8:a9", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d79a1b9-de", "ovs_interfaceid": "8d79a1b9-de43-4d80-994a-adff27ccf2a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.418 256736 DEBUG nova.network.os_vif_util [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:a8:a9,bridge_name='br-int',has_traffic_filtering=True,id=8d79a1b9-de43-4d80-994a-adff27ccf2a7,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d79a1b9-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.419 256736 DEBUG os_vif [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:a8:a9,bridge_name='br-int',has_traffic_filtering=True,id=8d79a1b9-de43-4d80-994a-adff27ccf2a7,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d79a1b9-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.421 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.421 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d79a1b9-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.422 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.423 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:30 compute-0 nova_compute[256729]: 2025-11-29 07:51:30.425 256736 INFO os_vif [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:a8:a9,bridge_name='br-int',has_traffic_filtering=True,id=8d79a1b9-de43-4d80-994a-adff27ccf2a7,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d79a1b9-de')
Nov 29 07:51:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 4.7 KiB/s wr, 71 op/s
Nov 29 07:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9-userdata-shm.mount: Deactivated successfully.
Nov 29 07:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a122d52a4869b4c023d67aae666bc9f89a7d1d85c31cb273ecf621a6a31c99-merged.mount: Deactivated successfully.
Nov 29 07:51:31 compute-0 ceph-mon[75050]: pgmap v1356: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 4.7 KiB/s wr, 71 op/s
Nov 29 07:51:31 compute-0 podman[271746]: 2025-11-29 07:51:31.291917853 +0000 UTC m=+0.946720774 container cleanup 498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:51:31 compute-0 systemd[1]: libpod-conmon-498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9.scope: Deactivated successfully.
Nov 29 07:51:31 compute-0 nova_compute[256729]: 2025-11-29 07:51:31.713 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:32 compute-0 podman[271804]: 2025-11-29 07:51:32.29825192 +0000 UTC m=+0.944760310 container remove 498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.310 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[36ff50e8-7f1d-4338-ab40-4a6640d6988f]: (4, ('Sat Nov 29 07:51:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c (498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9)\n498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9\nSat Nov 29 07:51:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c (498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9)\n498d89c84f5c470866d3f14395216e8339e35d2e56e4ebea5115109f330f62c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.312 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b35f90c2-39d3-42b7-93d2-3bc9c1776af3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.312 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa24c1904-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:32 compute-0 nova_compute[256729]: 2025-11-29 07:51:32.315 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:32 compute-0 kernel: tapa24c1904-50: left promiscuous mode
Nov 29 07:51:32 compute-0 nova_compute[256729]: 2025-11-29 07:51:32.336 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.340 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0b1fca-0b4b-497a-9015-c8b256d2d07c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.359 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3f90da4d-9700-4b43-a20b-eb9795337a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.360 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[274a5c6e-bfd4-47a7-aee5-61c387bbf37c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.375 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[71a3b4fa-425e-4254-b5b8-79e84d2d3b57]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496974, 'reachable_time': 37549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271820, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.378 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:51:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:32.378 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf967ea-c76f-41cf-b1d3-54041f9c33a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:32 compute-0 systemd[1]: run-netns-ovnmeta\x2da24c1904\x2d53b2\x2d4346\x2d8806\x2d9a1bad79dd5c.mount: Deactivated successfully.
Nov 29 07:51:32 compute-0 sshd-session[271722]: Invalid user test from 143.14.121.41 port 45258
Nov 29 07:51:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 5.1 KiB/s wr, 98 op/s
Nov 29 07:51:33 compute-0 sshd-session[271722]: Connection closed by invalid user test 143.14.121.41 port 45258 [preauth]
Nov 29 07:51:33 compute-0 ceph-mon[75050]: pgmap v1357: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 5.1 KiB/s wr, 98 op/s
Nov 29 07:51:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 80 op/s
Nov 29 07:51:35 compute-0 nova_compute[256729]: 2025-11-29 07:51:35.091 256736 DEBUG nova.compute.manager [req-bd214b39-1e71-4bea-a7a2-116a27263f68 req-e5e098a5-d2b3-46dc-9f75-465933ec7d9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-vif-unplugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:35 compute-0 nova_compute[256729]: 2025-11-29 07:51:35.092 256736 DEBUG oslo_concurrency.lockutils [req-bd214b39-1e71-4bea-a7a2-116a27263f68 req-e5e098a5-d2b3-46dc-9f75-465933ec7d9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:35 compute-0 nova_compute[256729]: 2025-11-29 07:51:35.092 256736 DEBUG oslo_concurrency.lockutils [req-bd214b39-1e71-4bea-a7a2-116a27263f68 req-e5e098a5-d2b3-46dc-9f75-465933ec7d9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:35 compute-0 nova_compute[256729]: 2025-11-29 07:51:35.092 256736 DEBUG oslo_concurrency.lockutils [req-bd214b39-1e71-4bea-a7a2-116a27263f68 req-e5e098a5-d2b3-46dc-9f75-465933ec7d9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:35 compute-0 nova_compute[256729]: 2025-11-29 07:51:35.093 256736 DEBUG nova.compute.manager [req-bd214b39-1e71-4bea-a7a2-116a27263f68 req-e5e098a5-d2b3-46dc-9f75-465933ec7d9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] No waiting events found dispatching network-vif-unplugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:51:35 compute-0 nova_compute[256729]: 2025-11-29 07:51:35.093 256736 DEBUG nova.compute.manager [req-bd214b39-1e71-4bea-a7a2-116a27263f68 req-e5e098a5-d2b3-46dc-9f75-465933ec7d9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-vif-unplugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:51:35 compute-0 nova_compute[256729]: 2025-11-29 07:51:35.423 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:35 compute-0 ceph-mon[75050]: pgmap v1358: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 80 op/s
Nov 29 07:51:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:51:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:51:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 93 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.5 KiB/s wr, 79 op/s
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.715 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.790 256736 INFO nova.virt.libvirt.driver [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Deleting instance files /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab_del
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.791 256736 INFO nova.virt.libvirt.driver [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Deletion of /var/lib/nova/instances/82592acd-eff0-47b3-9bba-391f395f4bab_del complete
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.849 256736 INFO nova.compute.manager [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Took 6.70 seconds to destroy the instance on the hypervisor.
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.850 256736 DEBUG oslo.service.loopingcall [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.850 256736 DEBUG nova.compute.manager [-] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.850 256736 DEBUG nova.network.neutron [-] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.876 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.876 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:36 compute-0 nova_compute[256729]: 2025-11-29 07:51:36.898 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.015 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.017 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.031 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.031 256736 INFO nova.compute.claims [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:51:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 29 07:51:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 07:51:37 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.416 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.561 256736 DEBUG nova.compute.manager [req-74f437b6-9be9-47c7-b21f-9e8c16815969 req-b3a847c8-5ca3-42ca-b90a-bd1561a173ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.562 256736 DEBUG oslo_concurrency.lockutils [req-74f437b6-9be9-47c7-b21f-9e8c16815969 req-b3a847c8-5ca3-42ca-b90a-bd1561a173ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.562 256736 DEBUG oslo_concurrency.lockutils [req-74f437b6-9be9-47c7-b21f-9e8c16815969 req-b3a847c8-5ca3-42ca-b90a-bd1561a173ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.562 256736 DEBUG oslo_concurrency.lockutils [req-74f437b6-9be9-47c7-b21f-9e8c16815969 req-b3a847c8-5ca3-42ca-b90a-bd1561a173ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.562 256736 DEBUG nova.compute.manager [req-74f437b6-9be9-47c7-b21f-9e8c16815969 req-b3a847c8-5ca3-42ca-b90a-bd1561a173ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] No waiting events found dispatching network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.563 256736 WARNING nova.compute.manager [req-74f437b6-9be9-47c7-b21f-9e8c16815969 req-b3a847c8-5ca3-42ca-b90a-bd1561a173ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received unexpected event network-vif-plugged-8d79a1b9-de43-4d80-994a-adff27ccf2a7 for instance with vm_state active and task_state deleting.
Nov 29 07:51:37 compute-0 ceph-mon[75050]: pgmap v1359: 305 pgs: 305 active+clean; 93 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.5 KiB/s wr, 79 op/s
Nov 29 07:51:37 compute-0 ceph-mon[75050]: osdmap e184: 3 total, 3 up, 3 in
Nov 29 07:51:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807762457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.848 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.854 256736 DEBUG nova.compute.provider_tree [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.879 256736 DEBUG nova.scheduler.client.report [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.914 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:37 compute-0 nova_compute[256729]: 2025-11-29 07:51:37.915 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.005 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.006 256736 DEBUG nova.network.neutron [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.031 256736 INFO nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.062 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.193 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.195 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.195 256736 INFO nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Creating image(s)
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.223 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.249 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.270 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.273 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.324 256736 DEBUG nova.network.neutron [-] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.335 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.336 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.336 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.337 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.357 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.362 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.382 256736 INFO nova.compute.manager [-] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Took 1.53 seconds to deallocate network for instance.
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.447 256736 DEBUG nova.compute.manager [req-6a9be855-8a9c-4c52-a845-89973d1c1a82 req-e64bcfc3-1526-45dc-8df3-99085aa44ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Received event network-vif-deleted-8d79a1b9-de43-4d80-994a-adff27ccf2a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.451 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.451 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.539 256736 DEBUG oslo_concurrency.processutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:38 compute-0 nova_compute[256729]: 2025-11-29 07:51:38.564 256736 DEBUG nova.policy [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a47b942d30fe4bd69742fcb8e3cfdb1d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fa0635c8b0e4d5b8c2a094db6beebe2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:51:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 29 07:51:39 compute-0 sshd-session[271821]: Invalid user test1 from 143.14.121.41 port 55732
Nov 29 07:51:39 compute-0 nova_compute[256729]: 2025-11-29 07:51:39.357 256736 DEBUG nova.network.neutron [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Successfully created port: 37259991-854b-4c1c-b4f6-77bef9eeb129 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:51:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762460749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:39 compute-0 nova_compute[256729]: 2025-11-29 07:51:39.466 256736 DEBUG oslo_concurrency.processutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.928s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:39 compute-0 nova_compute[256729]: 2025-11-29 07:51:39.475 256736 DEBUG nova.compute.provider_tree [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:39 compute-0 nova_compute[256729]: 2025-11-29 07:51:39.498 256736 DEBUG nova.scheduler.client.report [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:39 compute-0 sshd-session[271821]: Connection closed by invalid user test1 143.14.121.41 port 55732 [preauth]
Nov 29 07:51:39 compute-0 nova_compute[256729]: 2025-11-29 07:51:39.525 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:39 compute-0 nova_compute[256729]: 2025-11-29 07:51:39.550 256736 INFO nova.scheduler.client.report [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Deleted allocations for instance 82592acd-eff0-47b3-9bba-391f395f4bab
Nov 29 07:51:39 compute-0 nova_compute[256729]: 2025-11-29 07:51:39.625 256736 DEBUG oslo_concurrency.lockutils [None req-1c99d713-fb3f-4a41-ad46-7618a71751ce 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "82592acd-eff0-47b3-9bba-391f395f4bab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2807762457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:40 compute-0 nova_compute[256729]: 2025-11-29 07:51:40.426 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.012 256736 DEBUG nova.network.neutron [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Successfully updated port: 37259991-854b-4c1c-b4f6-77bef9eeb129 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.042 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "refresh_cache-d052677b-94f4-47ef-94ad-02bc1cbf6dd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.042 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquired lock "refresh_cache-d052677b-94f4-47ef-94ad-02bc1cbf6dd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.043 256736 DEBUG nova.network.neutron [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.139 256736 DEBUG nova.compute.manager [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received event network-changed-37259991-854b-4c1c-b4f6-77bef9eeb129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.140 256736 DEBUG nova.compute.manager [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Refreshing instance network info cache due to event network-changed-37259991-854b-4c1c-b4f6-77bef9eeb129. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.141 256736 DEBUG oslo_concurrency.lockutils [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-d052677b-94f4-47ef-94ad-02bc1cbf6dd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.298 256736 DEBUG nova.network.neutron [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:51:41 compute-0 nova_compute[256729]: 2025-11-29 07:51:41.756 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:41 compute-0 sshd-session[271962]: Invalid user telnet from 143.14.121.41 port 55734
Nov 29 07:51:42 compute-0 ceph-mon[75050]: pgmap v1361: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 29 07:51:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3762460749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:42 compute-0 nova_compute[256729]: 2025-11-29 07:51:42.166 256736 DEBUG nova.network.neutron [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Updating instance_info_cache with network_info: [{"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:42 compute-0 nova_compute[256729]: 2025-11-29 07:51:42.195 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Releasing lock "refresh_cache-d052677b-94f4-47ef-94ad-02bc1cbf6dd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:42 compute-0 nova_compute[256729]: 2025-11-29 07:51:42.196 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Instance network_info: |[{"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:51:42 compute-0 nova_compute[256729]: 2025-11-29 07:51:42.197 256736 DEBUG oslo_concurrency.lockutils [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-d052677b-94f4-47ef-94ad-02bc1cbf6dd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:51:42 compute-0 nova_compute[256729]: 2025-11-29 07:51:42.197 256736 DEBUG nova.network.neutron [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Refreshing network info cache for port 37259991-854b-4c1c-b4f6-77bef9eeb129 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:51:42 compute-0 sshd-session[271962]: Connection closed by invalid user telnet 143.14.121.41 port 55734 [preauth]
Nov 29 07:51:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 66 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 29 07:51:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 29 07:51:44 compute-0 ceph-mon[75050]: pgmap v1362: 305 pgs: 305 active+clean; 41 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 29 07:51:44 compute-0 nova_compute[256729]: 2025-11-29 07:51:44.882 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.003 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] resizing rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.100 256736 DEBUG nova.network.neutron [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Updated VIF entry in instance network info cache for port 37259991-854b-4c1c-b4f6-77bef9eeb129. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.101 256736 DEBUG nova.network.neutron [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Updating instance_info_cache with network_info: [{"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.122 256736 DEBUG oslo_concurrency.lockutils [req-4808c10a-5291-41c8-9948-5324047fad57 req-290e7430-0c65-4196-bf13-03c82b915a92 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-d052677b-94f4-47ef-94ad-02bc1cbf6dd2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.392 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402690.390488, 82592acd-eff0-47b3-9bba-391f395f4bab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.392 256736 INFO nova.compute.manager [-] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] VM Stopped (Lifecycle Event)
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.417 256736 DEBUG nova.compute.manager [None req-80aa96b5-d6d3-4c22-bc5b-b3cc87bec7d3 - - - - - -] [instance: 82592acd-eff0-47b3-9bba-391f395f4bab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:45 compute-0 nova_compute[256729]: 2025-11-29 07:51:45.429 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 29 07:51:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 07:51:46 compute-0 ceph-mon[75050]: pgmap v1363: 305 pgs: 305 active+clean; 66 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Nov 29 07:51:46 compute-0 ceph-mon[75050]: pgmap v1364: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 29 07:51:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.103 256736 DEBUG nova.objects.instance [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lazy-loading 'migration_context' on Instance uuid d052677b-94f4-47ef-94ad-02bc1cbf6dd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.126 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.126 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Ensure instance console log exists: /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.127 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.127 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.128 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.131 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Start _get_guest_xml network_info=[{"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.136 256736 WARNING nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.142 256736 DEBUG nova.virt.libvirt.host [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.143 256736 DEBUG nova.virt.libvirt.host [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.146 256736 DEBUG nova.virt.libvirt.host [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.146 256736 DEBUG nova.virt.libvirt.host [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.147 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.147 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.147 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.148 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.148 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.148 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.148 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.149 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.149 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.149 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.149 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.150 256736 DEBUG nova.virt.hardware [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.152 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.176 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426964175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.584 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.607 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.611 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:46 compute-0 sshd-session[271964]: Connection closed by authenticating user root 143.14.121.41 port 34882 [preauth]
Nov 29 07:51:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 MiB/s wr, 41 op/s
Nov 29 07:51:46 compute-0 podman[272081]: 2025-11-29 07:51:46.685065289 +0000 UTC m=+0.057795307 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 07:51:46 compute-0 podman[272079]: 2025-11-29 07:51:46.698861235 +0000 UTC m=+0.072964210 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:51:46 compute-0 podman[272075]: 2025-11-29 07:51:46.715733354 +0000 UTC m=+0.092839972 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 07:51:46 compute-0 nova_compute[256729]: 2025-11-29 07:51:46.754 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298186208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298186208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:51:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1593089235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:47 compute-0 ceph-mon[75050]: osdmap e185: 3 total, 3 up, 3 in
Nov 29 07:51:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3426964175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:47 compute-0 ceph-mon[75050]: pgmap v1366: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 MiB/s wr, 41 op/s
Nov 29 07:51:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/298186208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/298186208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.033 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.035 256736 DEBUG nova.virt.libvirt.vif [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:51:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1580983255',display_name='tempest-VolumesActionsTest-instance-1580983255',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1580983255',id=4,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fa0635c8b0e4d5b8c2a094db6beebe2',ramdisk_id='',reservation_id='r-wakx735a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-814590685',owner_user_name='tempest-VolumesActionsTest-814590685-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:51:38Z,user_data=None,user_id='a47b942d30fe4bd69742fcb8e3cfdb1d',uuid=d052677b-94f4-47ef-94ad-02bc1cbf6dd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.036 256736 DEBUG nova.network.os_vif_util [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converting VIF {"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.037 256736 DEBUG nova.network.os_vif_util [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:58:29,bridge_name='br-int',has_traffic_filtering=True,id=37259991-854b-4c1c-b4f6-77bef9eeb129,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37259991-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.038 256736 DEBUG nova.objects.instance [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d052677b-94f4-47ef-94ad-02bc1cbf6dd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.085 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <uuid>d052677b-94f4-47ef-94ad-02bc1cbf6dd2</uuid>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <name>instance-00000004</name>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesActionsTest-instance-1580983255</nova:name>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:51:46</nova:creationTime>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:user uuid="a47b942d30fe4bd69742fcb8e3cfdb1d">tempest-VolumesActionsTest-814590685-project-member</nova:user>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:project uuid="6fa0635c8b0e4d5b8c2a094db6beebe2">tempest-VolumesActionsTest-814590685</nova:project>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <nova:port uuid="37259991-854b-4c1c-b4f6-77bef9eeb129">
Nov 29 07:51:47 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <system>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <entry name="serial">d052677b-94f4-47ef-94ad-02bc1cbf6dd2</entry>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <entry name="uuid">d052677b-94f4-47ef-94ad-02bc1cbf6dd2</entry>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </system>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <os>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   </os>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <features>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   </features>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk">
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       </source>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk.config">
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       </source>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:51:47 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:45:58:29"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <target dev="tap37259991-85"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/console.log" append="off"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <video>
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </video>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:51:47 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:51:47 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:51:47 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:51:47 compute-0 nova_compute[256729]: </domain>
Nov 29 07:51:47 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.087 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Preparing to wait for external event network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.087 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.087 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.088 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.088 256736 DEBUG nova.virt.libvirt.vif [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:51:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1580983255',display_name='tempest-VolumesActionsTest-instance-1580983255',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1580983255',id=4,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fa0635c8b0e4d5b8c2a094db6beebe2',ramdisk_id='',reservation_id='r-wakx735a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-814590685',owner_user_name='tempest-VolumesActionsTest-814590685-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:51:38Z,user_data=None,user_id='a47b942d30fe4bd69742fcb8e3cfdb1d',uuid=d052677b-94f4-47ef-94ad-02bc1cbf6dd2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.089 256736 DEBUG nova.network.os_vif_util [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converting VIF {"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.089 256736 DEBUG nova.network.os_vif_util [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:58:29,bridge_name='br-int',has_traffic_filtering=True,id=37259991-854b-4c1c-b4f6-77bef9eeb129,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37259991-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.090 256736 DEBUG os_vif [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:58:29,bridge_name='br-int',has_traffic_filtering=True,id=37259991-854b-4c1c-b4f6-77bef9eeb129,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37259991-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.091 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.091 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.091 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.097 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.098 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37259991-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.098 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37259991-85, col_values=(('external_ids', {'iface-id': '37259991-854b-4c1c-b4f6-77bef9eeb129', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:58:29', 'vm-uuid': 'd052677b-94f4-47ef-94ad-02bc1cbf6dd2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.099 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:47 compute-0 NetworkManager[48962]: <info>  [1764402707.1006] manager: (tap37259991-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.103 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.106 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.107 256736 INFO os_vif [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:58:29,bridge_name='br-int',has_traffic_filtering=True,id=37259991-854b-4c1c-b4f6-77bef9eeb129,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37259991-85')
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.190 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.190 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.191 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] No VIF found with MAC fa:16:3e:45:58:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.191 256736 INFO nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Using config drive
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.210 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.651 256736 INFO nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Creating config drive at /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/disk.config
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.662 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jgo_g_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.795 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jgo_g_4" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.823 256736 DEBUG nova.storage.rbd_utils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:51:47 compute-0 nova_compute[256729]: 2025-11-29 07:51:47.827 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/disk.config d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1175890370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1175890370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.016 256736 DEBUG oslo_concurrency.processutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/disk.config d052677b-94f4-47ef-94ad-02bc1cbf6dd2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.018 256736 INFO nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Deleting local config drive /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2/disk.config because it was imported into RBD.
Nov 29 07:51:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1593089235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:51:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1175890370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1175890370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:48 compute-0 kernel: tap37259991-85: entered promiscuous mode
Nov 29 07:51:48 compute-0 NetworkManager[48962]: <info>  [1764402708.0896] manager: (tap37259991-85): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 07:51:48 compute-0 ovn_controller[153383]: 2025-11-29T07:51:48Z|00065|binding|INFO|Claiming lport 37259991-854b-4c1c-b4f6-77bef9eeb129 for this chassis.
Nov 29 07:51:48 compute-0 ovn_controller[153383]: 2025-11-29T07:51:48Z|00066|binding|INFO|37259991-854b-4c1c-b4f6-77bef9eeb129: Claiming fa:16:3e:45:58:29 10.100.0.13
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.091 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.106 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:58:29 10.100.0.13'], port_security=['fa:16:3e:45:58:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd052677b-94f4-47ef-94ad-02bc1cbf6dd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0250a927-9e86-48d0-9872-012000185830', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fa0635c8b0e4d5b8c2a094db6beebe2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24ad3171-6597-46b3-9123-d3df2d1383f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d521626b-b074-4c60-9bd5-bdea08a2b916, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=37259991-854b-4c1c-b4f6-77bef9eeb129) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.107 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 37259991-854b-4c1c-b4f6-77bef9eeb129 in datapath 0250a927-9e86-48d0-9872-012000185830 bound to our chassis
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.109 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0250a927-9e86-48d0-9872-012000185830
Nov 29 07:51:48 compute-0 systemd-udevd[272235]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.125 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bdd74c-07fa-401c-9d67-5d746d20e373]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.126 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0250a927-91 in ovnmeta-0250a927-9e86-48d0-9872-012000185830 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.129 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0250a927-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.129 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6976db-d804-4527-abb3-00cebe0837fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.130 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4d436fa8-e60a-4ac7-bc84-09d89309e947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 systemd-machined[217781]: New machine qemu-4-instance-00000004.
Nov 29 07:51:48 compute-0 NetworkManager[48962]: <info>  [1764402708.1411] device (tap37259991-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:51:48 compute-0 NetworkManager[48962]: <info>  [1764402708.1426] device (tap37259991-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.151 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[259f36ca-4b64-4407-b976-55341f69403d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.177 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6d802c0d-360d-4f92-8fc2-6ee92925d9f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
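[annotation] The privsep reply above carries an (stdout, stderr, returncode) triple from enabling promote_secondaries inside the new namespace. The exact mechanism privsep used is not visible in the log; a plausible sketch using the sysctl CLI, purely for illustration:

    import subprocess

    def enable_promote_secondaries(ns: str):
        """Return the (stdout, stderr, rc) triple echoed in the reply above;
        hypothetical helper, the agent may set the sysctl differently."""
        p = subprocess.run(
            ["ip", "netns", "exec", ns,
             "sysctl", "-w", "net.ipv4.conf.all.promote_secondaries=1"],
            capture_output=True, text=True)
        return p.stdout, p.stderr, p.returncode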
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.191 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-0 ovn_controller[153383]: 2025-11-29T07:51:48Z|00067|binding|INFO|Setting lport 37259991-854b-4c1c-b4f6-77bef9eeb129 ovn-installed in OVS
Nov 29 07:51:48 compute-0 ovn_controller[153383]: 2025-11-29T07:51:48Z|00068|binding|INFO|Setting lport 37259991-854b-4c1c-b4f6-77bef9eeb129 up in Southbound
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.195 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.209 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[b60e6920-1dce-4143-a74d-39383033cdae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 NetworkManager[48962]: <info>  [1764402708.2154] manager: (tap0250a927-90): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.215 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[37432d64-9bfc-4b86-8f6d-cc481d6dd072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.253 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[67bf277d-4711-4d39-9626-4f25e73377cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.256 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1de6db-eabd-4e9d-8637-2d71b67eee85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 NetworkManager[48962]: <info>  [1764402708.2791] device (tap0250a927-90): carrier: link connected
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.284 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7668e2db-4efe-443d-b473-f8a67cf38521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.305 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6555da42-0959-4a43-b908-21d9af37ba49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0250a927-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:51:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502334, 'reachable_time': 42382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272267, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.324 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[892cb0bd-6bca-40ad-b5db-3254d34be447]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:516f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502334, 'tstamp': 502334}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272268, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.344 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0efb68fc-7cbe-418e-b288-efcb9c6a3abd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0250a927-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:51:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502334, 'reachable_time': 42382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272269, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.376 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[49e79d19-2bd2-4280-bb8f-fcb39d64ae0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.449 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[62e99713-04c2-4b49-a4d9-5463a251d53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.452 256736 DEBUG nova.compute.manager [req-59e5ec8f-89e3-41ed-9304-a3ea7ca5e14f req-8d2f9db7-b736-41ed-9087-6800fcf980ab ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received event network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.454 256736 DEBUG oslo_concurrency.lockutils [req-59e5ec8f-89e3-41ed-9304-a3ea7ca5e14f req-8d2f9db7-b736-41ed-9087-6800fcf980ab ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.455 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0250a927-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.454 256736 DEBUG oslo_concurrency.lockutils [req-59e5ec8f-89e3-41ed-9304-a3ea7ca5e14f req-8d2f9db7-b736-41ed-9087-6800fcf980ab ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.455 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.456 256736 DEBUG oslo_concurrency.lockutils [req-59e5ec8f-89e3-41ed-9304-a3ea7ca5e14f req-8d2f9db7-b736-41ed-9087-6800fcf980ab ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.456 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0250a927-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.456 256736 DEBUG nova.compute.manager [req-59e5ec8f-89e3-41ed-9304-a3ea7ca5e14f req-8d2f9db7-b736-41ed-9087-6800fcf980ab ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Processing event network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
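[annotation] The lockutils records above show nova serializing event delivery with a per-instance named lock, "<uuid>-events": the handler acquires it, pops any waiter registered for the incoming event, and releases it within a couple of milliseconds. A sketch of that pattern (the lock, table, and helper are illustrative, not nova's internals):

    import threading
    from typing import Optional

    _events_lock = threading.Lock()   # stands in for the named "<uuid>-events" lock
    _waiters = {}                     # event name -> threading.Event awaited elsewhere

    def pop_instance_event(name: str) -> Optional[threading.Event]:
        """Acquire, pop the waiter for this event (if any), release; the
        'waited'/'held' timings in the DEBUG records come from this window."""
        with _events_lock:
            return _waiters.pop(name, None)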
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.459 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-0 kernel: tap0250a927-90: entered promiscuous mode
Nov 29 07:51:48 compute-0 NetworkManager[48962]: <info>  [1764402708.4596] manager: (tap0250a927-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.464 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0250a927-90, col_values=(('external_ids', {'iface-id': '125944c7-1b99-4283-8df8-d8ef5cebc9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:48 compute-0 ovn_controller[153383]: 2025-11-29T07:51:48Z|00069|binding|INFO|Releasing lport 125944c7-1b99-4283-8df8-d8ef5cebc9bf from this chassis (sb_readonly=0)
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.465 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.496 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.498 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0250a927-9e86-48d0-9872-012000185830.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0250a927-9e86-48d0-9872-012000185830.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.499 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[603144ed-63e5-43a2-8f02-ed0fc88bcef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.500 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-0250a927-9e86-48d0-9872-012000185830
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/0250a927-9e86-48d0-9872-012000185830.pid.haproxy
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 0250a927-9e86-48d0-9872-012000185830
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:51:48 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:48.501 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'env', 'PROCESS_TAG=haproxy-0250a927-9e86-48d0-9872-012000185830', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0250a927-9e86-48d0-9872-012000185830.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
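[annotation] The agent renders the configuration dumped above into /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf and launches haproxy inside the ovnmeta namespace through rootwrap, as the "Running command" record shows. A trimmed-down sketch of that render-and-spawn step, assuming a simplified template (the real config also carries the defaults section logged above):

    import subprocess
    from string import Template

    CFG = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$net
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/$net.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID $net
    """)

    def spawn_metadata_proxy(net_id: str) -> None:
        """Write the rendered config and start haproxy in the namespace;
        a sketch of the command logged above, not neutron's driver code."""
        path = f"/var/lib/neutron/ovn-metadata-proxy/{net_id}.conf"
        with open(path, "w") as f:
            f.write(CFG.substitute(net=net_id))
        subprocess.run(["ip", "netns", "exec", f"ovnmeta-{net_id}",
                        "haproxy", "-f", path], check=True)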
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.621 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402708.6213696, d052677b-94f4-47ef-94ad-02bc1cbf6dd2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.622 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] VM Started (Lifecycle Event)
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.626 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.630 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.635 256736 INFO nova.virt.libvirt.driver [-] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Instance spawned successfully.
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.636 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:51:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.669 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.676 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.682 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.682 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.683 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.684 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.684 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.685 256736 DEBUG nova.virt.libvirt.driver [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
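[annotation] The six "Found default for ..." records show the libvirt driver filling in hardware image properties the image did not set, so the instance keeps a stable device layout across later lifecycle operations. The effective merge is simple; the values below are taken from the records above, while the helper itself is illustrative:

    # Defaults registered for this instance, per the DEBUG records above.
    DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_instance_details(image_props: dict) -> dict:
        """Apply a default only where the image left the property unset;
        explicit image properties always win."""
        return {**DEFAULTS, **image_props}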
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.697 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.698 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402708.6214724, d052677b-94f4-47ef-94ad-02bc1cbf6dd2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.698 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] VM Paused (Lifecycle Event)
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.722 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.725 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402708.6294012, d052677b-94f4-47ef-94ad-02bc1cbf6dd2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.726 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] VM Resumed (Lifecycle Event)
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.795 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.800 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.829 256736 INFO nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Took 10.63 seconds to spawn the instance on the hypervisor.
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.829 256736 DEBUG nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.834 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] During sync_power_state the instance has a pending task (spawning). Skip.
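[annotation] The Started/Paused/Resumed churn above is normal during spawn: the lifecycle handler compares the DB power state (0, NOSTATE) with what the hypervisor reports (1, RUNNING), but declines to reconcile while a task is pending, hence the "pending task (spawning). Skip." records. A sketch of that decision, with constants mirroring nova's power_state values seen in the log:

    from typing import Optional

    NOSTATE, RUNNING = 0, 1   # DB power_state 0 vs VM power_state 1 above

    def sync_power_state(db_power: int, vm_power: int,
                         task_state: Optional[str]) -> str:
        """Defer reconciliation while a task (e.g. 'spawning') is in
        flight; otherwise bring the DB in line with the hypervisor."""
        if task_state is not None:
            return "skip: pending task"
        if db_power != vm_power:
            return f"update DB power_state {db_power} -> {vm_power}"
        return "in sync"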
Nov 29 07:51:48 compute-0 podman[272343]: 2025-11-29 07:51:48.881673016 +0000 UTC m=+0.053315374 container create 59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.918 256736 INFO nova.compute.manager [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Took 11.96 seconds to build instance.
Nov 29 07:51:48 compute-0 systemd[1]: Started libpod-conmon-59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c.scope.
Nov 29 07:51:48 compute-0 podman[272343]: 2025-11-29 07:51:48.853654362 +0000 UTC m=+0.025296760 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:51:48 compute-0 nova_compute[256729]: 2025-11-29 07:51:48.956 256736 DEBUG oslo_concurrency.lockutils [None req-c949ef8b-ab40-4727-ac2b-0d71a7767aac a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef392ecd47ba91bcc10f5fef33242696ceea4ad0d9eb157090ea18c4f800e354/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:49 compute-0 podman[272343]: 2025-11-29 07:51:49.049383037 +0000 UTC m=+0.221025405 container init 59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 07:51:49 compute-0 ceph-mon[75050]: pgmap v1367: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 07:51:49 compute-0 podman[272343]: 2025-11-29 07:51:49.057018054 +0000 UTC m=+0.228660402 container start 59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 07:51:49 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[272358]: [NOTICE]   (272362) : New worker (272364) forked
Nov 29 07:51:49 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[272358]: [NOTICE]   (272362) : Loading success.
Nov 29 07:51:49 compute-0 sshd-session[272157]: Connection closed by authenticating user root 143.14.121.41 port 34892 [preauth]
Nov 29 07:51:50 compute-0 nova_compute[256729]: 2025-11-29 07:51:50.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:50 compute-0 nova_compute[256729]: 2025-11-29 07:51:50.616 256736 DEBUG nova.compute.manager [req-839a4383-fa58-413e-be47-73d46be5a1fa req-9e943154-0d7b-4aba-bccc-8dea7fbda280 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received event network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:50 compute-0 nova_compute[256729]: 2025-11-29 07:51:50.617 256736 DEBUG oslo_concurrency.lockutils [req-839a4383-fa58-413e-be47-73d46be5a1fa req-9e943154-0d7b-4aba-bccc-8dea7fbda280 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:50 compute-0 nova_compute[256729]: 2025-11-29 07:51:50.617 256736 DEBUG oslo_concurrency.lockutils [req-839a4383-fa58-413e-be47-73d46be5a1fa req-9e943154-0d7b-4aba-bccc-8dea7fbda280 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:50 compute-0 nova_compute[256729]: 2025-11-29 07:51:50.617 256736 DEBUG oslo_concurrency.lockutils [req-839a4383-fa58-413e-be47-73d46be5a1fa req-9e943154-0d7b-4aba-bccc-8dea7fbda280 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:50 compute-0 nova_compute[256729]: 2025-11-29 07:51:50.617 256736 DEBUG nova.compute.manager [req-839a4383-fa58-413e-be47-73d46be5a1fa req-9e943154-0d7b-4aba-bccc-8dea7fbda280 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] No waiting events found dispatching network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:51:50 compute-0 nova_compute[256729]: 2025-11-29 07:51:50.618 256736 WARNING nova.compute.manager [req-839a4383-fa58-413e-be47-73d46be5a1fa req-9e943154-0d7b-4aba-bccc-8dea7fbda280 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received unexpected event network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 for instance with vm_state active and task_state None.
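[annotation] Here neutron delivers network-vif-plugged a second time, after the instance has already gone active; no waiter is registered, so the event is dropped with the WARNING above rather than treated as an error. A sketch of that dispatch path (the waiter table and helper are illustrative):

    import logging
    from typing import Optional

    LOG = logging.getLogger(__name__)

    def dispatch_external_event(waiters: dict, event: str,
                                vm_state: str, task_state: Optional[str]) -> None:
        """Wake the waiter registered for this event, or log and drop the
        event when nothing is waiting, as in the records above."""
        waiter = waiters.pop(event, None)
        if waiter is not None:
            waiter.set()   # releases wait_for_instance_event()
        else:
            LOG.warning("Received unexpected event %s for instance with "
                        "vm_state %s and task_state %s",
                        event, vm_state, task_state)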
Nov 29 07:51:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.178 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.179 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.179 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.180 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.180 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.182 256736 INFO nova.compute.manager [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Terminating instance
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.184 256736 DEBUG nova.compute.manager [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:51:51 compute-0 kernel: tap37259991-85 (unregistering): left promiscuous mode
Nov 29 07:51:51 compute-0 NetworkManager[48962]: <info>  [1764402711.2307] device (tap37259991-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:51:51 compute-0 ovn_controller[153383]: 2025-11-29T07:51:51Z|00070|binding|INFO|Releasing lport 37259991-854b-4c1c-b4f6-77bef9eeb129 from this chassis (sb_readonly=0)
Nov 29 07:51:51 compute-0 ovn_controller[153383]: 2025-11-29T07:51:51Z|00071|binding|INFO|Setting lport 37259991-854b-4c1c-b4f6-77bef9eeb129 down in Southbound
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.244 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:51 compute-0 ovn_controller[153383]: 2025-11-29T07:51:51Z|00072|binding|INFO|Removing iface tap37259991-85 ovn-installed in OVS
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.268 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:58:29 10.100.0.13'], port_security=['fa:16:3e:45:58:29 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd052677b-94f4-47ef-94ad-02bc1cbf6dd2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0250a927-9e86-48d0-9872-012000185830', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fa0635c8b0e4d5b8c2a094db6beebe2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24ad3171-6597-46b3-9123-d3df2d1383f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d521626b-b074-4c60-9bd5-bdea08a2b916, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=37259991-854b-4c1c-b4f6-77bef9eeb129) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.269 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 37259991-854b-4c1c-b4f6-77bef9eeb129 in datapath 0250a927-9e86-48d0-9872-012000185830 unbound from our chassis
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.270 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0250a927-9e86-48d0-9872-012000185830, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.272 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0e40b2-eb59-47a6-888c-61298daa6d8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.272 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0250a927-9e86-48d0-9872-012000185830 namespace which is not needed anymore
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.280 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:51 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 29 07:51:51 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 3.131s CPU time.
Nov 29 07:51:51 compute-0 systemd-machined[217781]: Machine qemu-4-instance-00000004 terminated.
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.433 256736 INFO nova.virt.libvirt.driver [-] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Instance destroyed successfully.
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.434 256736 DEBUG nova.objects.instance [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lazy-loading 'resources' on Instance uuid d052677b-94f4-47ef-94ad-02bc1cbf6dd2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.457 256736 DEBUG nova.virt.libvirt.vif [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:51:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1580983255',display_name='tempest-VolumesActionsTest-instance-1580983255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1580983255',id=4,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:51:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fa0635c8b0e4d5b8c2a094db6beebe2',ramdisk_id='',reservation_id='r-wakx735a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-814590685',owner_user_name='tempest-VolumesActionsTest-814590685-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:51:48Z,user_data=None,user_id='a47b942d30fe4bd69742fcb8e3cfdb1d',uuid=d052677b-94f4-47ef-94ad-02bc1cbf6dd2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.458 256736 DEBUG nova.network.os_vif_util [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converting VIF {"id": "37259991-854b-4c1c-b4f6-77bef9eeb129", "address": "fa:16:3e:45:58:29", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37259991-85", "ovs_interfaceid": "37259991-854b-4c1c-b4f6-77bef9eeb129", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.459 256736 DEBUG nova.network.os_vif_util [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:58:29,bridge_name='br-int',has_traffic_filtering=True,id=37259991-854b-4c1c-b4f6-77bef9eeb129,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37259991-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.459 256736 DEBUG os_vif [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:58:29,bridge_name='br-int',has_traffic_filtering=True,id=37259991-854b-4c1c-b4f6-77bef9eeb129,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37259991-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.463 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.463 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37259991-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.464 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.468 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.472 256736 INFO os_vif [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:58:29,bridge_name='br-int',has_traffic_filtering=True,id=37259991-854b-4c1c-b4f6-77bef9eeb129,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37259991-85')
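[annotation] Unplugging the OVS VIF reduces to removing the tap port from br-int: os-vif runs the DelPortCommand(if_exists=True) transaction shown above against the local ovsdb. The ovs-vsctl equivalent, as a sketch:

    import subprocess

    def unplug_ovs_vif(vif_name: str, bridge: str = "br-int") -> None:
        """Same effect as the DelPortCommand transaction above; idempotent
        thanks to --if-exists, so a missing port is not an error."""
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", bridge, vif_name],
                       check=True)

    # e.g. unplug_ovs_vif("tap37259991-85")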
Nov 29 07:51:51 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[272358]: [NOTICE]   (272362) : haproxy version is 2.8.14-c23fe91
Nov 29 07:51:51 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[272358]: [NOTICE]   (272362) : path to executable is /usr/sbin/haproxy
Nov 29 07:51:51 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[272358]: [WARNING]  (272362) : Exiting Master process...
Nov 29 07:51:51 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[272358]: [ALERT]    (272362) : Current worker (272364) exited with code 143 (Terminated)
Nov 29 07:51:51 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[272358]: [WARNING]  (272362) : All workers exited. Exiting... (0)
Nov 29 07:51:51 compute-0 systemd[1]: libpod-59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c.scope: Deactivated successfully.
Nov 29 07:51:51 compute-0 podman[272400]: 2025-11-29 07:51:51.59695207 +0000 UTC m=+0.222644899 container died 59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 07:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c-userdata-shm.mount: Deactivated successfully.
Nov 29 07:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef392ecd47ba91bcc10f5fef33242696ceea4ad0d9eb157090ea18c4f800e354-merged.mount: Deactivated successfully.
Nov 29 07:51:51 compute-0 podman[272400]: 2025-11-29 07:51:51.67658788 +0000 UTC m=+0.302280619 container cleanup 59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 07:51:51 compute-0 systemd[1]: libpod-conmon-59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c.scope: Deactivated successfully.
Nov 29 07:51:51 compute-0 ceph-mon[75050]: pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.756 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:51 compute-0 podman[272460]: 2025-11-29 07:51:51.796823197 +0000 UTC m=+0.079013084 container remove 59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.805 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[eba7b327-ab1a-4899-b14e-cbc9394b8f21]: (4, ('Sat Nov 29 07:51:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830 (59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c)\n59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c\nSat Nov 29 07:51:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830 (59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c)\n59737d187949b8f9ec15e765b30a5a0ae1295d1b4206dc48d2ffb4ee8334e76c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.807 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4a7f02-431d-4e84-a7b5-7c5dbd796f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.809 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0250a927-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.811 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:51 compute-0 kernel: tap0250a927-90: left promiscuous mode
Nov 29 07:51:51 compute-0 nova_compute[256729]: 2025-11-29 07:51:51.827 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.831 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[962ef186-d198-4389-a361-781f154205b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.847 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8f19c0-e7c9-4894-95b3-820920811da7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.849 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8e4c53-a59f-4ae1-a7b5-de5535964d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.874 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f76cbe-ed93-4d57-93bb-b5d25b6dba93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502326, 'reachable_time': 17076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272476, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.878 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0250a927-9e86-48d0-9872-012000185830 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:51:51 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:51.878 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbc03c0-9bd8-4168-9844-dd0da08b51fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:51:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d0250a927\x2d9e86\x2d48d0\x2d9872\x2d012000185830.mount: Deactivated successfully.
Nov 29 07:51:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:51:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2163252613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:51:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2163252613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.056 256736 INFO nova.virt.libvirt.driver [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Deleting instance files /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2_del
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.058 256736 INFO nova.virt.libvirt.driver [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Deletion of /var/lib/nova/instances/d052677b-94f4-47ef-94ad-02bc1cbf6dd2_del complete
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.172 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.172 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.173 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.173 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.178 256736 INFO nova.compute.manager [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.178 256736 DEBUG oslo.service.loopingcall [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.179 256736 DEBUG nova.compute.manager [-] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:51:52 compute-0 nova_compute[256729]: 2025-11-29 07:51:52.179 256736 DEBUG nova.network.neutron [-] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:51:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 383 KiB/s wr, 145 op/s
Nov 29 07:51:53 compute-0 sshd-session[272373]: Connection closed by authenticating user root 143.14.121.41 port 34900 [preauth]
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.106 256736 DEBUG nova.compute.manager [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received event network-vif-unplugged-37259991-854b-4c1c-b4f6-77bef9eeb129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.107 256736 DEBUG oslo_concurrency.lockutils [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.107 256736 DEBUG oslo_concurrency.lockutils [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.108 256736 DEBUG oslo_concurrency.lockutils [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.108 256736 DEBUG nova.compute.manager [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] No waiting events found dispatching network-vif-unplugged-37259991-854b-4c1c-b4f6-77bef9eeb129 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.108 256736 DEBUG nova.compute.manager [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received event network-vif-unplugged-37259991-854b-4c1c-b4f6-77bef9eeb129 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.109 256736 DEBUG nova.compute.manager [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received event network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.109 256736 DEBUG oslo_concurrency.lockutils [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.110 256736 DEBUG oslo_concurrency.lockutils [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.110 256736 DEBUG oslo_concurrency.lockutils [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.110 256736 DEBUG nova.compute.manager [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] No waiting events found dispatching network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.111 256736 WARNING nova.compute.manager [req-18e6541a-d194-43ab-83a6-f4e57bf70168 req-8c465186-2aa0-421c-9087-5e542c1403ae ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received unexpected event network-vif-plugged-37259991-854b-4c1c-b4f6-77bef9eeb129 for instance with vm_state active and task_state deleting.
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.188 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.189 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.189 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.190 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.190 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2163252613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:51:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2163252613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:51:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505479569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:53 compute-0 nova_compute[256729]: 2025-11-29 07:51:53.882 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.130 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.131 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4669MB free_disk=59.967384338378906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.131 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.131 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.336 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance d052677b-94f4-47ef-94ad-02bc1cbf6dd2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.337 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.337 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.359 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:51:54 compute-0 ceph-mon[75050]: pgmap v1369: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 383 KiB/s wr, 145 op/s
Nov 29 07:51:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2505479569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.393 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.393 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.423 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.479 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.482 256736 DEBUG nova.network.neutron [-] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.503 256736 INFO nova.compute.manager [-] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Took 2.32 seconds to deallocate network for instance.
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.533 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.569 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 66 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 176 op/s
Nov 29 07:51:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/96598456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.948 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:54 compute-0 nova_compute[256729]: 2025-11-29 07:51:54.956 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.286 256736 DEBUG nova.compute.manager [req-65b8a996-e35e-4597-9c9d-c6e7ba783c8d req-8e88209b-9a81-468b-a45a-a4fbc675e295 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Received event network-vif-deleted-37259991-854b-4c1c-b4f6-77bef9eeb129 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.289 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.320 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.321 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.322 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:55 compute-0 ceph-mon[75050]: pgmap v1370: 305 pgs: 305 active+clean; 66 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 176 op/s
Nov 29 07:51:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/96598456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.386 256736 DEBUG oslo_concurrency.processutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:51:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:51:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493594730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.918 256736 DEBUG oslo_concurrency.processutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.924 256736 DEBUG nova.compute.provider_tree [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.942 256736 DEBUG nova.scheduler.client.report [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.965 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:55 compute-0 nova_compute[256729]: 2025-11-29 07:51:55.993 256736 INFO nova.scheduler.client.report [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Deleted allocations for instance d052677b-94f4-47ef-94ad-02bc1cbf6dd2
Nov 29 07:51:56 compute-0 nova_compute[256729]: 2025-11-29 07:51:56.080 256736 DEBUG oslo_concurrency.lockutils [None req-fa8978c0-fbf7-44aa-9521-c9d63b9ce04b a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "d052677b-94f4-47ef-94ad-02bc1cbf6dd2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:56 compute-0 nova_compute[256729]: 2025-11-29 07:51:56.322 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:56 compute-0 nova_compute[256729]: 2025-11-29 07:51:56.322 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:51:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3493594730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:51:56 compute-0 nova_compute[256729]: 2025-11-29 07:51:56.467 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:56 compute-0 sudo[272547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:56 compute-0 sudo[272547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:56 compute-0 sudo[272547]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 54 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 166 op/s
Nov 29 07:51:56 compute-0 sudo[272572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:51:56 compute-0 sudo[272572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:56 compute-0 sudo[272572]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:56 compute-0 nova_compute[256729]: 2025-11-29 07:51:56.758 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:51:56 compute-0 sudo[272597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:56 compute-0 sudo[272597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:56 compute-0 sudo[272597]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:56 compute-0 sudo[272622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:51:56 compute-0 sudo[272622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:57 compute-0 sudo[272622]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:57 compute-0 ceph-mon[75050]: pgmap v1371: 305 pgs: 305 active+clean; 54 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 17 KiB/s wr, 166 op/s
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:51:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:51:57 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:51:57 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:51:57 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7cf8ad0c-f39f-4a0a-a12b-5ac5ce466809 does not exist
Nov 29 07:51:57 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c54a6e70-98c0-4503-8f9a-325fb4a47017 does not exist
Nov 29 07:51:57 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f7245d2d-c215-48aa-9b92-845bdaf50342 does not exist
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:51:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:51:57 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:51:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:57 compute-0 sudo[272678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:57 compute-0 sudo[272678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:57 compute-0 sudo[272678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 29 07:51:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 07:51:57 compute-0 sudo[272703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:51:57 compute-0 sudo[272703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:57 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 29 07:51:57 compute-0 sudo[272703]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:57 compute-0 sudo[272728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:51:57 compute-0 sudo[272728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:57 compute-0 sudo[272728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:51:57 compute-0 sudo[272753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:51:57 compute-0 sudo[272753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:51:58 compute-0 podman[272819]: 2025-11-29 07:51:58.183394702 +0000 UTC m=+0.078458959 container create b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:51:58 compute-0 systemd[1]: Started libpod-conmon-b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1.scope.
Nov 29 07:51:58 compute-0 podman[272819]: 2025-11-29 07:51:58.145374517 +0000 UTC m=+0.040438824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:58 compute-0 podman[272819]: 2025-11-29 07:51:58.296277369 +0000 UTC m=+0.191341686 container init b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:58 compute-0 podman[272819]: 2025-11-29 07:51:58.308223425 +0000 UTC m=+0.203287692 container start b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:51:58 compute-0 podman[272819]: 2025-11-29 07:51:58.312748138 +0000 UTC m=+0.207812405 container attach b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hellman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 07:51:58 compute-0 focused_hellman[272835]: 167 167
Nov 29 07:51:58 compute-0 systemd[1]: libpod-b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1.scope: Deactivated successfully.
Nov 29 07:51:58 compute-0 podman[272819]: 2025-11-29 07:51:58.316698626 +0000 UTC m=+0.211762923 container died b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c8e28179f697d3a3e3abd752a4f5a5a0b8ffaf546a7d3d0cfc77af74a2e9217-merged.mount: Deactivated successfully.
Nov 29 07:51:58 compute-0 podman[272819]: 2025-11-29 07:51:58.371332475 +0000 UTC m=+0.266396742 container remove b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:51:58 compute-0 systemd[1]: libpod-conmon-b6e2f03670044ae2c56c0887fbef2ad84b0aa77747abe1dd06e5821e90574ae1.scope: Deactivated successfully.
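The create → init → start → attach → died → remove sequence above, bracketed by its libpod-conmon scope, is the footprint of a one-shot helper container launched by cephadm; the "167 167" it prints is plausibly the ceph UID/GID baked into the image. A minimal sketch for timing such lifecycles, assuming podman event lines of the shape logged here and that create and remove come from the same podman process (the m=+ offsets are relative to that process's start):

    import re

    # Matches the podman event lines above, e.g.
    # "... m=+0.078458959 container create b6e2f036... (image=..., name=...)"
    EVENT_RE = re.compile(
        r"m=\+(?P<offset>\d+\.\d+) container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def lifecycle_durations(lines):
        """Seconds from 'create' to 'remove' per container id.

        Only meaningful when both events are logged by the same podman
        process, since the m=+ offsets are process-relative.
        """
        created, durations = {}, {}
        for line in lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            if m["event"] == "create":
                created[m["cid"]] = float(m["offset"])
            elif m["event"] == "remove" and m["cid"] in created:
                durations[m["cid"]] = float(m["offset"]) - created[m["cid"]]
        return durations

For focused_hellman that gives 0.266396742 − 0.078458959 ≈ 0.19 s from create to remove.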
Nov 29 07:51:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:51:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:51:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:51:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:51:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:51:58 compute-0 ceph-mon[75050]: osdmap e186: 3 total, 3 up, 3 in
Nov 29 07:51:58 compute-0 podman[272859]: 2025-11-29 07:51:58.574618945 +0000 UTC m=+0.063560403 container create b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:51:58 compute-0 systemd[1]: Started libpod-conmon-b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b.scope.
Nov 29 07:51:58 compute-0 podman[272859]: 2025-11-29 07:51:58.552252006 +0000 UTC m=+0.041193464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:51:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0523fc707fb738c16d8f0d43db54313ed8201d9386d3550f9ff8a624eb3ccd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0523fc707fb738c16d8f0d43db54313ed8201d9386d3550f9ff8a624eb3ccd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0523fc707fb738c16d8f0d43db54313ed8201d9386d3550f9ff8a624eb3ccd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0523fc707fb738c16d8f0d43db54313ed8201d9386d3550f9ff8a624eb3ccd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0523fc707fb738c16d8f0d43db54313ed8201d9386d3550f9ff8a624eb3ccd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:51:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 642 KiB/s wr, 176 op/s
Nov 29 07:51:58 compute-0 podman[272859]: 2025-11-29 07:51:58.681598161 +0000 UTC m=+0.170539629 container init b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_clarke, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:51:58 compute-0 podman[272859]: 2025-11-29 07:51:58.69953735 +0000 UTC m=+0.188478808 container start b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:51:58 compute-0 podman[272859]: 2025-11-29 07:51:58.703637912 +0000 UTC m=+0.192579340 container attach b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:51:58 compute-0 sshd-session[272479]: Connection closed by authenticating user root 143.14.121.41 port 41834 [preauth]
Nov 29 07:51:59 compute-0 ceph-mon[75050]: pgmap v1373: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 642 KiB/s wr, 176 op/s
Nov 29 07:51:59 compute-0 serene_clarke[272875]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:51:59 compute-0 serene_clarke[272875]: --> relative data size: 1.0
Nov 29 07:51:59 compute-0 serene_clarke[272875]: --> All data devices are unavailable
Nov 29 07:51:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:59.770 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:51:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:59.772 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:51:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:51:59.772 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:51:59 compute-0 systemd[1]: libpod-b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b.scope: Deactivated successfully.
Nov 29 07:51:59 compute-0 systemd[1]: libpod-b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b.scope: Consumed 1.055s CPU time.
Nov 29 07:51:59 compute-0 podman[272906]: 2025-11-29 07:51:59.846782868 +0000 UTC m=+0.028478887 container died b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_clarke, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b0523fc707fb738c16d8f0d43db54313ed8201d9386d3550f9ff8a624eb3ccd-merged.mount: Deactivated successfully.
Nov 29 07:51:59 compute-0 podman[272906]: 2025-11-29 07:51:59.897975133 +0000 UTC m=+0.079671142 container remove b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_clarke, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:51:59 compute-0 systemd[1]: libpod-conmon-b32cd3cf4312bfc72c0566a6eaf63883fbac357fe8d5b533bbf3cf670d02e56b.scope: Deactivated successfully.
Nov 29 07:51:59 compute-0 sudo[272753]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3509320031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:00 compute-0 sudo[272921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:00 compute-0 sudo[272921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:00 compute-0 sudo[272921]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:00 compute-0 sudo[272946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:52:00 compute-0 sudo[272946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:00 compute-0 sudo[272946]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:00 compute-0 sudo[272971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:00 compute-0 sudo[272971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:00 compute-0 sudo[272971]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:00 compute-0 sudo[272996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:52:00 compute-0 sudo[272996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 29 07:52:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 07:52:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 29 07:52:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3509320031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 783 KiB/s wr, 104 op/s
Nov 29 07:52:00 compute-0 podman[273060]: 2025-11-29 07:52:00.675538816 +0000 UTC m=+0.056525152 container create 4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:52:00 compute-0 systemd[1]: Started libpod-conmon-4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d.scope.
Nov 29 07:52:00 compute-0 podman[273060]: 2025-11-29 07:52:00.652668562 +0000 UTC m=+0.033654928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:00 compute-0 podman[273060]: 2025-11-29 07:52:00.778055889 +0000 UTC m=+0.159042275 container init 4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:52:00 compute-0 podman[273060]: 2025-11-29 07:52:00.790949711 +0000 UTC m=+0.171936067 container start 4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:52:00 compute-0 interesting_haslett[273076]: 167 167
Nov 29 07:52:00 compute-0 systemd[1]: libpod-4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d.scope: Deactivated successfully.
Nov 29 07:52:00 compute-0 podman[273060]: 2025-11-29 07:52:00.800247634 +0000 UTC m=+0.181233970 container attach 4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:52:00 compute-0 podman[273060]: 2025-11-29 07:52:00.800579283 +0000 UTC m=+0.181565629 container died 4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b908e761b1c8d70ba87c9bfaeb0bec6b64148f36cf0b74a61b3ab4dbfe95f2-merged.mount: Deactivated successfully.
Nov 29 07:52:00 compute-0 podman[273060]: 2025-11-29 07:52:00.89363021 +0000 UTC m=+0.274616556 container remove 4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:52:00 compute-0 nova_compute[256729]: 2025-11-29 07:52:00.894 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:00 compute-0 nova_compute[256729]: 2025-11-29 07:52:00.894 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:00 compute-0 systemd[1]: libpod-conmon-4e60651a2926614f09523e3f66b3e9199672c03f0d8f018251ce413667f18e9d.scope: Deactivated successfully.
Nov 29 07:52:00 compute-0 nova_compute[256729]: 2025-11-29 07:52:00.915 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:52:00 compute-0 nova_compute[256729]: 2025-11-29 07:52:00.994 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:00 compute-0 nova_compute[256729]: 2025-11-29 07:52:00.995 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.005 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.006 256736 INFO nova.compute.claims [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:52:01 compute-0 podman[273099]: 2025-11-29 07:52:01.065488243 +0000 UTC m=+0.044873154 container create 36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:01 compute-0 systemd[1]: Started libpod-conmon-36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9.scope.
Nov 29 07:52:01 compute-0 podman[273099]: 2025-11-29 07:52:01.046555477 +0000 UTC m=+0.025940408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.144 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f98e85b469bef1a42bd9497beb295cf935e8dc73b0e7a95d5ced2d180dd7e2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f98e85b469bef1a42bd9497beb295cf935e8dc73b0e7a95d5ced2d180dd7e2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f98e85b469bef1a42bd9497beb295cf935e8dc73b0e7a95d5ced2d180dd7e2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f98e85b469bef1a42bd9497beb295cf935e8dc73b0e7a95d5ced2d180dd7e2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:01 compute-0 podman[273099]: 2025-11-29 07:52:01.168684526 +0000 UTC m=+0.148069487 container init 36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:01 compute-0 podman[273099]: 2025-11-29 07:52:01.181217337 +0000 UTC m=+0.160602258 container start 36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:52:01 compute-0 podman[273099]: 2025-11-29 07:52:01.185408782 +0000 UTC m=+0.164793803 container attach 36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 07:52:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 29 07:52:01 compute-0 ceph-mon[75050]: osdmap e187: 3 total, 3 up, 3 in
Nov 29 07:52:01 compute-0 ceph-mon[75050]: pgmap v1375: 305 pgs: 305 active+clean; 56 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 783 KiB/s wr, 104 op/s
Nov 29 07:52:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 07:52:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.470 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088964409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.640 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.646 256736 DEBUG nova.compute.provider_tree [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.671 256736 DEBUG nova.scheduler.client.report [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
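The inventory nova reports to Placement encodes schedulable capacity as (total − reserved) × allocation_ratio, so the values above work out to 32 VCPUs (8 × 4.0), 7168 MB of RAM ((7680 − 512) × 1.0) and 52.2 GB of disk ((59 − 1) × 0.9). The arithmetic, worked:

    # Inventory exactly as logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable units")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2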
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.697 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.698 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.752 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.753 256736 DEBUG nova.network.neutron [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.760 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.781 256736 INFO nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.807 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.913 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.914 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.915 256736 INFO nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Creating image(s)
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.934 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:01 compute-0 goofy_williamson[273115]: {
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:     "0": [
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:         {
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "devices": [
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "/dev/loop3"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             ],
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_name": "ceph_lv0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_size": "21470642176",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "name": "ceph_lv0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "tags": {
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cluster_name": "ceph",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.crush_device_class": "",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.encrypted": "0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osd_id": "0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.type": "block",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.vdo": "0"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             },
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "type": "block",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "vg_name": "ceph_vg0"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:         }
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:     ],
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:     "1": [
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:         {
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "devices": [
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "/dev/loop4"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             ],
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_name": "ceph_lv1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_size": "21470642176",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "name": "ceph_lv1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "tags": {
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cluster_name": "ceph",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.crush_device_class": "",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.encrypted": "0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osd_id": "1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.type": "block",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.vdo": "0"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             },
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "type": "block",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "vg_name": "ceph_vg1"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:         }
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:     ],
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:     "2": [
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:         {
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "devices": [
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "/dev/loop5"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             ],
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_name": "ceph_lv2",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_size": "21470642176",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "name": "ceph_lv2",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "tags": {
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.cluster_name": "ceph",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.crush_device_class": "",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.encrypted": "0",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osd_id": "2",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.type": "block",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:                 "ceph.vdo": "0"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             },
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "type": "block",
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:             "vg_name": "ceph_vg2"
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:         }
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.955 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:01 compute-0 goofy_williamson[273115]:     ]
Nov 29 07:52:01 compute-0 goofy_williamson[273115]: }
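The JSON printed by goofy_williamson is the answer to the `ceph-volume ... lvm list --format json` call dispatched at 07:52:00: a map from OSD id to its backing logical volume(s). A minimal sketch reducing it to an OSD → device table, reading the JSON from stdin:

    import json
    import sys

    # Feed it the JSON block above, e.g.:  python3 osd_map.py < lvm_list.json
    listing = json.load(sys.stdin)

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")

On this host that yields osd.0 on /dev/loop3, osd.1 on /dev/loop4 and osd.2 on /dev/loop5, all in cluster 14ff1f30-5059-58f1-9a23-69871bb275a1.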
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.976 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:01 compute-0 nova_compute[256729]: 2025-11-29 07:52:01.979 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:01 compute-0 systemd[1]: libpod-36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9.scope: Deactivated successfully.
Nov 29 07:52:01 compute-0 podman[273099]: 2025-11-29 07:52:01.982191858 +0000 UTC m=+0.961576779 container died 36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f98e85b469bef1a42bd9497beb295cf935e8dc73b0e7a95d5ced2d180dd7e2b-merged.mount: Deactivated successfully.
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.024 256736 DEBUG nova.policy [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a47b942d30fe4bd69742fcb8e3cfdb1d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6fa0635c8b0e4d5b8c2a094db6beebe2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:52:02 compute-0 podman[273099]: 2025-11-29 07:52:02.038013209 +0000 UTC m=+1.017398120 container remove 36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.042 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
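nova wraps `qemu-img info` in oslo_concurrency.prlimit to cap address space (--as=1073741824, 1 GiB) and CPU time (--cpu=30 s) while probing the image, and `--force-share --output=json` lets it read a possibly in-use base file and get machine-readable output. A sketch of the same probe, assuming the standard qemu-img JSON fields:

    import json
    import subprocess

    def image_virtual_size(path):
        """Probe an image the way nova does above; return virtual size in bytes."""
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"]
        )
        return json.loads(out)["virtual-size"]  # "format", "actual-size" also present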
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.044 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:02 compute-0 systemd[1]: libpod-conmon-36c1cce3ed75df5fe0438e6db3eac78c37f409269d5cb13d66e4c88a68b5a6f9.scope: Deactivated successfully.
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.046 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.047 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.067 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:02 compute-0 sudo[272996]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.070 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 77a189a8-6952-4618-9fbd-4fc89e13179f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:02 compute-0 sudo[273235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:02 compute-0 sudo[273235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:02 compute-0 sudo[273235]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:02 compute-0 sudo[273275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:52:02 compute-0 sudo[273275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:02 compute-0 sudo[273275]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:02 compute-0 sudo[273303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:02 compute-0 sudo[273303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:02 compute-0 sudo[273303]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:02 compute-0 sshd-session[272880]: Connection closed by authenticating user root 143.14.121.41 port 41842 [preauth]
Nov 29 07:52:02 compute-0 sudo[273328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:52:02 compute-0 sudo[273328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.332 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 77a189a8-6952-4618-9fbd-4fc89e13179f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.392 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] resizing rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:52:02 compute-0 ceph-mon[75050]: osdmap e188: 3 total, 3 up, 3 in
Nov 29 07:52:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4088964409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.488 256736 DEBUG nova.objects.instance [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lazy-loading 'migration_context' on Instance uuid 77a189a8-6952-4618-9fbd-4fc89e13179f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/665978374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/665978374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.532 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.533 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Ensure instance console log exists: /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.533 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.534 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:02 compute-0 nova_compute[256729]: 2025-11-29 07:52:02.534 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 113 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 143 op/s
Nov 29 07:52:02 compute-0 podman[273466]: 2025-11-29 07:52:02.703495407 +0000 UTC m=+0.060012848 container create 3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:52:02 compute-0 systemd[1]: Started libpod-conmon-3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179.scope.
Nov 29 07:52:02 compute-0 podman[273466]: 2025-11-29 07:52:02.676530371 +0000 UTC m=+0.033047912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:02 compute-0 podman[273466]: 2025-11-29 07:52:02.793086189 +0000 UTC m=+0.149603690 container init 3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 07:52:02 compute-0 podman[273466]: 2025-11-29 07:52:02.805721772 +0000 UTC m=+0.162239253 container start 3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:52:02 compute-0 podman[273466]: 2025-11-29 07:52:02.809269299 +0000 UTC m=+0.165786760 container attach 3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:02 compute-0 angry_edison[273482]: 167 167
Nov 29 07:52:02 compute-0 systemd[1]: libpod-3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179.scope: Deactivated successfully.
Nov 29 07:52:02 compute-0 podman[273466]: 2025-11-29 07:52:02.814874622 +0000 UTC m=+0.171392103 container died 3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b55a0788c7e7feaf5c79685eaced5171ce881de6ee94e6cf7054037d6b08566-merged.mount: Deactivated successfully.
Nov 29 07:52:02 compute-0 podman[273466]: 2025-11-29 07:52:02.861987166 +0000 UTC m=+0.218504607 container remove 3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:52:02 compute-0 systemd[1]: libpod-conmon-3c0b124b5bc73657800c126c19b939120bd15ce4e269bcb8bb9904cde2dc6179.scope: Deactivated successfully.
Nov 29 07:52:03 compute-0 podman[273507]: 2025-11-29 07:52:03.077374966 +0000 UTC m=+0.046421726 container create a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:52:03 compute-0 systemd[1]: Started libpod-conmon-a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9.scope.
Nov 29 07:52:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0ac68032e95c7b939fb3474cc76e6ffe3aa8d353c151548d9e41e65ab3201a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:03 compute-0 podman[273507]: 2025-11-29 07:52:03.058710017 +0000 UTC m=+0.027756797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0ac68032e95c7b939fb3474cc76e6ffe3aa8d353c151548d9e41e65ab3201a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0ac68032e95c7b939fb3474cc76e6ffe3aa8d353c151548d9e41e65ab3201a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0ac68032e95c7b939fb3474cc76e6ffe3aa8d353c151548d9e41e65ab3201a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:03 compute-0 podman[273507]: 2025-11-29 07:52:03.174733809 +0000 UTC m=+0.143780609 container init a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 07:52:03 compute-0 podman[273507]: 2025-11-29 07:52:03.18207665 +0000 UTC m=+0.151123430 container start a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:52:03 compute-0 podman[273507]: 2025-11-29 07:52:03.185465702 +0000 UTC m=+0.154512552 container attach a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:52:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/665978374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/665978374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:03 compute-0 ceph-mon[75050]: pgmap v1377: 305 pgs: 305 active+clean; 113 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 143 op/s
Nov 29 07:52:03 compute-0 nova_compute[256729]: 2025-11-29 07:52:03.715 256736 DEBUG nova.network.neutron [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Successfully created port: 88467199-2f72-4f31-a582-1d679789c919 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:52:04 compute-0 beautiful_moore[273523]: {
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "osd_id": 2,
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "type": "bluestore"
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:     },
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "osd_id": 1,
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "type": "bluestore"
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:     },
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "osd_id": 0,
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:         "type": "bluestore"
Nov 29 07:52:04 compute-0 beautiful_moore[273523]:     }
Nov 29 07:52:04 compute-0 beautiful_moore[273523]: }
Nov 29 07:52:04 compute-0 systemd[1]: libpod-a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9.scope: Deactivated successfully.
Nov 29 07:52:04 compute-0 podman[273507]: 2025-11-29 07:52:04.239468818 +0000 UTC m=+1.208515578 container died a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:52:04 compute-0 systemd[1]: libpod-a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9.scope: Consumed 1.062s CPU time.
Nov 29 07:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d0ac68032e95c7b939fb3474cc76e6ffe3aa8d353c151548d9e41e65ab3201a-merged.mount: Deactivated successfully.
Nov 29 07:52:04 compute-0 podman[273507]: 2025-11-29 07:52:04.306891995 +0000 UTC m=+1.275938755 container remove a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moore, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:52:04 compute-0 systemd[1]: libpod-conmon-a8f626401497675040d04eb97ef113db2b4ecd25499c0ff0f9c10e633b6c9cd9.scope: Deactivated successfully.
Nov 29 07:52:04 compute-0 sudo[273328]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:52:04 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:52:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:52:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 151 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.1 MiB/s wr, 144 op/s
Nov 29 07:52:04 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:52:04 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0b8c904e-7351-4176-94f0-2a983670b754 does not exist
Nov 29 07:52:04 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev bc239e5d-7827-44f5-89b0-a9cffa706aa1 does not exist
Nov 29 07:52:04 compute-0 sudo[273568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:52:04 compute-0 sudo[273568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:04 compute-0 sudo[273568]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:04 compute-0 sudo[273593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:52:04 compute-0 sudo[273593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:52:04 compute-0 sudo[273593]: pam_unix(sudo:session): session closed for user root
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.092 256736 DEBUG nova.network.neutron [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Successfully updated port: 88467199-2f72-4f31-a582-1d679789c919 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.112 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "refresh_cache-77a189a8-6952-4618-9fbd-4fc89e13179f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.112 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquired lock "refresh_cache-77a189a8-6952-4618-9fbd-4fc89e13179f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.113 256736 DEBUG nova.network.neutron [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.229 256736 DEBUG nova.compute.manager [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received event network-changed-88467199-2f72-4f31-a582-1d679789c919 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.230 256736 DEBUG nova.compute.manager [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Refreshing instance network info cache due to event network-changed-88467199-2f72-4f31-a582-1d679789c919. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.231 256736 DEBUG oslo_concurrency.lockutils [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-77a189a8-6952-4618-9fbd-4fc89e13179f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:05 compute-0 nova_compute[256729]: 2025-11-29 07:52:05.315 256736 DEBUG nova.network.neutron [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:52:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2539051745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:52:05 compute-0 ceph-mon[75050]: pgmap v1378: 305 pgs: 305 active+clean; 151 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.1 MiB/s wr, 144 op/s
Nov 29 07:52:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:52:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2539051745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:52:05
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'volumes', 'images', 'vms', 'default.rgw.meta']
Nov 29 07:52:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.431 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402711.4308221, d052677b-94f4-47ef-94ad-02bc1cbf6dd2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.432 256736 INFO nova.compute.manager [-] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] VM Stopped (Lifecycle Event)
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.454 256736 DEBUG nova.compute.manager [None req-9cc962e5-a2be-41b3-b297-10ada26762df - - - - - -] [instance: d052677b-94f4-47ef-94ad-02bc1cbf6dd2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.474 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.618 256736 DEBUG nova.network.neutron [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Updating instance_info_cache with network_info: [{"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.647 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Releasing lock "refresh_cache-77a189a8-6952-4618-9fbd-4fc89e13179f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.648 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Instance network_info: |[{"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.648 256736 DEBUG oslo_concurrency.lockutils [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-77a189a8-6952-4618-9fbd-4fc89e13179f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.648 256736 DEBUG nova.network.neutron [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Refreshing network info cache for port 88467199-2f72-4f31-a582-1d679789c919 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.653 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Start _get_guest_xml network_info=[{"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.660 256736 WARNING nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.667 256736 DEBUG nova.virt.libvirt.host [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.669 256736 DEBUG nova.virt.libvirt.host [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 164 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.2 MiB/s wr, 117 op/s
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.678 256736 DEBUG nova.virt.libvirt.host [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.679 256736 DEBUG nova.virt.libvirt.host [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.679 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.680 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.680 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.680 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.681 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.681 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.681 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.681 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.682 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.682 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.682 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.683 256736 DEBUG nova.virt.hardware [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.686 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 29 07:52:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 07:52:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 29 07:52:06 compute-0 nova_compute[256729]: 2025-11-29 07:52:06.762 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:52:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:52:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457551180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.142 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.180 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.185 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147888448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.652 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.654 256736 DEBUG nova.virt.libvirt.vif [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-910310895',display_name='tempest-VolumesActionsTest-instance-910310895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-910310895',id=5,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fa0635c8b0e4d5b8c2a094db6beebe2',ramdisk_id='',reservation_id='r-0x03aofk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-814590685',owner_user_name='tempest-VolumesActionsTest-814590685-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:01Z,user_data=None,user_id='a47b942d30fe4bd69742fcb8e3cfdb1d',uuid=77a189a8-6952-4618-9fbd-4fc89e13179f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.654 256736 DEBUG nova.network.os_vif_util [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converting VIF {"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.655 256736 DEBUG nova.network.os_vif_util [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:10:84,bridge_name='br-int',has_traffic_filtering=True,id=88467199-2f72-4f31-a582-1d679789c919,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88467199-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.656 256736 DEBUG nova.objects.instance [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77a189a8-6952-4618-9fbd-4fc89e13179f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.680 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <uuid>77a189a8-6952-4618-9fbd-4fc89e13179f</uuid>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <name>instance-00000005</name>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesActionsTest-instance-910310895</nova:name>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:52:06</nova:creationTime>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:user uuid="a47b942d30fe4bd69742fcb8e3cfdb1d">tempest-VolumesActionsTest-814590685-project-member</nova:user>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:project uuid="6fa0635c8b0e4d5b8c2a094db6beebe2">tempest-VolumesActionsTest-814590685</nova:project>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <nova:port uuid="88467199-2f72-4f31-a582-1d679789c919">
Nov 29 07:52:07 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <system>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <entry name="serial">77a189a8-6952-4618-9fbd-4fc89e13179f</entry>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <entry name="uuid">77a189a8-6952-4618-9fbd-4fc89e13179f</entry>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </system>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <os>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   </os>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <features>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   </features>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/77a189a8-6952-4618-9fbd-4fc89e13179f_disk">
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       </source>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/77a189a8-6952-4618-9fbd-4fc89e13179f_disk.config">
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       </source>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:52:07 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:f4:10:84"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <target dev="tap88467199-2f"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/console.log" append="off"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <video>
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </video>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:52:07 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:52:07 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:52:07 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:52:07 compute-0 nova_compute[256729]: </domain>
Nov 29 07:52:07 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.682 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Preparing to wait for external event network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.683 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.683 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.683 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.684 256736 DEBUG nova.virt.libvirt.vif [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-910310895',display_name='tempest-VolumesActionsTest-instance-910310895',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-910310895',id=5,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6fa0635c8b0e4d5b8c2a094db6beebe2',ramdisk_id='',reservation_id='r-0x03aofk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-814590685',owner_user_name='tempest-VolumesActionsTest-814590685-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:01Z,user_data=None,user_id='a47b942d30fe4bd69742fcb8e3cfdb1d',uuid=77a189a8-6952-4618-9fbd-4fc89e13179f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.684 256736 DEBUG nova.network.os_vif_util [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converting VIF {"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.685 256736 DEBUG nova.network.os_vif_util [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:10:84,bridge_name='br-int',has_traffic_filtering=True,id=88467199-2f72-4f31-a582-1d679789c919,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88467199-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.685 256736 DEBUG os_vif [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:10:84,bridge_name='br-int',has_traffic_filtering=True,id=88467199-2f72-4f31-a582-1d679789c919,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88467199-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.686 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.686 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.687 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.690 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.690 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88467199-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.690 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88467199-2f, col_values=(('external_ids', {'iface-id': '88467199-2f72-4f31-a582-1d679789c919', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:10:84', 'vm-uuid': '77a189a8-6952-4618-9fbd-4fc89e13179f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.692 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:07 compute-0 NetworkManager[48962]: <info>  [1764402727.6937] manager: (tap88467199-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.694 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.704 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.706 256736 INFO os_vif [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:10:84,bridge_name='br-int',has_traffic_filtering=True,id=88467199-2f72-4f31-a582-1d679789c919,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88467199-2f')
Nov 29 07:52:07 compute-0 ceph-mon[75050]: pgmap v1379: 305 pgs: 305 active+clean; 164 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.2 MiB/s wr, 117 op/s
Nov 29 07:52:07 compute-0 ceph-mon[75050]: osdmap e189: 3 total, 3 up, 3 in
Nov 29 07:52:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/457551180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3147888448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.766 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.766 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.767 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] No VIF found with MAC fa:16:3e:f4:10:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.768 256736 INFO nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Using config drive
Nov 29 07:52:07 compute-0 nova_compute[256729]: 2025-11-29 07:52:07.796 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.035 256736 DEBUG nova.network.neutron [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Updated VIF entry in instance network info cache for port 88467199-2f72-4f31-a582-1d679789c919. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.036 256736 DEBUG nova.network.neutron [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Updating instance_info_cache with network_info: [{"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.057 256736 DEBUG oslo_concurrency.lockutils [req-a8ea92ad-d1e2-46f3-bc05-f7339b97145e req-01e41eb2-d8ed-4418-9466-6cd4b21202e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-77a189a8-6952-4618-9fbd-4fc89e13179f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.199 256736 INFO nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Creating config drive at /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/disk.config
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.209 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9h94v1te execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.356 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9h94v1te" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.394 256736 DEBUG nova.storage.rbd_utils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] rbd image 77a189a8-6952-4618-9fbd-4fc89e13179f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.399 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/disk.config 77a189a8-6952-4618-9fbd-4fc89e13179f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/7658738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/7658738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.564 256736 DEBUG oslo_concurrency.processutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/disk.config 77a189a8-6952-4618-9fbd-4fc89e13179f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.565 256736 INFO nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Deleting local config drive /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f/disk.config because it was imported into RBD.
Nov 29 07:52:08 compute-0 kernel: tap88467199-2f: entered promiscuous mode
Nov 29 07:52:08 compute-0 NetworkManager[48962]: <info>  [1764402728.6257] manager: (tap88467199-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 07:52:08 compute-0 ovn_controller[153383]: 2025-11-29T07:52:08Z|00073|binding|INFO|Claiming lport 88467199-2f72-4f31-a582-1d679789c919 for this chassis.
Nov 29 07:52:08 compute-0 ovn_controller[153383]: 2025-11-29T07:52:08Z|00074|binding|INFO|88467199-2f72-4f31-a582-1d679789c919: Claiming fa:16:3e:f4:10:84 10.100.0.5
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.627 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.636 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:10:84 10.100.0.5'], port_security=['fa:16:3e:f4:10:84 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '77a189a8-6952-4618-9fbd-4fc89e13179f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0250a927-9e86-48d0-9872-012000185830', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fa0635c8b0e4d5b8c2a094db6beebe2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24ad3171-6597-46b3-9123-d3df2d1383f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d521626b-b074-4c60-9bd5-bdea08a2b916, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=88467199-2f72-4f31-a582-1d679789c919) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.638 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 88467199-2f72-4f31-a582-1d679789c919 in datapath 0250a927-9e86-48d0-9872-012000185830 bound to our chassis
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.639 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0250a927-9e86-48d0-9872-012000185830
Nov 29 07:52:08 compute-0 ovn_controller[153383]: 2025-11-29T07:52:08Z|00075|binding|INFO|Setting lport 88467199-2f72-4f31-a582-1d679789c919 ovn-installed in OVS
Nov 29 07:52:08 compute-0 ovn_controller[153383]: 2025-11-29T07:52:08Z|00076|binding|INFO|Setting lport 88467199-2f72-4f31-a582-1d679789c919 up in Southbound
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.652 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.655 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[947005a1-84a2-4f10-a2c9-739849e900cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.656 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0250a927-91 in ovnmeta-0250a927-9e86-48d0-9872-012000185830 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.659 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0250a927-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.659 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dd62be67-ddc1-4c5b-9c10-deb5bba1187c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.660 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[105457ea-1fd3-4068-9347-a4be47d3c3f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 systemd-machined[217781]: New machine qemu-5-instance-00000005.
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.670 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4dc665-5091-4f32-a16d-482a62c75454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 217 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 9.2 MiB/s wr, 146 op/s
Nov 29 07:52:08 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.701 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc9b555-7f55-413c-9957-c69423262212]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 29 07:52:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/7658738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/7658738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:08 compute-0 systemd-udevd[273758]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.726 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7736b987-0e7b-4e9f-928d-7b3e473faa33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 07:52:08 compute-0 NetworkManager[48962]: <info>  [1764402728.7316] manager: (tap0250a927-90): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 07:52:08 compute-0 systemd-udevd[273764]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.730 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7d51ef-389b-4964-8e8c-915638f4984c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 29 07:52:08 compute-0 NetworkManager[48962]: <info>  [1764402728.7469] device (tap88467199-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:52:08 compute-0 NetworkManager[48962]: <info>  [1764402728.7481] device (tap88467199-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.765 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[52a5570e-27b5-44f8-a7cd-18d8f5770b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.769 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[4442b383-dd5f-4070-b1fc-2e9481bae3ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 NetworkManager[48962]: <info>  [1764402728.7941] device (tap0250a927-90): carrier: link connected
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.799 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[5b12fb74-a655-4c97-94e2-6e29abd418ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.818 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2b720ede-2d22-49b3-b87a-da060faed9ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0250a927-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:51:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504385, 'reachable_time': 19920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273786, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.833 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2099ec-af76-43ca-98e9-9b574fb7a226]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:516f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 504385, 'tstamp': 504385}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273787, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.849 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e2cf133a-4bc4-4bef-85c7-7a5d5ac7e126]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0250a927-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:51:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504385, 'reachable_time': 19920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273788, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.884 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7808bcd2-4123-41bf-b9b8-e3350ba79b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.951 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3614f347-c9b2-4636-8ac3-7c2287d0c967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.952 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0250a927-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.952 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.953 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0250a927-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.955 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:08 compute-0 kernel: tap0250a927-90: entered promiscuous mode
Nov 29 07:52:08 compute-0 NetworkManager[48962]: <info>  [1764402728.9559] manager: (tap0250a927-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.958 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0250a927-90, col_values=(('external_ids', {'iface-id': '125944c7-1b99-4283-8df8-d8ef5cebc9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:08 compute-0 ovn_controller[153383]: 2025-11-29T07:52:08Z|00077|binding|INFO|Releasing lport 125944c7-1b99-4283-8df8-d8ef5cebc9bf from this chassis (sb_readonly=0)
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.961 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0250a927-9e86-48d0-9872-012000185830.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0250a927-9e86-48d0-9872-012000185830.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.962 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7dd48e-eed1-470d-bf5f-6f192418c811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.962 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-0250a927-9e86-48d0-9872-012000185830
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/0250a927-9e86-48d0-9872-012000185830.pid.haproxy
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 0250a927-9e86-48d0-9872-012000185830
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:52:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:08.963 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'env', 'PROCESS_TAG=haproxy-0250a927-9e86-48d0-9872-012000185830', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0250a927-9e86-48d0-9872-012000185830.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:52:08 compute-0 nova_compute[256729]: 2025-11-29 07:52:08.975 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.103 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402729.102637, 77a189a8-6952-4618-9fbd-4fc89e13179f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.104 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] VM Started (Lifecycle Event)
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.135 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.141 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402729.1027591, 77a189a8-6952-4618-9fbd-4fc89e13179f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.141 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] VM Paused (Lifecycle Event)
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.169 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.175 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:52:09 compute-0 sshd-session[273443]: Connection closed by authenticating user root 143.14.121.41 port 56272 [preauth]
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.207 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.220 256736 DEBUG nova.compute.manager [req-aa873adf-9511-4e2a-a8a1-8748672b27fb req-546a3fd1-1b27-4028-9b18-8fe6b45ef4cc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received event network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.221 256736 DEBUG oslo_concurrency.lockutils [req-aa873adf-9511-4e2a-a8a1-8748672b27fb req-546a3fd1-1b27-4028-9b18-8fe6b45ef4cc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.221 256736 DEBUG oslo_concurrency.lockutils [req-aa873adf-9511-4e2a-a8a1-8748672b27fb req-546a3fd1-1b27-4028-9b18-8fe6b45ef4cc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.221 256736 DEBUG oslo_concurrency.lockutils [req-aa873adf-9511-4e2a-a8a1-8748672b27fb req-546a3fd1-1b27-4028-9b18-8fe6b45ef4cc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.221 256736 DEBUG nova.compute.manager [req-aa873adf-9511-4e2a-a8a1-8748672b27fb req-546a3fd1-1b27-4028-9b18-8fe6b45ef4cc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Processing event network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.222 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.227 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402729.2267694, 77a189a8-6952-4618-9fbd-4fc89e13179f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.227 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] VM Resumed (Lifecycle Event)
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.229 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.232 256736 INFO nova.virt.libvirt.driver [-] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Instance spawned successfully.
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.233 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:52:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2645703334' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2645703334' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.266 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.272 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.273 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.273 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.275 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.276 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.277 256736 DEBUG nova.virt.libvirt.driver [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.282 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.334 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.374 256736 INFO nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Took 7.46 seconds to spawn the instance on the hypervisor.
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.375 256736 DEBUG nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:09 compute-0 podman[273862]: 2025-11-29 07:52:09.311871044 +0000 UTC m=+0.023150523 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:52:09 compute-0 podman[273862]: 2025-11-29 07:52:09.407593683 +0000 UTC m=+0.118873172 container create 5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.461 256736 INFO nova.compute.manager [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Took 8.49 seconds to build instance.
Nov 29 07:52:09 compute-0 systemd[1]: Started libpod-conmon-5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d.scope.
Nov 29 07:52:09 compute-0 nova_compute[256729]: 2025-11-29 07:52:09.480 256736 DEBUG oslo_concurrency.lockutils [None req-ac22243c-0602-4146-ae9f-0fef494d5316 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:52:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c47cc2848ab3e13c6484cdca26bed8e7e7eed09c12b861307b679df67ad5b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:52:09 compute-0 podman[273862]: 2025-11-29 07:52:09.527790699 +0000 UTC m=+0.239070158 container init 5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:52:09 compute-0 podman[273862]: 2025-11-29 07:52:09.534988864 +0000 UTC m=+0.246268313 container start 5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 07:52:09 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [NOTICE]   (273883) : New worker (273885) forked
Nov 29 07:52:09 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [NOTICE]   (273883) : Loading success.
Nov 29 07:52:09 compute-0 ceph-mon[75050]: pgmap v1381: 305 pgs: 305 active+clean; 217 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 9.2 MiB/s wr, 146 op/s
Nov 29 07:52:09 compute-0 ceph-mon[75050]: osdmap e190: 3 total, 3 up, 3 in
Nov 29 07:52:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2645703334' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2645703334' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 217 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 94 op/s
Nov 29 07:52:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 29 07:52:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 07:52:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 29 07:52:11 compute-0 ceph-mon[75050]: pgmap v1383: 305 pgs: 305 active+clean; 217 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 94 op/s
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.356 256736 DEBUG nova.compute.manager [req-f2c76f12-6ea8-4fcd-9ae5-ff74d4360ea4 req-fa97c2ef-ab7f-4f13-8aec-88a2228b2bd5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received event network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.356 256736 DEBUG oslo_concurrency.lockutils [req-f2c76f12-6ea8-4fcd-9ae5-ff74d4360ea4 req-fa97c2ef-ab7f-4f13-8aec-88a2228b2bd5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.357 256736 DEBUG oslo_concurrency.lockutils [req-f2c76f12-6ea8-4fcd-9ae5-ff74d4360ea4 req-fa97c2ef-ab7f-4f13-8aec-88a2228b2bd5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.357 256736 DEBUG oslo_concurrency.lockutils [req-f2c76f12-6ea8-4fcd-9ae5-ff74d4360ea4 req-fa97c2ef-ab7f-4f13-8aec-88a2228b2bd5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.358 256736 DEBUG nova.compute.manager [req-f2c76f12-6ea8-4fcd-9ae5-ff74d4360ea4 req-fa97c2ef-ab7f-4f13-8aec-88a2228b2bd5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] No waiting events found dispatching network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.358 256736 WARNING nova.compute.manager [req-f2c76f12-6ea8-4fcd-9ae5-ff74d4360ea4 req-fa97c2ef-ab7f-4f13-8aec-88a2228b2bd5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received unexpected event network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 for instance with vm_state active and task_state None.
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.765 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/305215736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/305215736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.857 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.858 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.858 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.859 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.860 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.862 256736 INFO nova.compute.manager [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Terminating instance
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.864 256736 DEBUG nova.compute.manager [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:52:11 compute-0 kernel: tap88467199-2f (unregistering): left promiscuous mode
Nov 29 07:52:11 compute-0 NetworkManager[48962]: <info>  [1764402731.9128] device (tap88467199-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.940 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:11 compute-0 ovn_controller[153383]: 2025-11-29T07:52:11Z|00078|binding|INFO|Releasing lport 88467199-2f72-4f31-a582-1d679789c919 from this chassis (sb_readonly=0)
Nov 29 07:52:11 compute-0 ovn_controller[153383]: 2025-11-29T07:52:11Z|00079|binding|INFO|Setting lport 88467199-2f72-4f31-a582-1d679789c919 down in Southbound
Nov 29 07:52:11 compute-0 ovn_controller[153383]: 2025-11-29T07:52:11Z|00080|binding|INFO|Removing iface tap88467199-2f ovn-installed in OVS
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.945 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:11.955 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:10:84 10.100.0.5'], port_security=['fa:16:3e:f4:10:84 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '77a189a8-6952-4618-9fbd-4fc89e13179f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0250a927-9e86-48d0-9872-012000185830', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6fa0635c8b0e4d5b8c2a094db6beebe2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24ad3171-6597-46b3-9123-d3df2d1383f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d521626b-b074-4c60-9bd5-bdea08a2b916, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=88467199-2f72-4f31-a582-1d679789c919) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:11.957 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 88467199-2f72-4f31-a582-1d679789c919 in datapath 0250a927-9e86-48d0-9872-012000185830 unbound from our chassis
Nov 29 07:52:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:11.958 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0250a927-9e86-48d0-9872-012000185830, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:52:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:11.959 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3311c51b-991a-4a96-8bc9-a2065c4096fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:11.960 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0250a927-9e86-48d0-9872-012000185830 namespace which is not needed anymore
Nov 29 07:52:11 compute-0 nova_compute[256729]: 2025-11-29 07:52:11.980 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:12 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 29 07:52:12 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 3.150s CPU time.
Nov 29 07:52:12 compute-0 systemd-machined[217781]: Machine qemu-5-instance-00000005 terminated.
Nov 29 07:52:12 compute-0 kernel: tap88467199-2f: entered promiscuous mode
Nov 29 07:52:12 compute-0 kernel: tap88467199-2f (unregistering): left promiscuous mode
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.103 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.113 256736 INFO nova.virt.libvirt.driver [-] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Instance destroyed successfully.
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.114 256736 DEBUG nova.objects.instance [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lazy-loading 'resources' on Instance uuid 77a189a8-6952-4618-9fbd-4fc89e13179f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:12 compute-0 ceph-mon[75050]: osdmap e191: 3 total, 3 up, 3 in
Nov 29 07:52:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/305215736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/305215736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.135 256736 DEBUG nova.virt.libvirt.vif [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:52:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-910310895',display_name='tempest-VolumesActionsTest-instance-910310895',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-910310895',id=5,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:52:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6fa0635c8b0e4d5b8c2a094db6beebe2',ramdisk_id='',reservation_id='r-0x03aofk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-814590685',owner_user_name='tempest-VolumesActionsTest-814590685-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:09Z,user_data=None,user_id='a47b942d30fe4bd69742fcb8e3cfdb1d',uuid=77a189a8-6952-4618-9fbd-4fc89e13179f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.136 256736 DEBUG nova.network.os_vif_util [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converting VIF {"id": "88467199-2f72-4f31-a582-1d679789c919", "address": "fa:16:3e:f4:10:84", "network": {"id": "0250a927-9e86-48d0-9872-012000185830", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1276235221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6fa0635c8b0e4d5b8c2a094db6beebe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88467199-2f", "ovs_interfaceid": "88467199-2f72-4f31-a582-1d679789c919", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.138 256736 DEBUG nova.network.os_vif_util [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:10:84,bridge_name='br-int',has_traffic_filtering=True,id=88467199-2f72-4f31-a582-1d679789c919,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88467199-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.138 256736 DEBUG os_vif [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:10:84,bridge_name='br-int',has_traffic_filtering=True,id=88467199-2f72-4f31-a582-1d679789c919,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88467199-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.141 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:12 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [NOTICE]   (273883) : haproxy version is 2.8.14-c23fe91
Nov 29 07:52:12 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [NOTICE]   (273883) : path to executable is /usr/sbin/haproxy
Nov 29 07:52:12 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [WARNING]  (273883) : Exiting Master process...
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.141 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88467199-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:12 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [WARNING]  (273883) : Exiting Master process...
Nov 29 07:52:12 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [ALERT]    (273883) : Current worker (273885) exited with code 143 (Terminated)
Nov 29 07:52:12 compute-0 neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830[273878]: [WARNING]  (273883) : All workers exited. Exiting... (0)
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.145 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.147 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:52:12 compute-0 systemd[1]: libpod-5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d.scope: Deactivated successfully.
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.150 256736 INFO os_vif [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:10:84,bridge_name='br-int',has_traffic_filtering=True,id=88467199-2f72-4f31-a582-1d679789c919,network=Network(0250a927-9e86-48d0-9872-012000185830),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88467199-2f')
Nov 29 07:52:12 compute-0 podman[273919]: 2025-11-29 07:52:12.153226014 +0000 UTC m=+0.058324011 container died 5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 07:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d-userdata-shm.mount: Deactivated successfully.
Nov 29 07:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c47cc2848ab3e13c6484cdca26bed8e7e7eed09c12b861307b679df67ad5b7-merged.mount: Deactivated successfully.
Nov 29 07:52:12 compute-0 podman[273919]: 2025-11-29 07:52:12.196229586 +0000 UTC m=+0.101327573 container cleanup 5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:12 compute-0 systemd[1]: libpod-conmon-5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d.scope: Deactivated successfully.
Nov 29 07:52:12 compute-0 podman[273977]: 2025-11-29 07:52:12.288711056 +0000 UTC m=+0.050617080 container remove 5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.298 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b2740490-d785-45e5-bd84-cfe5f1f0045d]: (4, ('Sat Nov 29 07:52:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830 (5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d)\n5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d\nSat Nov 29 07:52:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0250a927-9e86-48d0-9872-012000185830 (5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d)\n5ad36dd463fa8d8dfb368b8829367139df3a79657400bec2f5b72cd77e93a33d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.302 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fc3b4f-cb97-4b85-a043-854fd1394487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.303 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0250a927-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:12 compute-0 kernel: tap0250a927-90: left promiscuous mode
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.305 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.319 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.322 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4a80aa18-acf9-4692-8467-16542004eb5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.340 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[85465bbb-ec81-4414-9d86-978c237c39e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.343 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[daff3de5-2789-41a4-8649-bd1d8e04f8d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.364 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[88d7bec0-8e05-4a07-93db-4c2405fcb5f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 504378, 'reachable_time': 43386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273992, 'error': None, 'target': 'ovnmeta-0250a927-9e86-48d0-9872-012000185830', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d0250a927\x2d9e86\x2d48d0\x2d9872\x2d012000185830.mount: Deactivated successfully.
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.368 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0250a927-9e86-48d0-9872-012000185830 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:52:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:12.368 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[233f66b3-3bb5-4bc4-bd81-8d74eeac1ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:52:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.590 256736 INFO nova.virt.libvirt.driver [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Deleting instance files /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f_del
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.591 256736 INFO nova.virt.libvirt.driver [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Deletion of /var/lib/nova/instances/77a189a8-6952-4618-9fbd-4fc89e13179f_del complete
Nov 29 07:52:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414112134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414112134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.662 256736 INFO nova.compute.manager [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Took 0.80 seconds to destroy the instance on the hypervisor.
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.663 256736 DEBUG oslo.service.loopingcall [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.664 256736 DEBUG nova.compute.manager [-] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:52:12 compute-0 nova_compute[256729]: 2025-11-29 07:52:12.665 256736 DEBUG nova.network.neutron [-] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:52:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 183 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 269 op/s
Nov 29 07:52:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 29 07:52:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1414112134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1414112134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:13 compute-0 ceph-mon[75050]: pgmap v1385: 305 pgs: 305 active+clean; 183 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 269 op/s
Nov 29 07:52:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 07:52:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 29 07:52:13 compute-0 sshd-session[273881]: Connection closed by authenticating user root 143.14.121.41 port 56278 [preauth]
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.406 256736 DEBUG nova.network.neutron [-] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.442 256736 INFO nova.compute.manager [-] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Took 0.78 seconds to deallocate network for instance.
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.492 256736 DEBUG nova.compute.manager [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received event network-vif-unplugged-88467199-2f72-4f31-a582-1d679789c919 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.492 256736 DEBUG oslo_concurrency.lockutils [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.493 256736 DEBUG oslo_concurrency.lockutils [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.493 256736 DEBUG oslo_concurrency.lockutils [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.493 256736 DEBUG nova.compute.manager [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] No waiting events found dispatching network-vif-unplugged-88467199-2f72-4f31-a582-1d679789c919 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.493 256736 DEBUG nova.compute.manager [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received event network-vif-unplugged-88467199-2f72-4f31-a582-1d679789c919 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.494 256736 DEBUG nova.compute.manager [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received event network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.494 256736 DEBUG oslo_concurrency.lockutils [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.494 256736 DEBUG oslo_concurrency.lockutils [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.495 256736 DEBUG oslo_concurrency.lockutils [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.495 256736 DEBUG nova.compute.manager [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] No waiting events found dispatching network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.495 256736 WARNING nova.compute.manager [req-c12fed57-4539-4ebb-af7a-2f4d6fcde2e6 req-09598dc4-7464-492c-9f47-a58972af6c91 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received unexpected event network-vif-plugged-88467199-2f72-4f31-a582-1d679789c919 for instance with vm_state active and task_state deleting.
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.497 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.497 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.584 256736 DEBUG oslo_concurrency.processutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:13 compute-0 nova_compute[256729]: 2025-11-29 07:52:13.617 256736 DEBUG nova.compute.manager [req-9bce95f5-0cc3-4ef7-8313-485480a5b882 req-e246fbea-8b4c-4cf0-9c4b-a16a7e61dd77 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Received event network-vif-deleted-88467199-2f72-4f31-a582-1d679789c919 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:52:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604639017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604639017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37759222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:14 compute-0 nova_compute[256729]: 2025-11-29 07:52:14.087 256736 DEBUG oslo_concurrency.processutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:14 compute-0 nova_compute[256729]: 2025-11-29 07:52:14.096 256736 DEBUG nova.compute.provider_tree [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:14 compute-0 nova_compute[256729]: 2025-11-29 07:52:14.113 256736 DEBUG nova.scheduler.client.report [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:52:14 compute-0 nova_compute[256729]: 2025-11-29 07:52:14.141 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:14 compute-0 nova_compute[256729]: 2025-11-29 07:52:14.196 256736 INFO nova.scheduler.client.report [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Deleted allocations for instance 77a189a8-6952-4618-9fbd-4fc89e13179f
Nov 29 07:52:14 compute-0 nova_compute[256729]: 2025-11-29 07:52:14.278 256736 DEBUG oslo_concurrency.lockutils [None req-2ed9abe5-a289-42bf-827f-51212d92d487 a47b942d30fe4bd69742fcb8e3cfdb1d 6fa0635c8b0e4d5b8c2a094db6beebe2 - - default default] Lock "77a189a8-6952-4618-9fbd-4fc89e13179f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:14 compute-0 ceph-mon[75050]: osdmap e192: 3 total, 3 up, 3 in
Nov 29 07:52:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1604639017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1604639017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/37759222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 125 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 925 KiB/s wr, 323 op/s
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002161448158820082 of space, bias 1.0, pg target 0.06484344476460246 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003464421762062903 of space, bias 1.0, pg target 0.10393265286188709 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.537124763951292e-05 of space, bias 1.0, pg target 0.019611374291853875 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:52:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:52:15 compute-0 ceph-mon[75050]: pgmap v1387: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 125 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 925 KiB/s wr, 323 op/s
Nov 29 07:52:16 compute-0 sshd-session[273995]: Connection closed by authenticating user root 143.14.121.41 port 52556 [preauth]
Nov 29 07:52:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 29 07:52:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 07:52:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 29 07:52:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 83 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 926 KiB/s wr, 388 op/s
Nov 29 07:52:16 compute-0 nova_compute[256729]: 2025-11-29 07:52:16.767 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:17 compute-0 nova_compute[256729]: 2025-11-29 07:52:17.145 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3160066641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 29 07:52:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 07:52:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 29 07:52:17 compute-0 ceph-mon[75050]: osdmap e193: 3 total, 3 up, 3 in
Nov 29 07:52:17 compute-0 ceph-mon[75050]: pgmap v1389: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 83 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 926 KiB/s wr, 388 op/s
Nov 29 07:52:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3160066641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:17 compute-0 podman[274022]: 2025-11-29 07:52:17.725591864 +0000 UTC m=+0.072217464 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:52:17 compute-0 podman[274021]: 2025-11-29 07:52:17.773363522 +0000 UTC m=+0.126213458 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 07:52:17 compute-0 podman[274020]: 2025-11-29 07:52:17.791447356 +0000 UTC m=+0.147658062 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:52:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 29 07:52:18 compute-0 ceph-mon[75050]: osdmap e194: 3 total, 3 up, 3 in
Nov 29 07:52:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 07:52:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 29 07:52:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 11 KiB/s wr, 254 op/s
Nov 29 07:52:19 compute-0 sshd-session[274018]: Connection closed by authenticating user root 143.14.121.41 port 52572 [preauth]
Nov 29 07:52:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 29 07:52:19 compute-0 ceph-mon[75050]: osdmap e195: 3 total, 3 up, 3 in
Nov 29 07:52:19 compute-0 ceph-mon[75050]: pgmap v1392: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 11 KiB/s wr, 254 op/s
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.648387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402739648450, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2368, "num_deletes": 265, "total_data_size": 3513069, "memory_usage": 3578800, "flush_reason": "Manual Compaction"}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 07:52:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 07:52:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402739669921, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3448408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21618, "largest_seqno": 23985, "table_properties": {"data_size": 3437468, "index_size": 7050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23975, "raw_average_key_size": 21, "raw_value_size": 3415104, "raw_average_value_size": 3041, "num_data_blocks": 310, "num_entries": 1123, "num_filter_entries": 1123, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402539, "oldest_key_time": 1764402539, "file_creation_time": 1764402739, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 21844 microseconds, and 10164 cpu microseconds.
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.670210) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3448408 bytes OK
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.670261) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.673344) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.673380) EVENT_LOG_v1 {"time_micros": 1764402739673368, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.673405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3502696, prev total WAL file size 3502737, number of live WAL files 2.
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.675142) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3367KB)], [50(8440KB)]
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402739675211, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 12091510, "oldest_snapshot_seqno": -1}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5351 keys, 10227556 bytes, temperature: kUnknown
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402739756094, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 10227556, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10187573, "index_size": 25473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 132708, "raw_average_key_size": 24, "raw_value_size": 10086889, "raw_average_value_size": 1885, "num_data_blocks": 1059, "num_entries": 5351, "num_filter_entries": 5351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402739, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.756787) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 10227556 bytes
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.816123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.8 rd, 125.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 5887, records dropped: 536 output_compression: NoCompression
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.816175) EVENT_LOG_v1 {"time_micros": 1764402739816155, "job": 26, "event": "compaction_finished", "compaction_time_micros": 81256, "compaction_time_cpu_micros": 42145, "output_level": 6, "num_output_files": 1, "total_output_size": 10227556, "num_input_records": 5887, "num_output_records": 5351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402739816999, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402739818352, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.674668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.818437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.818443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.818445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.818447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:52:19 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:52:19.818449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:52:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094215751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094215751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:20 compute-0 ceph-mon[75050]: osdmap e196: 3 total, 3 up, 3 in
Nov 29 07:52:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2094215751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2094215751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 7.8 KiB/s wr, 109 op/s
Nov 29 07:52:21 compute-0 ceph-mon[75050]: pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 7.8 KiB/s wr, 109 op/s
Nov 29 07:52:21 compute-0 nova_compute[256729]: 2025-11-29 07:52:21.769 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:22 compute-0 nova_compute[256729]: 2025-11-29 07:52:22.147 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 29 07:52:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 07:52:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 29 07:52:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 105 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 13 MiB/s wr, 174 op/s
Nov 29 07:52:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2556161313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2556161313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:23 compute-0 ceph-mon[75050]: osdmap e197: 3 total, 3 up, 3 in
Nov 29 07:52:23 compute-0 ceph-mon[75050]: pgmap v1396: 305 pgs: 305 active+clean; 105 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 13 MiB/s wr, 174 op/s
Nov 29 07:52:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2556161313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2556161313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:24 compute-0 sshd-session[274082]: Connection closed by authenticating user root 143.14.121.41 port 52582 [preauth]
Nov 29 07:52:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572201594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/572201594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:24 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/572201594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:24 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/572201594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 177 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 23 MiB/s wr, 137 op/s
Nov 29 07:52:25 compute-0 ceph-mon[75050]: pgmap v1397: 305 pgs: 305 active+clean; 177 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 23 MiB/s wr, 137 op/s
Nov 29 07:52:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:52:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3723134447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3723134447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:52:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 281 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 30 MiB/s wr, 137 op/s
Nov 29 07:52:26 compute-0 nova_compute[256729]: 2025-11-29 07:52:26.770 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:27 compute-0 nova_compute[256729]: 2025-11-29 07:52:27.112 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402732.111238, 77a189a8-6952-4618-9fbd-4fc89e13179f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:52:27 compute-0 nova_compute[256729]: 2025-11-29 07:52:27.113 256736 INFO nova.compute.manager [-] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] VM Stopped (Lifecycle Event)
Nov 29 07:52:27 compute-0 nova_compute[256729]: 2025-11-29 07:52:27.149 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:27 compute-0 nova_compute[256729]: 2025-11-29 07:52:27.167 256736 DEBUG nova.compute.manager [None req-db7fb117-ab6d-4aa1-8084-d9074a800f16 - - - - - -] [instance: 77a189a8-6952-4618-9fbd-4fc89e13179f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:52:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 29 07:52:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 07:52:27 compute-0 ceph-mon[75050]: pgmap v1398: 305 pgs: 305 active+clean; 281 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 30 MiB/s wr, 137 op/s
Nov 29 07:52:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 29 07:52:27 compute-0 sshd-session[274084]: Connection closed by authenticating user root 143.14.121.41 port 56470 [preauth]
Nov 29 07:52:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 361 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 40 MiB/s wr, 170 op/s
Nov 29 07:52:28 compute-0 ceph-mon[75050]: osdmap e198: 3 total, 3 up, 3 in
Nov 29 07:52:29 compute-0 ceph-mon[75050]: pgmap v1400: 305 pgs: 305 active+clean; 361 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 40 MiB/s wr, 170 op/s
Nov 29 07:52:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 361 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 32 MiB/s wr, 91 op/s
Nov 29 07:52:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 29 07:52:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 07:52:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 29 07:52:31 compute-0 nova_compute[256729]: 2025-11-29 07:52:31.773 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:31 compute-0 ceph-mon[75050]: pgmap v1401: 305 pgs: 305 active+clean; 361 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 32 MiB/s wr, 91 op/s
Nov 29 07:52:31 compute-0 ceph-mon[75050]: osdmap e199: 3 total, 3 up, 3 in
Nov 29 07:52:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2579500996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2579500996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:32 compute-0 sshd-session[274086]: Connection closed by authenticating user root 143.14.121.41 port 56480 [preauth]
Nov 29 07:52:32 compute-0 nova_compute[256729]: 2025-11-29 07:52:32.153 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 537 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 45 MiB/s wr, 128 op/s
Nov 29 07:52:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 29 07:52:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 07:52:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 29 07:52:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2579500996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2579500996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:33 compute-0 ceph-mon[75050]: pgmap v1403: 305 pgs: 305 active+clean; 537 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 45 MiB/s wr, 128 op/s
Nov 29 07:52:33 compute-0 ceph-mon[75050]: osdmap e200: 3 total, 3 up, 3 in
Nov 29 07:52:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 617 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 46 MiB/s wr, 152 op/s
Nov 29 07:52:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:52:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:52:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 29 07:52:36 compute-0 ceph-mon[75050]: pgmap v1405: 305 pgs: 305 active+clean; 617 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 46 MiB/s wr, 152 op/s
Nov 29 07:52:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 07:52:36 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 29 07:52:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 726 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 59 MiB/s wr, 218 op/s
Nov 29 07:52:36 compute-0 nova_compute[256729]: 2025-11-29 07:52:36.775 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:37 compute-0 nova_compute[256729]: 2025-11-29 07:52:37.156 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:37 compute-0 sshd-session[274088]: Connection closed by authenticating user root 143.14.121.41 port 56116 [preauth]
Nov 29 07:52:37 compute-0 ceph-mon[75050]: osdmap e201: 3 total, 3 up, 3 in
Nov 29 07:52:37 compute-0 ceph-mon[75050]: pgmap v1407: 305 pgs: 305 active+clean; 726 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 59 MiB/s wr, 218 op/s
Nov 29 07:52:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 29 07:52:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 07:52:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 29 07:52:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 848 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 883 KiB/s rd, 48 MiB/s wr, 188 op/s
Nov 29 07:52:39 compute-0 ceph-mon[75050]: osdmap e202: 3 total, 3 up, 3 in
Nov 29 07:52:39 compute-0 ceph-mon[75050]: pgmap v1409: 305 pgs: 305 active+clean; 848 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 883 KiB/s rd, 48 MiB/s wr, 188 op/s
Nov 29 07:52:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1031580995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1031580995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834826426' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834826426' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1031580995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1031580995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2834826426' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2834826426' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 848 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 36 MiB/s wr, 143 op/s
Nov 29 07:52:41 compute-0 sshd-session[274091]: Connection closed by authenticating user root 143.14.121.41 port 56128 [preauth]
Nov 29 07:52:41 compute-0 nova_compute[256729]: 2025-11-29 07:52:41.778 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:41 compute-0 ceph-mon[75050]: pgmap v1410: 305 pgs: 305 active+clean; 848 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 36 MiB/s wr, 143 op/s
Nov 29 07:52:42 compute-0 nova_compute[256729]: 2025-11-29 07:52:42.160 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 896 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 32 MiB/s wr, 156 op/s
Nov 29 07:52:43 compute-0 ceph-mon[75050]: pgmap v1411: 305 pgs: 305 active+clean; 896 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 104 KiB/s rd, 32 MiB/s wr, 156 op/s
Nov 29 07:52:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 29 07:52:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 07:52:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 29 07:52:43 compute-0 ovn_controller[153383]: 2025-11-29T07:52:43Z|00081|memory_trim|INFO|Detected inactivity (last active 30026 ms ago): trimming memory
Nov 29 07:52:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:44.243 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:52:44 compute-0 nova_compute[256729]: 2025-11-29 07:52:44.243 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:44.246 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:52:44 compute-0 ceph-mon[75050]: osdmap e203: 3 total, 3 up, 3 in
Nov 29 07:52:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1026405076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1026405076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 940 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 25 MiB/s wr, 104 op/s
Nov 29 07:52:45 compute-0 sshd-session[274093]: Connection closed by authenticating user root 143.14.121.41 port 56130 [preauth]
Nov 29 07:52:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1026405076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1026405076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:45 compute-0 ceph-mon[75050]: pgmap v1413: 305 pgs: 305 active+clean; 940 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 25 MiB/s wr, 104 op/s
Nov 29 07:52:46 compute-0 nova_compute[256729]: 2025-11-29 07:52:46.145 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 968 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 22 MiB/s wr, 92 op/s
Nov 29 07:52:46 compute-0 nova_compute[256729]: 2025-11-29 07:52:46.781 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:47 compute-0 ceph-mon[75050]: pgmap v1414: 305 pgs: 305 active+clean; 968 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 22 MiB/s wr, 92 op/s
Nov 29 07:52:47 compute-0 nova_compute[256729]: 2025-11-29 07:52:47.162 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:48 compute-0 nova_compute[256729]: 2025-11-29 07:52:48.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 993 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 19 MiB/s wr, 72 op/s
Nov 29 07:52:48 compute-0 podman[274098]: 2025-11-29 07:52:48.699473022 +0000 UTC m=+0.067449226 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 07:52:48 compute-0 podman[274099]: 2025-11-29 07:52:48.724892732 +0000 UTC m=+0.076373145 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:52:48 compute-0 podman[274097]: 2025-11-29 07:52:48.736817601 +0000 UTC m=+0.101922288 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
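[annotation] The three podman lines above are periodic container health checks (health_status=healthy, failing streak 0) for multipathd, ovn_metadata_agent, and ovn_controller; each container's configured test is the mounted /openstack/healthcheck script. A sketch of triggering the same checks by hand with the podman CLI, assuming these container names exist locally:

    import subprocess

    for name in ('multipathd', 'ovn_metadata_agent', 'ovn_controller'):
        # exit status 0 means the container's configured healthcheck passed
        rc = subprocess.call(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if rc == 0 else 'unhealthy')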
Nov 29 07:52:49 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:49.249 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:52:49 compute-0 ceph-mon[75050]: pgmap v1415: 305 pgs: 305 active+clean; 993 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 19 MiB/s wr, 72 op/s
Nov 29 07:52:50 compute-0 nova_compute[256729]: 2025-11-29 07:52:50.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:50 compute-0 nova_compute[256729]: 2025-11-29 07:52:50.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 993 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 19 MiB/s wr, 72 op/s
Nov 29 07:52:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 29 07:52:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 07:52:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 29 07:52:51 compute-0 nova_compute[256729]: 2025-11-29 07:52:51.783 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:51 compute-0 ceph-mon[75050]: pgmap v1416: 305 pgs: 305 active+clean; 993 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 19 MiB/s wr, 72 op/s
Nov 29 07:52:51 compute-0 ceph-mon[75050]: osdmap e204: 3 total, 3 up, 3 in
Nov 29 07:52:52 compute-0 sshd-session[274095]: Connection closed by authenticating user root 143.14.121.41 port 34742 [preauth]
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.164 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.499 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.500 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.522 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.618 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.619 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
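[annotation] The Acquiring/acquired/released triples here and below are oslo.concurrency's named in-process locks; the "waited"/"held" timings are logged around the critical section. A minimal sketch of the same primitive, reusing the lock name from the log purely for illustration:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # runs with the named lock held; contention shows up as the
        # "waited N.NNNs" figure in log lines like the ones above
        return 'claimed'

    print(instance_claim())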
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.632 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.633 256736 INFO nova.compute.claims [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:52:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 21 MiB/s wr, 47 op/s
Nov 29 07:52:52 compute-0 nova_compute[256729]: 2025-11-29 07:52:52.944 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:53 compute-0 nova_compute[256729]: 2025-11-29 07:52:53.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:53 compute-0 nova_compute[256729]: 2025-11-29 07:52:53.267 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766393555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:53 compute-0 nova_compute[256729]: 2025-11-29 07:52:53.392 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
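[annotation] nova's RBD image backend shells out to `ceph df --format=json` (the command above, answered by the ceph-mon audit lines) to size its storage pool. The same call with the totals nova reads back, assuming the client.openstack keyring from the log; field names can vary slightly across Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # cluster-wide totals, in bytes
    print(stats['total_bytes'], stats['total_used_bytes'],
          stats['total_avail_bytes'])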
Nov 29 07:52:53 compute-0 nova_compute[256729]: 2025-11-29 07:52:53.397 256736 DEBUG nova.compute.provider_tree [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:53 compute-0 nova_compute[256729]: 2025-11-29 07:52:53.515 256736 DEBUG nova.scheduler.client.report [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
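[annotation] The inventory dict above is what fixes this node's schedulable capacity; placement computes capacity per resource class as (total - reserved) * allocation_ratio. Worked out for these numbers:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2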
Nov 29 07:52:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 29 07:52:54 compute-0 ceph-mon[75050]: pgmap v1418: 305 pgs: 305 active+clean; 1.0 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 21 MiB/s wr, 47 op/s
Nov 29 07:52:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1766393555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 07:52:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.260 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.261 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.263 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.263 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.263 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.264 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.438 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.439 256736 DEBUG nova.network.neutron [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.469 256736 INFO nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:52:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544727264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.688 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 1.0 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 16 MiB/s wr, 35 op/s
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.699 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.840 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.841 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4675MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.841 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:54 compute-0 nova_compute[256729]: 2025-11-29 07:52:54.841 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.025 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.027 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.027 256736 INFO nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Creating image(s)
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.050 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.075 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.099 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.104 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:55 compute-0 ceph-mon[75050]: osdmap e205: 3 total, 3 up, 3 in
Nov 29 07:52:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/544727264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:55 compute-0 ceph-mon[75050]: pgmap v1420: 305 pgs: 305 active+clean; 1.0 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 16 MiB/s wr, 35 op/s
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.171 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
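[annotation] The prlimit wrapper in the two lines above caps address space at 1 GiB (--as=1073741824) and CPU time at 30 s while qemu-img probes the cached base image; --force-share permits reading an image that may be in use. Replaying the same invocation and reading the JSON report:

    import json
    import subprocess

    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info',
           '/var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389',
           '--force-share', '--output=json']
    info = json.loads(subprocess.check_output(cmd))
    # e.g. 'qcow2'/'raw' and the image's virtual size in bytes
    print(info['format'], info['virtual-size'])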
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.172 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.173 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.173 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.194 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.198 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.531 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.577 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] resizing rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
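[annotation] In the lines above nova imports the flat base image into the vms pool, then grows it to the flavor root disk: 1073741824 bytes is exactly 1 GiB, matching m1.nano's root_gb=1 seen later in this log. nova performs the resize through the python rbd bindings; an equivalent CLI sequence would be (a sketch, not nova's literal call):

    import subprocess

    base = '/var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389'
    disk = '28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph)
    # rbd sizes default to MiB: 1024 MiB == 1073741824 bytes == 1 GiB
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', disk,
                           '--size', '1024'] + ceph)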
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.663 256736 DEBUG nova.objects.instance [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'migration_context' on Instance uuid 28704ae1-91ab-4ea5-99cd-c2ec5475f015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.712 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.712 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.713 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
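[annotation] The final resource view is consistent with the claim earlier in this window: on this reading of the tracker's accounting, used_ram folds the host reservation in with the single running instance, and used_vcpus/used_disk match the instance's placement allocations ({'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}) from the line above:

    reserved_host_memory_mb = 512  # 'reserved' in the MEMORY_MB inventory
    instance_memory_mb = 128       # the m1.nano flavor, memory_mb=128
    used_ram = reserved_host_memory_mb + instance_memory_mb
    print(used_ram)                # 640 -> matches used_ram=640MB above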
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.732 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.733 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Ensure instance console log exists: /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.733 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.733 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.733 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.882 256736 DEBUG nova.policy [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6bef1230e3de4a87aa01df74ec671a23', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8117debb786c4549812cc6e7571f6d4d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
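[annotation] The policy line above is a routine denial, not an error: the token carries only reader/member roles, so network:attach_external_network fails and nova proceeds without direct external-network attachment. The shape of that check through oslo.policy, with an illustrative rule string (nova's real default may differ):

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF([])

    enforcer = policy.Enforcer(CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # illustrative

    creds = {'roles': ['reader', 'member'],
             'project_id': '8117debb786c4549812cc6e7571f6d4d'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False for this non-admin token, as in the log line above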
Nov 29 07:52:55 compute-0 nova_compute[256729]: 2025-11-29 07:52:55.886 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:52:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:52:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086102721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:56 compute-0 nova_compute[256729]: 2025-11-29 07:52:56.296 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:52:56 compute-0 nova_compute[256729]: 2025-11-29 07:52:56.301 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:52:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3086102721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:52:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 626 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 11 MiB/s wr, 59 op/s
Nov 29 07:52:56 compute-0 nova_compute[256729]: 2025-11-29 07:52:56.785 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:56 compute-0 sshd-session[274162]: Connection closed by authenticating user root 143.14.121.41 port 54756 [preauth]
Nov 29 07:52:57 compute-0 nova_compute[256729]: 2025-11-29 07:52:57.166 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:52:57 compute-0 nova_compute[256729]: 2025-11-29 07:52:57.232 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:52:57 compute-0 ceph-mon[75050]: pgmap v1421: 305 pgs: 305 active+clean; 626 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 11 MiB/s wr, 59 op/s
Nov 29 07:52:57 compute-0 nova_compute[256729]: 2025-11-29 07:52:57.392 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:52:57 compute-0 nova_compute[256729]: 2025-11-29 07:52:57.392 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:52:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374214557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:52:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374214557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
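[annotation] These mon_command audit lines are the server side of structured JSON monitor commands; {"prefix": ..., "format": "json"} is the wire shape (here a cinder-style client checking pool quota). The same call issued through the python-rados bindings, assuming the client.openstack keyring is readable:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    cmd = json.dumps({'prefix': 'osd pool get-quota',
                      'pool': 'volumes', 'format': 'json'})
    ret, out, errs = cluster.mon_command(cmd, b'')
    print(ret, json.loads(out))  # quota_max_bytes / quota_max_objects
    cluster.shutdown()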
Nov 29 07:52:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.151 256736 DEBUG nova.network.neutron [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Successfully created port: 1f274eee-58f6-4dd7-94e0-15819552d2c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.394 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.394 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.394 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.474 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.474 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.475 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.475 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.475 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:52:58 compute-0 nova_compute[256729]: 2025-11-29 07:52:58.475 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:52:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/374214557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:52:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/374214557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:52:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 12 MiB/s wr, 112 op/s
Nov 29 07:52:59 compute-0 ceph-mon[75050]: pgmap v1422: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 12 MiB/s wr, 112 op/s
Nov 29 07:52:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:59.771 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:52:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:59.772 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:52:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:52:59.772 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:52:59 compute-0 nova_compute[256729]: 2025-11-29 07:52:59.913 256736 DEBUG nova.network.neutron [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Successfully updated port: 1f274eee-58f6-4dd7-94e0-15819552d2c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:52:59 compute-0 nova_compute[256729]: 2025-11-29 07:52:59.933 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:52:59 compute-0 nova_compute[256729]: 2025-11-29 07:52:59.934 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquired lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:52:59 compute-0 nova_compute[256729]: 2025-11-29 07:52:59.934 256736 DEBUG nova.network.neutron [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.069 256736 DEBUG nova.compute.manager [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-changed-1f274eee-58f6-4dd7-94e0-15819552d2c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.069 256736 DEBUG nova.compute.manager [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Refreshing instance network info cache due to event network-changed-1f274eee-58f6-4dd7-94e0-15819552d2c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.070 256736 DEBUG oslo_concurrency.lockutils [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.120 256736 DEBUG nova.network.neutron [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:53:00 compute-0 sshd-session[274398]: Connection closed by authenticating user root 143.14.121.41 port 54766 [preauth]
Nov 29 07:53:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 5.5 MiB/s wr, 83 op/s
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.906 256736 DEBUG nova.network.neutron [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updating instance_info_cache with network_info: [{"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
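[annotation] The port's subnet in the cache above is 10.100.0.0/28, a 16-address block containing both the 10.100.0.1 gateway and the 10.100.0.4 fixed IP; mtu=1442 is consistent with the 58-byte Geneve encapsulation overhead OVN adds on a 1500-byte physical MTU. Quick check:

    import ipaddress

    net = ipaddress.ip_network('10.100.0.0/28')
    print(net.num_addresses)                          # 16
    print(ipaddress.ip_address('10.100.0.4') in net)  # True
    print(1500 - 58)                                  # 1442, the mtu above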
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.930 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Releasing lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.931 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Instance network_info: |[{"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.932 256736 DEBUG oslo_concurrency.lockutils [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.933 256736 DEBUG nova.network.neutron [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Refreshing network info cache for port 1f274eee-58f6-4dd7-94e0-15819552d2c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.938 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Start _get_guest_xml network_info=[{"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.947 256736 WARNING nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.954 256736 DEBUG nova.virt.libvirt.host [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.955 256736 DEBUG nova.virt.libvirt.host [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.963 256736 DEBUG nova.virt.libvirt.host [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.964 256736 DEBUG nova.virt.libvirt.host [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.965 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.965 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.966 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.966 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.966 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.966 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.967 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.967 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.967 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.968 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.968 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.968 256736 DEBUG nova.virt.hardware [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:53:00 compute-0 nova_compute[256729]: 2025-11-29 07:53:00.972 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916520848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.420 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.442 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.445 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:01 compute-0 ceph-mon[75050]: pgmap v1423: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 5.5 MiB/s wr, 83 op/s
Nov 29 07:53:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2916520848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.786 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451382673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.855 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.857 256736 DEBUG nova.virt.libvirt.vif [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-861192464',display_name='tempest-VolumesBackupsTest-instance-861192464',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-861192464',id=6,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPgC7F+GuAVv7pee5KUxxh7J9VHVUJK4LOVvzjaFnO1uxjktB3qZYM2R8ZzJHE1gAojvS7nudW3izVdrZ1YPNIOapaXoiQj3zOzSQuXjFqRwT3xc4gfY2+/Hmzvl0JplTA==',key_name='tempest-keypair-121257784',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8117debb786c4549812cc6e7571f6d4d',ramdisk_id='',reservation_id='r-u0j1smmb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-12225578',owner_user_name='tempest-VolumesBackupsTest-12225578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6bef1230e3de4a87aa01df74ec671a23',uuid=28704ae1-91ab-4ea5-99cd-c2ec5475f015,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.857 256736 DEBUG nova.network.os_vif_util [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converting VIF {"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.858 256736 DEBUG nova.network.os_vif_util [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:01:7f,bridge_name='br-int',has_traffic_filtering=True,id=1f274eee-58f6-4dd7-94e0-15819552d2c0,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f274eee-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:53:01 compute-0 nova_compute[256729]: 2025-11-29 07:53:01.859 256736 DEBUG nova.objects.instance [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'pci_devices' on Instance uuid 28704ae1-91ab-4ea5-99cd-c2ec5475f015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.006 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <uuid>28704ae1-91ab-4ea5-99cd-c2ec5475f015</uuid>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <name>instance-00000006</name>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesBackupsTest-instance-861192464</nova:name>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:53:00</nova:creationTime>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:user uuid="6bef1230e3de4a87aa01df74ec671a23">tempest-VolumesBackupsTest-12225578-project-member</nova:user>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:project uuid="8117debb786c4549812cc6e7571f6d4d">tempest-VolumesBackupsTest-12225578</nova:project>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <nova:port uuid="1f274eee-58f6-4dd7-94e0-15819552d2c0">
Nov 29 07:53:02 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <system>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <entry name="serial">28704ae1-91ab-4ea5-99cd-c2ec5475f015</entry>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <entry name="uuid">28704ae1-91ab-4ea5-99cd-c2ec5475f015</entry>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </system>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <os>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   </os>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <features>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   </features>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk">
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       </source>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk.config">
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       </source>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:53:02 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:8f:01:7f"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <target dev="tap1f274eee-58"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/console.log" append="off"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <video>
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </video>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:53:02 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:53:02 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:53:02 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:53:02 compute-0 nova_compute[256729]: </domain>
Nov 29 07:53:02 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.008 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Preparing to wait for external event network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.008 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.009 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.009 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.010 256736 DEBUG nova.virt.libvirt.vif [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-861192464',display_name='tempest-VolumesBackupsTest-instance-861192464',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-861192464',id=6,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPgC7F+GuAVv7pee5KUxxh7J9VHVUJK4LOVvzjaFnO1uxjktB3qZYM2R8ZzJHE1gAojvS7nudW3izVdrZ1YPNIOapaXoiQj3zOzSQuXjFqRwT3xc4gfY2+/Hmzvl0JplTA==',key_name='tempest-keypair-121257784',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8117debb786c4549812cc6e7571f6d4d',ramdisk_id='',reservation_id='r-u0j1smmb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-12225578',owner_user_name='tempest-VolumesBackupsTest-12225578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6bef1230e3de4a87aa01df74ec671a23',uuid=28704ae1-91ab-4ea5-99cd-c2ec5475f015,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.010 256736 DEBUG nova.network.os_vif_util [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converting VIF {"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.011 256736 DEBUG nova.network.os_vif_util [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:01:7f,bridge_name='br-int',has_traffic_filtering=True,id=1f274eee-58f6-4dd7-94e0-15819552d2c0,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f274eee-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.011 256736 DEBUG os_vif [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:01:7f,bridge_name='br-int',has_traffic_filtering=True,id=1f274eee-58f6-4dd7-94e0-15819552d2c0,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f274eee-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.012 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.012 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.012 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.018 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.019 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f274eee-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.019 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f274eee-58, col_values=(('external_ids', {'iface-id': '1f274eee-58f6-4dd7-94e0-15819552d2c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:01:7f', 'vm-uuid': '28704ae1-91ab-4ea5-99cd-c2ec5475f015'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.021 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:02 compute-0 NetworkManager[48962]: <info>  [1764402782.0229] manager: (tap1f274eee-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.024 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.027 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.028 256736 INFO os_vif [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:01:7f,bridge_name='br-int',has_traffic_filtering=True,id=1f274eee-58f6-4dd7-94e0-15819552d2c0,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f274eee-58')
Nov 29 07:53:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.7 MiB/s wr, 87 op/s
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.707 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.708 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.709 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No VIF found with MAC fa:16:3e:8f:01:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.709 256736 INFO nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Using config drive
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.729 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.763 256736 DEBUG nova.network.neutron [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updated VIF entry in instance network info cache for port 1f274eee-58f6-4dd7-94e0-15819552d2c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.763 256736 DEBUG nova.network.neutron [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updating instance_info_cache with network_info: [{"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:02 compute-0 nova_compute[256729]: 2025-11-29 07:53:02.873 256736 DEBUG oslo_concurrency.lockutils [req-4053f2c8-367c-417e-9688-7c69ee3b0f11 req-0e3283c8-bad4-40d8-902b-24caa86bcb94 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3451382673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.281 256736 INFO nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Creating config drive at /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/disk.config
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.287 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjhpujto execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.415 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjhpujto" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.438 256736 DEBUG nova.storage.rbd_utils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] rbd image 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.441 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/disk.config 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.627 256736 DEBUG oslo_concurrency.processutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/disk.config 28704ae1-91ab-4ea5-99cd-c2ec5475f015_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.628 256736 INFO nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Deleting local config drive /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015/disk.config because it was imported into RBD.
Nov 29 07:53:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 29 07:53:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 07:53:03 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 29 07:53:03 compute-0 kernel: tap1f274eee-58: entered promiscuous mode
Nov 29 07:53:03 compute-0 NetworkManager[48962]: <info>  [1764402783.6783] manager: (tap1f274eee-58): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 07:53:03 compute-0 ovn_controller[153383]: 2025-11-29T07:53:03Z|00082|binding|INFO|Claiming lport 1f274eee-58f6-4dd7-94e0-15819552d2c0 for this chassis.
Nov 29 07:53:03 compute-0 ovn_controller[153383]: 2025-11-29T07:53:03Z|00083|binding|INFO|1f274eee-58f6-4dd7-94e0-15819552d2c0: Claiming fa:16:3e:8f:01:7f 10.100.0.4
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.724 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:03 compute-0 systemd-udevd[274536]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:53:03 compute-0 systemd-machined[217781]: New machine qemu-6-instance-00000006.
Nov 29 07:53:03 compute-0 NetworkManager[48962]: <info>  [1764402783.7617] device (tap1f274eee-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:53:03 compute-0 NetworkManager[48962]: <info>  [1764402783.7633] device (tap1f274eee-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:53:03 compute-0 sshd-session[274462]: Connection closed by authenticating user root 143.14.121.41 port 54768 [preauth]
Nov 29 07:53:03 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.797 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:03 compute-0 ovn_controller[153383]: 2025-11-29T07:53:03Z|00084|binding|INFO|Setting lport 1f274eee-58f6-4dd7-94e0-15819552d2c0 ovn-installed in OVS
Nov 29 07:53:03 compute-0 nova_compute[256729]: 2025-11-29 07:53:03.802 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:04 compute-0 ovn_controller[153383]: 2025-11-29T07:53:04Z|00085|binding|INFO|Setting lport 1f274eee-58f6-4dd7-94e0-15819552d2c0 up in Southbound
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.148 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:01:7f 10.100.0.4'], port_security=['fa:16:3e:8f:01:7f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '28704ae1-91ab-4ea5-99cd-c2ec5475f015', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8117debb786c4549812cc6e7571f6d4d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac3c0b20-8827-4bae-b233-9118cf035682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b45dfb6d-5934-4acb-b62b-b7104c4a665d, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=1f274eee-58f6-4dd7-94e0-15819552d2c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.150 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 1f274eee-58f6-4dd7-94e0-15819552d2c0 in datapath a24c1904-53b2-4346-8806-9a1bad79dd5c bound to our chassis
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.151 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a24c1904-53b2-4346-8806-9a1bad79dd5c
Nov 29 07:53:04 compute-0 ceph-mon[75050]: pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.7 MiB/s wr, 87 op/s
Nov 29 07:53:04 compute-0 ceph-mon[75050]: osdmap e206: 3 total, 3 up, 3 in
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.165 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ad05066c-3282-4c3f-88fb-353b2cbb4192]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.166 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa24c1904-51 in ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.170 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa24c1904-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.170 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[63e9e362-91f1-4fda-a4c9-cb6cf8fb62bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.172 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[08cf9ddb-b7c6-4f5c-bb76-d067d80f09db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.185 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[819884b5-3cfe-4cb4-85ef-7c9d98bbdee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.206 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8339c5-3a3b-49bb-a637-402a1c098f44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.238 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d33f24-1ca5-4889-ad15-1146047b80c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 systemd-udevd[274539]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:53:04 compute-0 NetworkManager[48962]: <info>  [1764402784.2455] manager: (tapa24c1904-50): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.245 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a366283a-63c8-4bb4-8c56-cea5a8389306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.271 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[2268a189-6527-48c3-a33d-0fc89524e9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.273 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab02e74-ee91-4306-b93b-31ebf57bdf74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 NetworkManager[48962]: <info>  [1764402784.2961] device (tapa24c1904-50): carrier: link connected
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.301 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bca446-c195-45f2-970c-91f650c4a897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.317 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[043ab087-295f-4000-8e95-c46465f78305]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa24c1904-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:15:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509936, 'reachable_time': 23655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274613, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.331 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf1859f-5624-40df-8445-17ddcb4cc7cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:158a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509936, 'tstamp': 509936}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274615, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.342 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402784.342149, 28704ae1-91ab-4ea5-99cd-c2ec5475f015 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.342 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] VM Started (Lifecycle Event)
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.345 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[544ca476-4006-406d-9ae1-2611cb6277e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa24c1904-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:15:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509936, 'reachable_time': 23655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274616, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.366 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[88e03170-c197-431f-985b-d3f3508fea04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.422 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[729adebe-b0fa-4ab6-a10e-8d03b344cfd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.425 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa24c1904-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.425 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.426 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa24c1904-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.428 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:04 compute-0 NetworkManager[48962]: <info>  [1764402784.4286] manager: (tapa24c1904-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 07:53:04 compute-0 kernel: tapa24c1904-50: entered promiscuous mode
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.432 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa24c1904-50, col_values=(('external_ids', {'iface-id': '11f9d079-79cd-4588-8ec9-e7d71108206b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.433 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.435 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.436 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a24c1904-53b2-4346-8806-9a1bad79dd5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a24c1904-53b2-4346-8806-9a1bad79dd5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.436 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[520d3d90-5454-448a-b878-c7331ce4e7c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.437 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-a24c1904-53b2-4346-8806-9a1bad79dd5c
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/a24c1904-53b2-4346-8806-9a1bad79dd5c.pid.haproxy
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID a24c1904-53b2-4346-8806-9a1bad79dd5c
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:53:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:04.437 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'env', 'PROCESS_TAG=haproxy-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a24c1904-53b2-4346-8806-9a1bad79dd5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:53:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 07:53:04 compute-0 podman[274648]: 2025-11-29 07:53:04.771647703 +0000 UTC m=+0.050198865 container create 386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:04 compute-0 ovn_controller[153383]: 2025-11-29T07:53:04Z|00086|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:53:04 compute-0 systemd[1]: Started libpod-conmon-386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541.scope.
Nov 29 07:53:04 compute-0 podman[274648]: 2025-11-29 07:53:04.742551915 +0000 UTC m=+0.021103127 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.883 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eaf912f6665db441d6f00dd0d05490513669784fb88f84adf237d95072c56f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:04 compute-0 podman[274648]: 2025-11-29 07:53:04.903680756 +0000 UTC m=+0.182231938 container init 386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 07:53:04 compute-0 podman[274648]: 2025-11-29 07:53:04.913773886 +0000 UTC m=+0.192325048 container start 386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.925 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.930 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402784.342252, 28704ae1-91ab-4ea5-99cd-c2ec5475f015 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:04 compute-0 nova_compute[256729]: 2025-11-29 07:53:04.931 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] VM Paused (Lifecycle Event)
Nov 29 07:53:04 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [NOTICE]   (274667) : New worker (274669) forked
Nov 29 07:53:04 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [NOTICE]   (274667) : Loading success.
Nov 29 07:53:05 compute-0 sudo[274678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:05 compute-0 sudo[274678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:05 compute-0 sudo[274678]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:05 compute-0 nova_compute[256729]: 2025-11-29 07:53:05.025 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:05 compute-0 nova_compute[256729]: 2025-11-29 07:53:05.029 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:53:05 compute-0 sudo[274703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:05 compute-0 sudo[274703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:05 compute-0 sudo[274703]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:53:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3270 syncs, 3.45 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5611 writes, 17K keys, 5611 commit groups, 1.0 writes per commit group, ingest: 9.49 MB, 0.02 MB/s
                                           Interval WAL: 5611 writes, 2346 syncs, 2.39 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:53:05 compute-0 sudo[274728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:05 compute-0 sudo[274728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:05 compute-0 sudo[274728]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:05 compute-0 ceph-mon[75050]: pgmap v1426: 305 pgs: 305 active+clean; 88 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 07:53:05 compute-0 sudo[274753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:53:05 compute-0 sudo[274753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:05 compute-0 nova_compute[256729]: 2025-11-29 07:53:05.261 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:53:05
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:53:05 compute-0 sudo[274753]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:53:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:53:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:53:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:53:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1168e73e-6009-478f-aa79-20eabc0f6246 does not exist
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f353bcbc-ef0d-44af-9716-964d3ca6aa45 does not exist
Nov 29 07:53:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 46ada53c-0485-4640-b865-62a9deffa367 does not exist
Nov 29 07:53:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:53:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:53:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:53:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:53:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:53:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:05 compute-0 sudo[274808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:05 compute-0 sudo[274808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:05 compute-0 sudo[274808]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:05 compute-0 sudo[274833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:05 compute-0 sudo[274833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:05 compute-0 sudo[274833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:06 compute-0 sudo[274858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:06 compute-0 sudo[274858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:06 compute-0 sudo[274858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:06 compute-0 sudo[274883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:53:06 compute-0 sudo[274883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.180 256736 DEBUG nova.compute.manager [req-1d5d02d9-f9f3-44ad-9c34-470b3a2689c0 req-074178eb-c6e6-4d74-9baf-f6d6e18ad98d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.183 256736 DEBUG oslo_concurrency.lockutils [req-1d5d02d9-f9f3-44ad-9c34-470b3a2689c0 req-074178eb-c6e6-4d74-9baf-f6d6e18ad98d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.183 256736 DEBUG oslo_concurrency.lockutils [req-1d5d02d9-f9f3-44ad-9c34-470b3a2689c0 req-074178eb-c6e6-4d74-9baf-f6d6e18ad98d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.184 256736 DEBUG oslo_concurrency.lockutils [req-1d5d02d9-f9f3-44ad-9c34-470b3a2689c0 req-074178eb-c6e6-4d74-9baf-f6d6e18ad98d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.184 256736 DEBUG nova.compute.manager [req-1d5d02d9-f9f3-44ad-9c34-470b3a2689c0 req-074178eb-c6e6-4d74-9baf-f6d6e18ad98d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Processing event network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.185 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.194 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402786.1899307, 28704ae1-91ab-4ea5-99cd-c2ec5475f015 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.194 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] VM Resumed (Lifecycle Event)
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.197 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.201 256736 INFO nova.virt.libvirt.driver [-] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Instance spawned successfully.
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.201 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.219 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.234 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.242 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.242 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.243 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.244 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.244 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.245 256736 DEBUG nova.virt.libvirt.driver [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:06 compute-0 sshd-session[274546]: Connection closed by authenticating user root 143.14.121.41 port 39048 [preauth]
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.326 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:53:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:53:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:53:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:53:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:53:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.361 256736 INFO nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Took 11.34 seconds to spawn the instance on the hypervisor.
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.362 256736 DEBUG nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.439 256736 INFO nova.compute.manager [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Took 13.86 seconds to build instance.
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.459 256736 DEBUG oslo_concurrency.lockutils [None req-f50f3d50-5888-4ae6-97b3-a9815bd0c85b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:06 compute-0 podman[274946]: 2025-11-29 07:53:06.505041968 +0000 UTC m=+0.029562832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 685 KiB/s wr, 67 op/s
Nov 29 07:53:06 compute-0 nova_compute[256729]: 2025-11-29 07:53:06.788 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:53:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:53:07 compute-0 podman[274946]: 2025-11-29 07:53:07.036168601 +0000 UTC m=+0.560689445 container create 3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:07 compute-0 nova_compute[256729]: 2025-11-29 07:53:07.075 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:07 compute-0 systemd[1]: Started libpod-conmon-3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec.scope.
Nov 29 07:53:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:07 compute-0 podman[274946]: 2025-11-29 07:53:07.175606952 +0000 UTC m=+0.700127816 container init 3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bohr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 07:53:07 compute-0 podman[274946]: 2025-11-29 07:53:07.183488073 +0000 UTC m=+0.708008917 container start 3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:53:07 compute-0 distracted_bohr[274963]: 167 167
Nov 29 07:53:07 compute-0 systemd[1]: libpod-3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec.scope: Deactivated successfully.
Nov 29 07:53:07 compute-0 podman[274946]: 2025-11-29 07:53:07.227932663 +0000 UTC m=+0.752453547 container attach 3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bohr, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 07:53:07 compute-0 podman[274946]: 2025-11-29 07:53:07.229261348 +0000 UTC m=+0.753782202 container died 3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f316cefd683eb9db0704084f2c1074d9f5d1689f33dcb7ba756ba2c93c5197e-merged.mount: Deactivated successfully.
Nov 29 07:53:07 compute-0 podman[274946]: 2025-11-29 07:53:07.326444609 +0000 UTC m=+0.850965453 container remove 3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:53:07 compute-0 systemd[1]: libpod-conmon-3ecd67ecc488e1454c977c1104c066d2ec2c1b897d7c1f2a5d21f2156d9123ec.scope: Deactivated successfully.
Nov 29 07:53:07 compute-0 ceph-mon[75050]: pgmap v1427: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 685 KiB/s wr, 67 op/s
Nov 29 07:53:07 compute-0 podman[274989]: 2025-11-29 07:53:07.55746354 +0000 UTC m=+0.065113163 container create 400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:53:07 compute-0 podman[274989]: 2025-11-29 07:53:07.528709711 +0000 UTC m=+0.036359354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:07 compute-0 systemd[1]: Started libpod-conmon-400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b.scope.
Nov 29 07:53:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600f5738b8b83ca0c3feda8b3bd3763d868e758e3275114df37cd022aba3fd38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600f5738b8b83ca0c3feda8b3bd3763d868e758e3275114df37cd022aba3fd38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600f5738b8b83ca0c3feda8b3bd3763d868e758e3275114df37cd022aba3fd38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600f5738b8b83ca0c3feda8b3bd3763d868e758e3275114df37cd022aba3fd38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600f5738b8b83ca0c3feda8b3bd3763d868e758e3275114df37cd022aba3fd38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:07 compute-0 podman[274989]: 2025-11-29 07:53:07.731461246 +0000 UTC m=+0.239110889 container init 400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:53:07 compute-0 podman[274989]: 2025-11-29 07:53:07.739930694 +0000 UTC m=+0.247580337 container start 400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:07 compute-0 podman[274989]: 2025-11-29 07:53:07.753435015 +0000 UTC m=+0.261084658 container attach 400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:53:08 compute-0 nova_compute[256729]: 2025-11-29 07:53:08.361 256736 DEBUG nova.compute.manager [req-7185616c-addf-4179-84e6-7457bda42b25 req-bd338a1e-4d5c-4086-84d9-debffe274303 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:08 compute-0 nova_compute[256729]: 2025-11-29 07:53:08.363 256736 DEBUG oslo_concurrency.lockutils [req-7185616c-addf-4179-84e6-7457bda42b25 req-bd338a1e-4d5c-4086-84d9-debffe274303 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:08 compute-0 nova_compute[256729]: 2025-11-29 07:53:08.364 256736 DEBUG oslo_concurrency.lockutils [req-7185616c-addf-4179-84e6-7457bda42b25 req-bd338a1e-4d5c-4086-84d9-debffe274303 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:08 compute-0 nova_compute[256729]: 2025-11-29 07:53:08.364 256736 DEBUG oslo_concurrency.lockutils [req-7185616c-addf-4179-84e6-7457bda42b25 req-bd338a1e-4d5c-4086-84d9-debffe274303 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:08 compute-0 nova_compute[256729]: 2025-11-29 07:53:08.364 256736 DEBUG nova.compute.manager [req-7185616c-addf-4179-84e6-7457bda42b25 req-bd338a1e-4d5c-4086-84d9-debffe274303 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] No waiting events found dispatching network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:53:08 compute-0 nova_compute[256729]: 2025-11-29 07:53:08.364 256736 WARNING nova.compute.manager [req-7185616c-addf-4179-84e6-7457bda42b25 req-bd338a1e-4d5c-4086-84d9-debffe274303 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received unexpected event network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 for instance with vm_state active and task_state None.
Nov 29 07:53:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:53:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1163033074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:53:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1163033074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 18 KiB/s wr, 43 op/s
Nov 29 07:53:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1163033074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1163033074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:09 compute-0 festive_turing[275005]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:53:09 compute-0 festive_turing[275005]: --> relative data size: 1.0
Nov 29 07:53:09 compute-0 festive_turing[275005]: --> All data devices are unavailable
Nov 29 07:53:09 compute-0 systemd[1]: libpod-400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b.scope: Deactivated successfully.
Nov 29 07:53:09 compute-0 systemd[1]: libpod-400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b.scope: Consumed 1.345s CPU time.
Nov 29 07:53:09 compute-0 podman[275035]: 2025-11-29 07:53:09.200873668 +0000 UTC m=+0.040672090 container died 400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-600f5738b8b83ca0c3feda8b3bd3763d868e758e3275114df37cd022aba3fd38-merged.mount: Deactivated successfully.
Nov 29 07:53:09 compute-0 podman[275035]: 2025-11-29 07:53:09.510422112 +0000 UTC m=+0.350220504 container remove 400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:53:09 compute-0 systemd[1]: libpod-conmon-400d79690dd9b46e308515b6478f4c75cf13b76887a823cee4a4596034e8586b.scope: Deactivated successfully.
Nov 29 07:53:09 compute-0 sudo[274883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:09 compute-0 sudo[275050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:09 compute-0 sudo[275050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:09 compute-0 sudo[275050]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:09 compute-0 sudo[275075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:09 compute-0 sudo[275075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:09 compute-0 sudo[275075]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:09 compute-0 sudo[275100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:09 compute-0 sudo[275100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:09 compute-0 sudo[275100]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:09 compute-0 sudo[275125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:53:09 compute-0 sudo[275125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:09 compute-0 ceph-mon[75050]: pgmap v1428: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 18 KiB/s wr, 43 op/s
Nov 29 07:53:10 compute-0 podman[275192]: 2025-11-29 07:53:10.241850645 +0000 UTC m=+0.051263903 container create 8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 07:53:10 compute-0 NetworkManager[48962]: <info>  [1764402790.2503] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 07:53:10 compute-0 NetworkManager[48962]: <info>  [1764402790.2510] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 07:53:10 compute-0 nova_compute[256729]: 2025-11-29 07:53:10.249 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:10 compute-0 systemd[1]: Started libpod-conmon-8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be.scope.
Nov 29 07:53:10 compute-0 podman[275192]: 2025-11-29 07:53:10.215792047 +0000 UTC m=+0.025205335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:53:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3157 syncs, 3.66 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4854 writes, 17K keys, 4854 commit groups, 1.0 writes per commit group, ingest: 10.20 MB, 0.02 MB/s
                                           Interval WAL: 4854 writes, 1950 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:53:10 compute-0 nova_compute[256729]: 2025-11-29 07:53:10.388 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:10 compute-0 podman[275192]: 2025-11-29 07:53:10.39826456 +0000 UTC m=+0.207677858 container init 8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:53:10 compute-0 ovn_controller[153383]: 2025-11-29T07:53:10Z|00087|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:53:10 compute-0 ovn_controller[153383]: 2025-11-29T07:53:10Z|00088|binding|INFO|Releasing lport 11f9d079-79cd-4588-8ec9-e7d71108206b from this chassis (sb_readonly=0)
Nov 29 07:53:10 compute-0 nova_compute[256729]: 2025-11-29 07:53:10.401 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:10 compute-0 podman[275192]: 2025-11-29 07:53:10.405633078 +0000 UTC m=+0.215046356 container start 8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:53:10 compute-0 laughing_sammet[275207]: 167 167
Nov 29 07:53:10 compute-0 systemd[1]: libpod-8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be.scope: Deactivated successfully.
Nov 29 07:53:10 compute-0 podman[275192]: 2025-11-29 07:53:10.411861274 +0000 UTC m=+0.221274672 container attach 8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:10 compute-0 podman[275192]: 2025-11-29 07:53:10.412157552 +0000 UTC m=+0.221570820 container died 8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-815372b469b179ef27be9044dd77e69008b82ad7239f8a34c642f9db804fa022-merged.mount: Deactivated successfully.
Nov 29 07:53:10 compute-0 podman[275192]: 2025-11-29 07:53:10.459339695 +0000 UTC m=+0.268752963 container remove 8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 07:53:10 compute-0 systemd[1]: libpod-conmon-8cdc5805a2ebe2260c3abddc076f9e97ffed1f43ebe76dd285954908c46610be.scope: Deactivated successfully.
Nov 29 07:53:10 compute-0 podman[275234]: 2025-11-29 07:53:10.629402296 +0000 UTC m=+0.049020674 container create 993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 29 07:53:10 compute-0 systemd[1]: Started libpod-conmon-993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec.scope.
Nov 29 07:53:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 18 KiB/s wr, 43 op/s
Nov 29 07:53:10 compute-0 podman[275234]: 2025-11-29 07:53:10.606120703 +0000 UTC m=+0.025739111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc0fb39c9af1d732fcfd606d4acaec8a9fc78308bb05175a9b2927f2cf49fc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc0fb39c9af1d732fcfd606d4acaec8a9fc78308bb05175a9b2927f2cf49fc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc0fb39c9af1d732fcfd606d4acaec8a9fc78308bb05175a9b2927f2cf49fc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc0fb39c9af1d732fcfd606d4acaec8a9fc78308bb05175a9b2927f2cf49fc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:10 compute-0 podman[275234]: 2025-11-29 07:53:10.738613928 +0000 UTC m=+0.158232366 container init 993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:53:10 compute-0 podman[275234]: 2025-11-29 07:53:10.745752339 +0000 UTC m=+0.165370727 container start 993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:53:10 compute-0 podman[275234]: 2025-11-29 07:53:10.749694445 +0000 UTC m=+0.169312853 container attach 993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:53:11 compute-0 sshd-session[274960]: Connection closed by authenticating user root 143.14.121.41 port 39058 [preauth]
Nov 29 07:53:11 compute-0 nova_compute[256729]: 2025-11-29 07:53:11.457 256736 DEBUG nova.compute.manager [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-changed-1f274eee-58f6-4dd7-94e0-15819552d2c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:11 compute-0 nova_compute[256729]: 2025-11-29 07:53:11.458 256736 DEBUG nova.compute.manager [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Refreshing instance network info cache due to event network-changed-1f274eee-58f6-4dd7-94e0-15819552d2c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:53:11 compute-0 nova_compute[256729]: 2025-11-29 07:53:11.459 256736 DEBUG oslo_concurrency.lockutils [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:11 compute-0 nova_compute[256729]: 2025-11-29 07:53:11.459 256736 DEBUG oslo_concurrency.lockutils [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:11 compute-0 nova_compute[256729]: 2025-11-29 07:53:11.459 256736 DEBUG nova.network.neutron [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Refreshing network info cache for port 1f274eee-58f6-4dd7-94e0-15819552d2c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:53:11 compute-0 trusting_swartz[275250]: {
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:     "0": [
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:         {
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "devices": [
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "/dev/loop3"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             ],
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_name": "ceph_lv0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_size": "21470642176",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "name": "ceph_lv0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "tags": {
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cluster_name": "ceph",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.crush_device_class": "",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.encrypted": "0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osd_id": "0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.type": "block",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.vdo": "0"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             },
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "type": "block",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "vg_name": "ceph_vg0"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:         }
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:     ],
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:     "1": [
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:         {
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "devices": [
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "/dev/loop4"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             ],
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_name": "ceph_lv1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_size": "21470642176",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "name": "ceph_lv1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "tags": {
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cluster_name": "ceph",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.crush_device_class": "",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.encrypted": "0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osd_id": "1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.type": "block",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.vdo": "0"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             },
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "type": "block",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "vg_name": "ceph_vg1"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:         }
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:     ],
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:     "2": [
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:         {
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "devices": [
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "/dev/loop5"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             ],
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_name": "ceph_lv2",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_size": "21470642176",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "name": "ceph_lv2",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "tags": {
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.cluster_name": "ceph",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.crush_device_class": "",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.encrypted": "0",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osd_id": "2",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.type": "block",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:                 "ceph.vdo": "0"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             },
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "type": "block",
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:             "vg_name": "ceph_vg2"
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:         }
Nov 29 07:53:11 compute-0 trusting_swartz[275250]:     ]
Nov 29 07:53:11 compute-0 trusting_swartz[275250]: }
Nov 29 07:53:11 compute-0 systemd[1]: libpod-993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec.scope: Deactivated successfully.
Nov 29 07:53:11 compute-0 podman[275234]: 2025-11-29 07:53:11.551205363 +0000 UTC m=+0.970823731 container died 993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecc0fb39c9af1d732fcfd606d4acaec8a9fc78308bb05175a9b2927f2cf49fc5-merged.mount: Deactivated successfully.
Nov 29 07:53:11 compute-0 podman[275234]: 2025-11-29 07:53:11.633254668 +0000 UTC m=+1.052873036 container remove 993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:11 compute-0 systemd[1]: libpod-conmon-993bb9690f05f4a6c7c54b8df85a73660810063eee85e38cd437dd0b35c954ec.scope: Deactivated successfully.
Nov 29 07:53:11 compute-0 sudo[275125]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:11 compute-0 sudo[275272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:11 compute-0 sudo[275272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:11 compute-0 sudo[275272]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:11 compute-0 sudo[275297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:53:11 compute-0 sudo[275297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:11 compute-0 sudo[275297]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:11 compute-0 nova_compute[256729]: 2025-11-29 07:53:11.790 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:11 compute-0 sudo[275322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:11 compute-0 sudo[275322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:11 compute-0 sudo[275322]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:11 compute-0 ceph-mon[75050]: pgmap v1429: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 540 KiB/s rd, 18 KiB/s wr, 43 op/s
Nov 29 07:53:11 compute-0 sudo[275347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:53:11 compute-0 sudo[275347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:12 compute-0 nova_compute[256729]: 2025-11-29 07:53:12.077 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:12 compute-0 podman[275411]: 2025-11-29 07:53:12.239525152 +0000 UTC m=+0.028835253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:12 compute-0 podman[275411]: 2025-11-29 07:53:12.512947129 +0000 UTC m=+0.302257230 container create d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cohen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:53:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 88 op/s
Nov 29 07:53:12 compute-0 systemd[1]: Started libpod-conmon-d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f.scope.
Nov 29 07:53:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:12 compute-0 podman[275411]: 2025-11-29 07:53:12.97413011 +0000 UTC m=+0.763440201 container init d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cohen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:12 compute-0 podman[275411]: 2025-11-29 07:53:12.981607281 +0000 UTC m=+0.770917352 container start d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cohen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 07:53:12 compute-0 vigorous_cohen[275429]: 167 167
Nov 29 07:53:12 compute-0 podman[275411]: 2025-11-29 07:53:12.988176806 +0000 UTC m=+0.777486907 container attach d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cohen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 07:53:12 compute-0 systemd[1]: libpod-d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f.scope: Deactivated successfully.
Nov 29 07:53:12 compute-0 podman[275411]: 2025-11-29 07:53:12.988919966 +0000 UTC m=+0.778230027 container died d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cohen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f84a396481748b5e246504616d247c46c74c5460cbce4d76a97c755aff29e5d0-merged.mount: Deactivated successfully.
Nov 29 07:53:13 compute-0 podman[275411]: 2025-11-29 07:53:13.034268819 +0000 UTC m=+0.823578920 container remove d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:53:13 compute-0 systemd[1]: libpod-conmon-d9fc4ddb3838e1ccd03c59fbd5335a56dff259216fcd3da7ac83bf1d90f68c4f.scope: Deactivated successfully.
Nov 29 07:53:13 compute-0 nova_compute[256729]: 2025-11-29 07:53:13.103 256736 DEBUG nova.network.neutron [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updated VIF entry in instance network info cache for port 1f274eee-58f6-4dd7-94e0-15819552d2c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:53:13 compute-0 nova_compute[256729]: 2025-11-29 07:53:13.105 256736 DEBUG nova.network.neutron [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updating instance_info_cache with network_info: [{"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:13 compute-0 podman[275452]: 2025-11-29 07:53:13.207068593 +0000 UTC m=+0.036604040 container create dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:53:13 compute-0 systemd[1]: Started libpod-conmon-dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92.scope.
Nov 29 07:53:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9375b8e7661627daa191b3d8a12089d23e3a0d325a670355bd3c5a9614cd1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9375b8e7661627daa191b3d8a12089d23e3a0d325a670355bd3c5a9614cd1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9375b8e7661627daa191b3d8a12089d23e3a0d325a670355bd3c5a9614cd1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c9375b8e7661627daa191b3d8a12089d23e3a0d325a670355bd3c5a9614cd1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:13 compute-0 podman[275452]: 2025-11-29 07:53:13.191764574 +0000 UTC m=+0.021300051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:53:13 compute-0 podman[275452]: 2025-11-29 07:53:13.292688075 +0000 UTC m=+0.122223542 container init dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 07:53:13 compute-0 podman[275452]: 2025-11-29 07:53:13.299173638 +0000 UTC m=+0.128709085 container start dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:53:13 compute-0 nova_compute[256729]: 2025-11-29 07:53:13.304 256736 DEBUG oslo_concurrency.lockutils [req-29b3635f-8da4-4f94-b984-0ba43b7dda06 req-6babdef1-66d2-4504-8489-ca8f19327c1e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:13 compute-0 podman[275452]: 2025-11-29 07:53:13.307003368 +0000 UTC m=+0.136538835 container attach dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:53:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:13 compute-0 ceph-mon[75050]: pgmap v1430: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 88 op/s
Nov 29 07:53:14 compute-0 crazy_bartik[275468]: {
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "osd_id": 2,
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "type": "bluestore"
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:     },
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "osd_id": 1,
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "type": "bluestore"
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:     },
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "osd_id": 0,
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:         "type": "bluestore"
Nov 29 07:53:14 compute-0 crazy_bartik[275468]:     }
Nov 29 07:53:14 compute-0 crazy_bartik[275468]: }
Nov 29 07:53:14 compute-0 systemd[1]: libpod-dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92.scope: Deactivated successfully.
Nov 29 07:53:14 compute-0 podman[275501]: 2025-11-29 07:53:14.329532321 +0000 UTC m=+0.025414101 container died dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:53:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 80 op/s
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:53:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:53:15 compute-0 sshd-session[275259]: Connection closed by authenticating user root 143.14.121.41 port 39072 [preauth]
Nov 29 07:53:16 compute-0 ceph-mon[75050]: pgmap v1431: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 80 op/s
Nov 29 07:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c9375b8e7661627daa191b3d8a12089d23e3a0d325a670355bd3c5a9614cd1c-merged.mount: Deactivated successfully.
Nov 29 07:53:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 07:53:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.5 total, 600.0 interval
                                           Cumulative writes: 9552 writes, 39K keys, 9552 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9552 writes, 2490 syncs, 3.84 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3896 writes, 15K keys, 3896 commit groups, 1.0 writes per commit group, ingest: 9.05 MB, 0.02 MB/s
                                           Interval WAL: 3896 writes, 1603 syncs, 2.43 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 07:53:16 compute-0 podman[275501]: 2025-11-29 07:53:16.508623603 +0000 UTC m=+2.204505363 container remove dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bartik, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 07:53:16 compute-0 systemd[1]: libpod-conmon-dd7da58cfe1143e62a669649f46c9d230db44b19c71cf7543d5194f0d9041f92.scope: Deactivated successfully.
Nov 29 07:53:16 compute-0 sudo[275347]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:53:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:53:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:53:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:53:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1fd0b38a-90e6-4dc4-865a-726b03c2c196 does not exist
Nov 29 07:53:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0cd0a441-15a4-4001-a474-ce44010b9d14 does not exist
Nov 29 07:53:16 compute-0 sudo[275518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:53:16 compute-0 sudo[275518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:16 compute-0 sudo[275518]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 29 07:53:16 compute-0 sudo[275543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:53:16 compute-0 sudo[275543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:53:16 compute-0 sudo[275543]: pam_unix(sudo:session): session closed for user root
Nov 29 07:53:16 compute-0 nova_compute[256729]: 2025-11-29 07:53:16.792 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:17 compute-0 nova_compute[256729]: 2025-11-29 07:53:17.080 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:53:17 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:53:17 compute-0 ceph-mon[75050]: pgmap v1432: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Nov 29 07:53:17 compute-0 sshd-session[275514]: Invalid user postgres from 143.14.121.41 port 46026
Nov 29 07:53:18 compute-0 sshd-session[275514]: Connection closed by invalid user postgres 143.14.121.41 port 46026 [preauth]
Nov 29 07:53:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Nov 29 07:53:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:19 compute-0 podman[275572]: 2025-11-29 07:53:19.735653189 +0000 UTC m=+0.089768073 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:53:19 compute-0 podman[275571]: 2025-11-29 07:53:19.744202128 +0000 UTC m=+0.102597577 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:53:19 compute-0 ceph-mon[75050]: pgmap v1433: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Nov 29 07:53:19 compute-0 podman[275570]: 2025-11-29 07:53:19.799851766 +0000 UTC m=+0.161684177 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:53:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 170 B/s wr, 54 op/s
Nov 29 07:53:21 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Check health
Nov 29 07:53:21 compute-0 sshd-session[275568]: Invalid user postgres from 143.14.121.41 port 46028
Nov 29 07:53:21 compute-0 nova_compute[256729]: 2025-11-29 07:53:21.793 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:22 compute-0 sshd-session[275568]: Connection closed by invalid user postgres 143.14.121.41 port 46028 [preauth]
Nov 29 07:53:22 compute-0 ovn_controller[153383]: 2025-11-29T07:53:22Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:01:7f 10.100.0.4
Nov 29 07:53:22 compute-0 ovn_controller[153383]: 2025-11-29T07:53:22Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:01:7f 10.100.0.4
Nov 29 07:53:22 compute-0 nova_compute[256729]: 2025-11-29 07:53:22.122 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:22 compute-0 ceph-mon[75050]: pgmap v1434: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 170 B/s wr, 54 op/s
Nov 29 07:53:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 116 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 29 07:53:23 compute-0 ceph-mon[75050]: pgmap v1435: 305 pgs: 305 active+clean; 116 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 29 07:53:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:24 compute-0 nova_compute[256729]: 2025-11-29 07:53:24.667 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 96 op/s
Nov 29 07:53:25 compute-0 ceph-mon[75050]: pgmap v1436: 305 pgs: 305 active+clean; 150 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 96 op/s
Nov 29 07:53:25 compute-0 sshd-session[275631]: Invalid user postgres from 143.14.121.41 port 52778
Nov 29 07:53:26 compute-0 sshd-session[275631]: Connection closed by invalid user postgres 143.14.121.41 port 52778 [preauth]
Nov 29 07:53:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 29 07:53:26 compute-0 nova_compute[256729]: 2025-11-29 07:53:26.796 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:27 compute-0 nova_compute[256729]: 2025-11-29 07:53:27.124 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:27 compute-0 ceph-mon[75050]: pgmap v1437: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 109 op/s
Nov 29 07:53:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Nov 29 07:53:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:29 compute-0 ceph-mon[75050]: pgmap v1438: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Nov 29 07:53:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:53:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1957308754' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:53:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1957308754' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1957308754' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1957308754' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:30 compute-0 sshd-session[275633]: Invalid user peter from 143.14.121.41 port 52782
Nov 29 07:53:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 07:53:30 compute-0 sshd-session[275633]: Connection closed by invalid user peter 143.14.121.41 port 52782 [preauth]
Nov 29 07:53:31 compute-0 ceph-mon[75050]: pgmap v1439: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 107 op/s
Nov 29 07:53:31 compute-0 nova_compute[256729]: 2025-11-29 07:53:31.798 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:32 compute-0 nova_compute[256729]: 2025-11-29 07:53:32.126 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 139 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 122 op/s
Nov 29 07:53:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:33 compute-0 ceph-mon[75050]: pgmap v1440: 305 pgs: 305 active+clean; 139 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 122 op/s
Nov 29 07:53:34 compute-0 sshd-session[275635]: Invalid user admin from 143.14.121.41 port 52784
Nov 29 07:53:34 compute-0 sshd-session[275635]: Connection closed by invalid user admin 143.14.121.41 port 52784 [preauth]
Nov 29 07:53:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 29 07:53:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 29 07:53:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 07:53:34 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 29 07:53:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:53:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:53:35 compute-0 ceph-mon[75050]: pgmap v1441: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Nov 29 07:53:35 compute-0 ceph-mon[75050]: osdmap e207: 3 total, 3 up, 3 in
Nov 29 07:53:36 compute-0 nova_compute[256729]: 2025-11-29 07:53:36.218 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 22 op/s
Nov 29 07:53:36 compute-0 nova_compute[256729]: 2025-11-29 07:53:36.801 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:37 compute-0 nova_compute[256729]: 2025-11-29 07:53:37.127 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:37 compute-0 sshd-session[275637]: Invalid user admin from 143.14.121.41 port 58876
Nov 29 07:53:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 29 07:53:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 07:53:37 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 29 07:53:37 compute-0 ceph-mon[75050]: pgmap v1443: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 22 op/s
Nov 29 07:53:37 compute-0 sshd-session[275637]: Connection closed by invalid user admin 143.14.121.41 port 58876 [preauth]
Nov 29 07:53:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 24 KiB/s wr, 41 op/s
Nov 29 07:53:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:38 compute-0 ceph-mon[75050]: osdmap e208: 3 total, 3 up, 3 in
Nov 29 07:53:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:53:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150621528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:53:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150621528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:39 compute-0 ceph-mon[75050]: pgmap v1445: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 24 KiB/s wr, 41 op/s
Nov 29 07:53:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/150621528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:53:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/150621528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:53:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 6.6 KiB/s wr, 19 op/s
Nov 29 07:53:41 compute-0 nova_compute[256729]: 2025-11-29 07:53:41.803 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:41 compute-0 ceph-mon[75050]: pgmap v1446: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 6.6 KiB/s wr, 19 op/s
Nov 29 07:53:42 compute-0 nova_compute[256729]: 2025-11-29 07:53:42.130 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 47 op/s
Nov 29 07:53:43 compute-0 nova_compute[256729]: 2025-11-29 07:53:43.638 256736 DEBUG oslo_concurrency.lockutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:43 compute-0 nova_compute[256729]: 2025-11-29 07:53:43.639 256736 DEBUG oslo_concurrency.lockutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:43 compute-0 nova_compute[256729]: 2025-11-29 07:53:43.658 256736 DEBUG nova.objects.instance [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'flavor' on Instance uuid 28704ae1-91ab-4ea5-99cd-c2ec5475f015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:43 compute-0 nova_compute[256729]: 2025-11-29 07:53:43.714 256736 INFO nova.virt.libvirt.driver [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Ignoring supplied device name: /dev/vdb
Nov 29 07:53:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 29 07:53:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 07:53:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 29 07:53:43 compute-0 ceph-mon[75050]: pgmap v1447: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 47 op/s
Nov 29 07:53:43 compute-0 ceph-mon[75050]: osdmap e209: 3 total, 3 up, 3 in
Nov 29 07:53:44 compute-0 nova_compute[256729]: 2025-11-29 07:53:44.020 256736 DEBUG oslo_concurrency.lockutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:44 compute-0 nova_compute[256729]: 2025-11-29 07:53:44.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:44 compute-0 nova_compute[256729]: 2025-11-29 07:53:44.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:53:44 compute-0 sshd-session[275639]: Invalid user admin from 143.14.121.41 port 58882
Nov 29 07:53:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.7 KiB/s wr, 51 op/s
Nov 29 07:53:44 compute-0 sshd-session[275639]: Connection closed by invalid user admin 143.14.121.41 port 58882 [preauth]
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.554 256736 DEBUG oslo_concurrency.lockutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.554 256736 DEBUG oslo_concurrency.lockutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.555 256736 INFO nova.compute.manager [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Attaching volume cfb594ec-67c5-461f-a9db-5237721ba7ec to /dev/vdb
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.929 256736 DEBUG os_brick.utils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.930 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.949 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.949 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[957d22d5-ddaa-4692-99d4-2aefb44d3849]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.951 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.962 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.962 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1025e9-5bc8-4fde-b83e-592e67a2e902]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.964 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.973 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.974 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ef3978-52a8-487f-a764-c64eb516194d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.976 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[30a0d68c-f3b9-4af7-8eac-2cb6abdbab1b]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:45 compute-0 nova_compute[256729]: 2025-11-29 07:53:45.976 256736 DEBUG oslo_concurrency.processutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:45 compute-0 ceph-mon[75050]: pgmap v1449: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.7 KiB/s wr, 51 op/s
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.011 256736 DEBUG oslo_concurrency.processutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.014 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.014 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.014 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.015 256736 DEBUG os_brick.utils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.015 256736 DEBUG nova.virt.block_device [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updating existing volume attachment record: 6a9118f0-5462-4af5-91c8-5d183c3f33c9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:53:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 35 op/s
Nov 29 07:53:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1317149670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:46.733 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:46.735 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.734 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.843 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.904 256736 DEBUG nova.objects.instance [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'flavor' on Instance uuid 28704ae1-91ab-4ea5-99cd-c2ec5475f015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1317149670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.989 256736 DEBUG nova.virt.libvirt.driver [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Attempting to attach volume cfb594ec-67c5-461f-a9db-5237721ba7ec with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:53:46 compute-0 nova_compute[256729]: 2025-11-29 07:53:46.994 256736 DEBUG nova.virt.libvirt.guest [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:53:46 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:53:46 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-cfb594ec-67c5-461f-a9db-5237721ba7ec">
Nov 29 07:53:46 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:53:46 compute-0 nova_compute[256729]:   </source>
Nov 29 07:53:46 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:53:46 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:53:46 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:53:46 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:53:46 compute-0 nova_compute[256729]:   <serial>cfb594ec-67c5-461f-a9db-5237721ba7ec</serial>
Nov 29 07:53:46 compute-0 nova_compute[256729]: </disk>
Nov 29 07:53:46 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.132 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.217 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.218 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.219 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.312 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.396 256736 DEBUG nova.virt.libvirt.driver [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.397 256736 DEBUG nova.virt.libvirt.driver [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.397 256736 DEBUG nova.virt.libvirt.driver [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.398 256736 DEBUG nova.virt.libvirt.driver [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] No VIF found with MAC fa:16:3e:8f:01:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.796 256736 DEBUG oslo_concurrency.lockutils [None req-d7736a04-3a04-49b5-9a2f-5fc5af1e9602 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.856 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.856 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:47 compute-0 nova_compute[256729]: 2025-11-29 07:53:47.888 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.005 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.006 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.020 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.021 256736 INFO nova.compute.claims [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:53:48 compute-0 ceph-mon[75050]: pgmap v1450: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.9 KiB/s wr, 35 op/s
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.209 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:48 compute-0 sshd-session[275641]: Connection closed by authenticating user root 143.14.121.41 port 45088 [preauth]
Nov 29 07:53:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779437457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.679 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.686 256736 DEBUG nova.compute.provider_tree [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.6 KiB/s wr, 30 op/s
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.738 256736 DEBUG nova.scheduler.client.report [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:53:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.762 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.763 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.881 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.882 256736 DEBUG nova.network.neutron [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.907 256736 INFO nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:53:48 compute-0 nova_compute[256729]: 2025-11-29 07:53:48.930 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:53:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3779437457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:49 compute-0 ceph-mon[75050]: pgmap v1451: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.6 KiB/s wr, 30 op/s
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.349 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.352 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.352 256736 INFO nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Creating image(s)
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.384 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.421 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.457 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.463 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.544 256736 DEBUG nova.policy [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81f071491e4c48c59662c7feba200299', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0aa15e11d9794e608f3aebb38ea3606a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.558 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.559 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3454850913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.560 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.561 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
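The lock named after the base image checksum serializes concurrent fetches of the same image: only the first request populates the cache, and later callers (like this one, holding the lock for 0.001s) find the file already present. A minimal sketch of that pattern with oslo.concurrency, assuming oslo.concurrency is installed; fetch_image is a hypothetical stand-in for Nova's fetch function and the lock_path value is assumed:

from oslo_concurrency import lockutils

# Lock name is the base image checksum from the log; external=True makes it
# an inter-process file lock, and the lock_path shown here is assumed (Nova
# derives its lock directory from the configured instances path).
@lockutils.synchronized("b24649b5caed77158f656e381ae039c7945f1389",
                        external=True,
                        lock_path="/var/lib/nova/instances/locks")
def fetch_image():
    # Hypothetical body: the first caller downloads the base image; later
    # callers enter here, see the file already exists, and return at once,
    # which is why the logged lock is held for only ~0.001s.
    pass

fetch_image()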
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.595 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:49 compute-0 nova_compute[256729]: 2025-11-29 07:53:49.600 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 8c45989b-e06e-4bd4-9961-e7756223b869_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3454850913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.242 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.308 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 8c45989b-e06e-4bd4-9961-e7756223b869_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.393 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] resizing rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
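Together, the import and resize entries are the copy-then-grow sequence of the Rbd image backend: push the local base file into the vms pool as a format-2 image, then grow it to the flavor's 1 GiB root disk. A sketch of the equivalent CLI calls; the import command mirrors the log verbatim, while the resize step is rendered via the rbd CLI even though Nova itself resizes through librbd (rbd_utils.resize):

import subprocess

BASE = "/var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389"
IMAGE = "8c45989b-e06e-4bd4-9961-e7756223b869_disk"

# Step 1: the logged import (format 2 supports layering and online resize).
subprocess.run(
    ["rbd", "import", "--pool", "vms", BASE, IMAGE, "--image-format=2",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# Step 2: grow to the flavor's 1 GiB root disk; Nova performs this through
# librbd, the CLI form is shown only for illustration.
subprocess.run(
    ["rbd", "resize", "--size", "1G", f"vms/{IMAGE}",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)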
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.499 256736 DEBUG nova.objects.instance [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'migration_context' on Instance uuid 8c45989b-e06e-4bd4-9961-e7756223b869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.515 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.516 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Ensure instance console log exists: /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.516 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.517 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.517 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.6 KiB/s wr, 30 op/s
Nov 29 07:53:50 compute-0 podman[275863]: 2025-11-29 07:53:50.717915419 +0000 UTC m=+0.075046289 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:53:50 compute-0 podman[275862]: 2025-11-29 07:53:50.727995299 +0000 UTC m=+0.089200038 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd)
Nov 29 07:53:50 compute-0 podman[275861]: 2025-11-29 07:53:50.751134728 +0000 UTC m=+0.113361714 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:53:50 compute-0 nova_compute[256729]: 2025-11-29 07:53:50.825 256736 DEBUG nova.network.neutron [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Successfully created port: 071be225-ecaa-4260-bc91-73f144657155 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:53:51 compute-0 sshd-session[275692]: Connection closed by authenticating user root 143.14.121.41 port 45090 [preauth]
Nov 29 07:53:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 29 07:53:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 07:53:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 29 07:53:51 compute-0 ceph-mon[75050]: pgmap v1452: 305 pgs: 305 active+clean; 121 MiB data, 312 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.6 KiB/s wr, 30 op/s
Nov 29 07:53:51 compute-0 ceph-mon[75050]: osdmap e210: 3 total, 3 up, 3 in
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.846 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.865 256736 DEBUG nova.network.neutron [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Successfully updated port: 071be225-ecaa-4260-bc91-73f144657155 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.881 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.881 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquired lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.881 256736 DEBUG nova.network.neutron [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.976 256736 DEBUG nova.compute.manager [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Received event network-changed-071be225-ecaa-4260-bc91-73f144657155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.976 256736 DEBUG nova.compute.manager [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Refreshing instance network info cache due to event network-changed-071be225-ecaa-4260-bc91-73f144657155. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:53:51 compute-0 nova_compute[256729]: 2025-11-29 07:53:51.977 256736 DEBUG oslo_concurrency.lockutils [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.134 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.156 256736 DEBUG nova.network.neutron [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:53:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 29 07:53:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 07:53:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 147 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 897 KiB/s wr, 29 op/s
Nov 29 07:53:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.898 256736 DEBUG nova.network.neutron [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating instance_info_cache with network_info: [{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
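That network_info blob is what later drives VIF plugging and the <interface> element of the guest XML. For illustration, a reduced version of the same structure (values copied from the log) and the traversal that extracts the fields Nova ends up using:

# Shape and values reduced from the logged network_info entry.
network_info = [{
    "id": "071be225-ecaa-4260-bc91-73f144657155",
    "address": "fa:16:3e:e5:b6:df",
    "devname": "tap071be225-ec",
    "network": {
        "subnets": [{"cidr": "10.100.0.0/28",
                     "ips": [{"address": "10.100.0.11", "type": "fixed"}]}],
        "meta": {"mtu": 1442},
    },
}]

for vif in network_info:
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    # -> fa:16:3e:e5:b6:df tap071be225-ec ['10.100.0.11'] 1442
    print(vif["address"], vif["devname"], ips, vif["network"]["meta"]["mtu"])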
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.924 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Releasing lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.924 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Instance network_info: |[{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.925 256736 DEBUG oslo_concurrency.lockutils [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.925 256736 DEBUG nova.network.neutron [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Refreshing network info cache for port 071be225-ecaa-4260-bc91-73f144657155 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.930 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Start _get_guest_xml network_info=[{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.936 256736 WARNING nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.945 256736 DEBUG nova.virt.libvirt.host [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.946 256736 DEBUG nova.virt.libvirt.host [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.956 256736 DEBUG nova.virt.libvirt.host [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.956 256736 DEBUG nova.virt.libvirt.host [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.957 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.957 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.958 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.958 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.959 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.959 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.959 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.959 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.960 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.960 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.960 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.961 256736 DEBUG nova.virt.hardware [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
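These hardware.py entries trace the topology search: with no flavor or image constraints (preferences 0:0:0, limits 65536 each), the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A toy reduction of that enumeration, not nova.virt.hardware itself:

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product is vcpus."""
    return [(s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus]

# One vCPU under the default limits: a single candidate, matching the logged
# "Got 1 possible topologies ... VirtCPUTopology(cores=1,sockets=1,threads=1)".
print(possible_topologies(1))   # [(1, 1, 1)]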
Nov 29 07:53:52 compute-0 nova_compute[256729]: 2025-11-29 07:53:52.964 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:53 compute-0 nova_compute[256729]: 2025-11-29 07:53:53.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2731585066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:53 compute-0 nova_compute[256729]: 2025-11-29 07:53:53.414 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
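ceph mon dump --format=json is how the RBD code discovers monitor addresses, which later surface as the <host name=... port="6789"/> elements in the guest XML. A sketch of issuing the same command and reading the monitor list; the exact JSON layout varies by Ceph release, and mons[].public_addr is an assumed field here:

import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout

dump = json.loads(out)
# Assumed layout: each entry of "mons" carries a "public_addr" such as
# "192.168.122.100:6789/0"; strip the nonce to get host:port.
for mon in dump["mons"]:
    print(mon["name"], mon["public_addr"].split("/")[0])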
Nov 29 07:53:53 compute-0 nova_compute[256729]: 2025-11-29 07:53:53.433 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:53 compute-0 nova_compute[256729]: 2025-11-29 07:53:53.437 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.126 256736 DEBUG nova.network.neutron [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updated VIF entry in instance network info cache for port 071be225-ecaa-4260-bc91-73f144657155. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.127 256736 DEBUG nova.network.neutron [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating instance_info_cache with network_info: [{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586597824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.143 256736 DEBUG oslo_concurrency.lockutils [req-2e86968a-0723-48d7-89e2-ceda30dd979a req-f7c276aa-7126-4e27-9108-71b73c445ac6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:54 compute-0 ceph-mon[75050]: pgmap v1455: 305 pgs: 305 active+clean; 147 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 897 KiB/s wr, 29 op/s
Nov 29 07:53:54 compute-0 ceph-mon[75050]: osdmap e211: 3 total, 3 up, 3 in
Nov 29 07:53:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2731585066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.171 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.172 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.172 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.177 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.179 256736 DEBUG nova.virt.libvirt.vif [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-153694694',display_name='tempest-TestStampPattern-server-153694694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-153694694',id=7,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVIcG7iT8EuYRWwvh0xXPSujdlj7uKuKXhamDHlJ4QJb0wGzod0+Qsrv8DmE1TIN7tAAQa46X3+yrMq9A2yMt4mHHy/8wbOvohqcW7H1CuWupyv3Z+eB3t88xUDCWSqKQ==',key_name='tempest-TestStampPattern-886597490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0aa15e11d9794e608f3aebb38ea3606a',ramdisk_id='',reservation_id='r-rmkjsyj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1135660929',owner_user_name='tempest-TestStampPattern-1135660929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:53:48Z,user_data=None,user_id='81f071491e4c48c59662c7feba200299',uuid=8c45989b-e06e-4bd4-9961-e7756223b869,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
Nov 29 07:53:54 compute-0 nova_compute[256729]: virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.180 256736 DEBUG nova.network.os_vif_util [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converting VIF {"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.181 256736 DEBUG nova.network.os_vif_util [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:b6:df,bridge_name='br-int',has_traffic_filtering=True,id=071be225-ecaa-4260-bc91-73f144657155,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap071be225-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.183 256736 DEBUG nova.objects.instance [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c45989b-e06e-4bd4-9961-e7756223b869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.201 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <uuid>8c45989b-e06e-4bd4-9961-e7756223b869</uuid>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <name>instance-00000007</name>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <nova:name>tempest-TestStampPattern-server-153694694</nova:name>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:53:52</nova:creationTime>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:user uuid="81f071491e4c48c59662c7feba200299">tempest-TestStampPattern-1135660929-project-member</nova:user>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:project uuid="0aa15e11d9794e608f3aebb38ea3606a">tempest-TestStampPattern-1135660929</nova:project>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <nova:port uuid="071be225-ecaa-4260-bc91-73f144657155">
Nov 29 07:53:54 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <system>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <entry name="serial">8c45989b-e06e-4bd4-9961-e7756223b869</entry>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <entry name="uuid">8c45989b-e06e-4bd4-9961-e7756223b869</entry>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </system>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <os>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   </os>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <features>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   </features>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/8c45989b-e06e-4bd4-9961-e7756223b869_disk">
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       </source>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/8c45989b-e06e-4bd4-9961-e7756223b869_disk.config">
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       </source>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:53:54 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:e5:b6:df"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <target dev="tap071be225-ec"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/console.log" append="off"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <video>
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </video>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:53:54 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:53:54 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:53:54 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:53:54 compute-0 nova_compute[256729]: </domain>
Nov 29 07:53:54 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
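Once the journald prefixes are stripped, the domain XML above is a complete parseable document. As a quick consistency check, a sketch that loads a trimmed excerpt of it (copied from the log) and lists the configured disks:

import xml.etree.ElementTree as ET

# Trimmed excerpt of the logged <domain> document, prefixes removed.
domain_xml = """<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/8c45989b-e06e-4bd4-9961-e7756223b869_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>"""

root = ET.fromstring(domain_xml)
for disk in root.iter("disk"):
    src, tgt = disk.find("source"), disk.find("target")
    host = src.find("host")
    # -> vda vms/8c45989b-e06e-4bd4-9961-e7756223b869_disk 192.168.122.100:6789
    print(tgt.get("dev"), src.get("name"),
          f'{host.get("name")}:{host.get("port")}')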
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.203 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Preparing to wait for external event network-vif-plugged-071be225-ecaa-4260-bc91-73f144657155 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.203 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.203 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.203 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
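The Acquiring/acquired/released triple above is oslo.concurrency's lockutils wrapper, which Nova uses to serialize access to its per-instance event table and which logs the waited/held timings around every critical section. The same pattern as a minimal sketch (lock name copied from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("8c45989b-e06e-4bd4-9961-e7756223b869-events")
    def _create_or_get_event():
        # runs with the named lock held; the wrapper emits the
        # "Acquiring" / "acquired" / "released" lines seen above
        pass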
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.204 256736 DEBUG nova.virt.libvirt.vif [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-153694694',display_name='tempest-TestStampPattern-server-153694694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-153694694',id=7,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVIcG7iT8EuYRWwvh0xXPSujdlj7uKuKXhamDHlJ4QJb0wGzod0+Qsrv8DmE1TIN7tAAQa46X3+yrMq9A2yMt4mHHy/8wbOvohqcW7H1CuWupyv3Z+eB3t88xUDCWSqKQ==',key_name='tempest-TestStampPattern-886597490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0aa15e11d9794e608f3aebb38ea3606a',ramdisk_id='',reservation_id='r-rmkjsyj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1135660929',owner_user_name='tempest-TestStampPattern-1135660929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:53:48Z,user_data=None,user_id='81f071491e4c48c59662c7feba200299',uuid=8c45989b-e06e-4bd4-9961-e7756223b869,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.204 256736 DEBUG nova.network.os_vif_util [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converting VIF {"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.205 256736 DEBUG nova.network.os_vif_util [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:b6:df,bridge_name='br-int',has_traffic_filtering=True,id=071be225-ecaa-4260-bc91-73f144657155,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap071be225-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.205 256736 DEBUG os_vif [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:b6:df,bridge_name='br-int',has_traffic_filtering=True,id=071be225-ecaa-4260-bc91-73f144657155,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap071be225-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
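Nova hands the actual plug to the os-vif library: nova_to_osvif_vif converts Neutron's port dict into the typed VIFOpenVSwitch object, then os_vif.plug() dispatches to the 'ovs' plugin. A hedged sketch of that call path, with field values taken from the log and the exact constructor arguments treated as an assumption:

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the ovs plugin among others
    my_vif = vif.VIFOpenVSwitch(
        id="071be225-ecaa-4260-bc91-73f144657155",
        address="fa:16:3e:e5:b6:df",
        bridge_name="br-int",
        vif_name="tap071be225-ec",
        plugin="ovs")
    info = instance_info.InstanceInfo(
        uuid="8c45989b-e06e-4bd4-9961-e7756223b869",
        name="instance-00000007")
    os_vif.plug(my_vif, info)  # -> "Successfully plugged vif ..." below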
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.206 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.206 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.206 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.208 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.209 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.209 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.209 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.209 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.232 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.233 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap071be225-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.234 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap071be225-ec, col_values=(('external_ids', {'iface-id': '071be225-ecaa-4260-bc91-73f144657155', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:b6:df', 'vm-uuid': '8c45989b-e06e-4bd4-9961-e7756223b869'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
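The AddPortCommand/DbSetCommand pair is the ovs plugin driving the local ovsdb-server through ovsdbapp; the external_ids written on the Interface row (iface-id, attached-mac, vm-uuid) are what lets ovn-controller match the port to its logical port a moment later. A rough equivalent, assuming the default ovsdb socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap071be225-ec", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap071be225-ec",
            ("external_ids", {
                "iface-id": "071be225-ecaa-4260-bc91-73f144657155",
                "attached-mac": "fa:16:3e:e5:b6:df",
                "vm-uuid": "8c45989b-e06e-4bd4-9961-e7756223b869"})))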
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.236 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:54 compute-0 NetworkManager[48962]: <info>  [1764402834.2373] manager: (tap071be225-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.240 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.247 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.249 256736 INFO os_vif [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:b6:df,bridge_name='br-int',has_traffic_filtering=True,id=071be225-ecaa-4260-bc91-73f144657155,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap071be225-ec')
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.466 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.467 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.468 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No VIF found with MAC fa:16:3e:e5:b6:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.469 256736 INFO nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Using config drive
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.506 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401628304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.703 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
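The resource audit sizes Ceph-backed storage by shelling out to ceph df, which is the 0.494s round trip recorded above. A minimal sketch of the same probe; the JSON key names follow the usual ceph df layout and should be treated as an assumption:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # cluster-wide totals
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free of "
          f"{stats['total_bytes'] / gib:.1f} GiB")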
Nov 29 07:53:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 MiB/s wr, 62 op/s
Nov 29 07:53:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:54.737 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.787 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.788 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.793 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.793 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.794 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.907 256736 INFO nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Creating config drive at /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/disk.config
Nov 29 07:53:54 compute-0 nova_compute[256729]: 2025-11-29 07:53:54.913 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp47pk0p_j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.056 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp47pk0p_j" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
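A config drive is just an ISO9660 image built over a staging directory of metadata files, so the 0.143s mkisofs run above is the whole build. Re-expressed as a sketch with the arguments copied from the log line (which prints the multi-word -publisher value without quoting):

    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",    # the volume label config-drive readers look for
        "/tmp/tmp47pk0p_j",  # staging directory from the log
    ])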
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.092 256736 DEBUG nova.storage.rbd_utils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image 8c45989b-e06e-4bd4-9961-e7756223b869_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.110 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/disk.config 8c45989b-e06e-4bd4-9961-e7756223b869_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.172 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.175 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4455MB free_disk=59.93585968017578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.175 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.176 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3586597824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1401628304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:55 compute-0 sshd-session[275922]: Connection closed by authenticating user root 143.14.121.41 port 45102 [preauth]
Nov 29 07:53:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:53:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/537792580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.661 256736 DEBUG oslo_concurrency.processutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/disk.config 8c45989b-e06e-4bd4-9961-e7756223b869_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.662 256736 INFO nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Deleting local config drive /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869/disk.config because it was imported into RBD.
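Because this deployment keeps instance disks in RBD, the freshly built ISO is imported into the vms pool and the local copy removed, matching the two lines above. The equivalent step as a subprocess sketch:

    import os
    import subprocess

    src = ("/var/lib/nova/instances/"
           "8c45989b-e06e-4bd4-9961-e7756223b869/disk.config")
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", src,
         "8c45989b-e06e-4bd4-9961-e7756223b869_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    os.remove(src)  # Nova deletes the local file once it lives in RBD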
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.727 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.727 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 8c45989b-e06e-4bd4-9961-e7756223b869 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.728 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.728 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:53:55 compute-0 NetworkManager[48962]: <info>  [1764402835.7576] manager: (tap071be225-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 07:53:55 compute-0 kernel: tap071be225-ec: entered promiscuous mode
Nov 29 07:53:55 compute-0 ovn_controller[153383]: 2025-11-29T07:53:55Z|00089|binding|INFO|Claiming lport 071be225-ecaa-4260-bc91-73f144657155 for this chassis.
Nov 29 07:53:55 compute-0 ovn_controller[153383]: 2025-11-29T07:53:55Z|00090|binding|INFO|071be225-ecaa-4260-bc91-73f144657155: Claiming fa:16:3e:e5:b6:df 10.100.0.11
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.761 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.769 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:b6:df 10.100.0.11'], port_security=['fa:16:3e:e5:b6:df 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8c45989b-e06e-4bd4-9961-e7756223b869', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0aa15e11d9794e608f3aebb38ea3606a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '12be058e-47a2-4b10-9928-e2f6336ca894', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17005c51-b13f-40d9-a999-415174c76777, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=071be225-ecaa-4260-bc91-73f144657155) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.771 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 071be225-ecaa-4260-bc91-73f144657155 in datapath e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 bound to our chassis
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.773 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e678432d-7aa3-4fc9-8ccb-76ec3ffbd276
Nov 29 07:53:55 compute-0 ovn_controller[153383]: 2025-11-29T07:53:55Z|00091|binding|INFO|Setting lport 071be225-ecaa-4260-bc91-73f144657155 ovn-installed in OVS
Nov 29 07:53:55 compute-0 ovn_controller[153383]: 2025-11-29T07:53:55Z|00092|binding|INFO|Setting lport 071be225-ecaa-4260-bc91-73f144657155 up in Southbound
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.781 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.783 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.786 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[53fc28f4-7756-46a9-81cc-4db2b5c89ac2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.787 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape678432d-71 in ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.790 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape678432d-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.790 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e4d146-dc9f-422b-a904-2ac8ecfb157c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.791 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b135fea1-f119-4a2a-abb5-ff6323a783b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 systemd-machined[217781]: New machine qemu-7-instance-00000007.
Nov 29 07:53:55 compute-0 systemd-udevd[276084]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:53:55 compute-0 NetworkManager[48962]: <info>  [1764402835.8088] device (tap071be225-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:53:55 compute-0 NetworkManager[48962]: <info>  [1764402835.8098] device (tap071be225-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.810 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[a1377bf8-c615-4fea-aa3b-89daf87705e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.837 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf4ad21-1543-4314-9e58-c212cc5e92aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.866 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[e93926b6-2a39-4c4b-9fa2-fb82db7a1f7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.871 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[64772d5c-4106-432e-8f32-cd2298ab8e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 NetworkManager[48962]: <info>  [1764402835.8722] manager: (tape678432d-70): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 07:53:55 compute-0 systemd-udevd[276087]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:53:55 compute-0 nova_compute[256729]: 2025-11-29 07:53:55.891 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.904 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3662f9-fb64-47cd-a2fd-5339dd3097a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.907 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[2feff8f6-b13c-4c26-92c3-98343c066b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 NetworkManager[48962]: <info>  [1764402835.9259] device (tape678432d-70): carrier: link connected
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.931 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf25846-3b98-4548-8ae5-620bb71ddcae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.947 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[66e93b9b-9e9f-449e-a4f9-9ca72b096082]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape678432d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:f5:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515099, 'reachable_time': 28884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276117, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.964 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e63fc321-228f-4a6a-8a1c-85dc6c6004fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:f5e6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515099, 'tstamp': 515099}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276118, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:55.986 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9ab0ba-3071-4f7d-954d-38b37b8c2565]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape678432d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:f5:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515099, 'reachable_time': 28884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276119, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.021 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e1dbd6-5284-42bb-8b47-6e2a9ffc2d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.092 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b48cc0e8-df50-4dd6-971e-f83fb666a528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.093 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape678432d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.094 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.094 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape678432d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:56 compute-0 NetworkManager[48962]: <info>  [1764402836.0964] manager: (tape678432d-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 07:53:56 compute-0 kernel: tape678432d-70: entered promiscuous mode
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.096 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.098 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape678432d-70, col_values=(('external_ids', {'iface-id': '83156f7b-0983-4e0f-a70a-261a0d3fbf52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:53:56 compute-0 ovn_controller[153383]: 2025-11-29T07:53:56Z|00093|binding|INFO|Releasing lport 83156f7b-0983-4e0f-a70a-261a0d3fbf52 from this chassis (sb_readonly=0)
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.116 256736 DEBUG nova.compute.manager [req-340ac481-3d05-4da8-b968-dd36ba789f99 req-c61ba61b-58dc-44bc-bafa-6d76c2c1857f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Received event network-vif-plugged-071be225-ecaa-4260-bc91-73f144657155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.116 256736 DEBUG oslo_concurrency.lockutils [req-340ac481-3d05-4da8-b968-dd36ba789f99 req-c61ba61b-58dc-44bc-bafa-6d76c2c1857f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.117 256736 DEBUG oslo_concurrency.lockutils [req-340ac481-3d05-4da8-b968-dd36ba789f99 req-c61ba61b-58dc-44bc-bafa-6d76c2c1857f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.117 256736 DEBUG oslo_concurrency.lockutils [req-340ac481-3d05-4da8-b968-dd36ba789f99 req-c61ba61b-58dc-44bc-bafa-6d76c2c1857f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.118 256736 DEBUG nova.compute.manager [req-340ac481-3d05-4da8-b968-dd36ba789f99 req-c61ba61b-58dc-44bc-bafa-6d76c2c1857f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Processing event network-vif-plugged-071be225-ecaa-4260-bc91-73f144657155 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
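This completes the network-vif-plugged handshake: the spawn thread registered a waiter before plugging (prepare_for_instance_event at 07:53:54.203), Neutron posted the event once OVN bound the port, and the waiter is popped and fired here. The mechanism is essentially a keyed event table; a conceptual, much-simplified sketch (the real implementation is nova.compute.manager.InstanceEvents):

    import threading

    _events = {}              # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()  # plays the role of the "<uuid>-events" lock above

    def prepare(instance, name):
        with _lock:
            return _events.setdefault((instance, name), threading.Event())

    def external_event(instance, name):
        with _lock:
            ev = _events.pop((instance, name), None)
        if ev:
            ev.set()          # releases the spawning thread

    w = prepare("8c45989b", "network-vif-plugged-071be225")
    # ... plug the VIF, define and launch the domain ...
    external_event("8c45989b", "network-vif-plugged-071be225")
    w.wait(timeout=300)       # cf. Nova's vif_plugging_timeout option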
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.121 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.122 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e678432d-7aa3-4fc9-8ccb-76ec3ffbd276.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e678432d-7aa3-4fc9-8ccb-76ec3ffbd276.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.123 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f73f6d68-df6e-49d4-a9b4-508e71c3a955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.124 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/e678432d-7aa3-4fc9-8ccb-76ec3ffbd276.pid.haproxy
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID e678432d-7aa3-4fc9-8ccb-76ec3ffbd276
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:53:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:56.124 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'env', 'PROCESS_TAG=haproxy-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e678432d-7aa3-4fc9-8ccb-76ec3ffbd276.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
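The haproxy just launched binds 169.254.169.254:80 inside the ovnmeta- namespace, tags each request with X-OVN-Network-ID, and forwards it to the metadata agent over the socket named in the server line, which is how guests on this network reach their metadata. A hypothetical in-guest check:

    import urllib.request

    # run from the guest (or inside the ovnmeta- namespace on the host)
    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    print(urllib.request.urlopen(url, timeout=5).read().decode())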
Nov 29 07:53:56 compute-0 ceph-mon[75050]: pgmap v1456: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 MiB/s wr, 62 op/s
Nov 29 07:53:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/537792580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:53:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:53:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4183247007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.358 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.368 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.388 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
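Placement turns that inventory into usable capacity as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs can back 32 guest vCPUs here. Worked through with the numbers above:

    inv = {"VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
           "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
           "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9}}
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2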
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.418 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.418 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 29 07:53:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 07:53:56 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.463 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402836.4622002, 8c45989b-e06e-4bd4-9961-e7756223b869 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.464 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] VM Started (Lifecycle Event)
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.467 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.472 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.475 256736 INFO nova.virt.libvirt.driver [-] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Instance spawned successfully.
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.475 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.496 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.503 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.506 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.507 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.507 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.507 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.508 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.508 256736 DEBUG nova.virt.libvirt.driver [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.537 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.538 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402836.4674158, 8c45989b-e06e-4bd4-9961-e7756223b869 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.538 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] VM Paused (Lifecycle Event)
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.569 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.575 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402836.4706652, 8c45989b-e06e-4bd4-9961-e7756223b869 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.575 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] VM Resumed (Lifecycle Event)
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.586 256736 INFO nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Took 7.24 seconds to spawn the instance on the hypervisor.
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.586 256736 DEBUG nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.600 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.603 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:53:56 compute-0 podman[276215]: 2025-11-29 07:53:56.616398762 +0000 UTC m=+0.087789730 container create a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:53:56 compute-0 podman[276215]: 2025-11-29 07:53:56.573937817 +0000 UTC m=+0.045328865 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.657 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:53:56 compute-0 systemd[1]: Started libpod-conmon-a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362.scope.
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.685 256736 INFO nova.compute.manager [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Took 8.72 seconds to build instance.
Nov 29 07:53:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ecbf876d976597b54a7545fab95108622a93dfc09d4b719cb4cde87b1a9d2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:53:56 compute-0 podman[276215]: 2025-11-29 07:53:56.706106853 +0000 UTC m=+0.177497841 container init a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.708 256736 DEBUG oslo_concurrency.lockutils [None req-09aa6389-e2a7-41d2-ab54-70f76479fd45 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:56 compute-0 podman[276215]: 2025-11-29 07:53:56.711576679 +0000 UTC m=+0.182967637 container start a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:53:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Nov 29 07:53:56 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [NOTICE]   (276234) : New worker (276236) forked
Nov 29 07:53:56 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [NOTICE]   (276234) : Loading success.
Nov 29 07:53:56 compute-0 nova_compute[256729]: 2025-11-29 07:53:56.847 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4183247007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:53:57 compute-0 ceph-mon[75050]: osdmap e212: 3 total, 3 up, 3 in
Nov 29 07:53:57 compute-0 nova_compute[256729]: 2025-11-29 07:53:57.395 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:57 compute-0 nova_compute[256729]: 2025-11-29 07:53:57.396 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:53:57 compute-0 nova_compute[256729]: 2025-11-29 07:53:57.396 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:53:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 29 07:53:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 07:53:57 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 29 07:53:57 compute-0 nova_compute[256729]: 2025-11-29 07:53:57.717 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:53:57 compute-0 nova_compute[256729]: 2025-11-29 07:53:57.718 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:53:57 compute-0 nova_compute[256729]: 2025-11-29 07:53:57.718 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:53:57 compute-0 nova_compute[256729]: 2025-11-29 07:53:57.718 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 28704ae1-91ab-4ea5-99cd-c2ec5475f015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:53:58 compute-0 nova_compute[256729]: 2025-11-29 07:53:58.240 256736 DEBUG nova.compute.manager [req-da411522-7079-4197-8eff-fd2d6ca6b412 req-7a860cf7-f352-43c9-bd37-34dce48bd807 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Received event network-vif-plugged-071be225-ecaa-4260-bc91-73f144657155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:53:58 compute-0 nova_compute[256729]: 2025-11-29 07:53:58.241 256736 DEBUG oslo_concurrency.lockutils [req-da411522-7079-4197-8eff-fd2d6ca6b412 req-7a860cf7-f352-43c9-bd37-34dce48bd807 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:58 compute-0 nova_compute[256729]: 2025-11-29 07:53:58.242 256736 DEBUG oslo_concurrency.lockutils [req-da411522-7079-4197-8eff-fd2d6ca6b412 req-7a860cf7-f352-43c9-bd37-34dce48bd807 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:58 compute-0 nova_compute[256729]: 2025-11-29 07:53:58.242 256736 DEBUG oslo_concurrency.lockutils [req-da411522-7079-4197-8eff-fd2d6ca6b412 req-7a860cf7-f352-43c9-bd37-34dce48bd807 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:58 compute-0 nova_compute[256729]: 2025-11-29 07:53:58.243 256736 DEBUG nova.compute.manager [req-da411522-7079-4197-8eff-fd2d6ca6b412 req-7a860cf7-f352-43c9-bd37-34dce48bd807 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] No waiting events found dispatching network-vif-plugged-071be225-ecaa-4260-bc91-73f144657155 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:53:58 compute-0 nova_compute[256729]: 2025-11-29 07:53:58.243 256736 WARNING nova.compute.manager [req-da411522-7079-4197-8eff-fd2d6ca6b412 req-7a860cf7-f352-43c9-bd37-34dce48bd807 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Received unexpected event network-vif-plugged-071be225-ecaa-4260-bc91-73f144657155 for instance with vm_state active and task_state None.
Nov 29 07:53:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 29 07:53:58 compute-0 ceph-mon[75050]: pgmap v1458: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Nov 29 07:53:58 compute-0 ceph-mon[75050]: osdmap e213: 3 total, 3 up, 3 in
Nov 29 07:53:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 07:53:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 29 07:53:58 compute-0 sshd-session[276069]: Connection closed by authenticating user root 143.14.121.41 port 53814 [preauth]
Nov 29 07:53:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 189 op/s
Nov 29 07:53:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.149222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402839149411, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1592, "num_deletes": 512, "total_data_size": 1778909, "memory_usage": 1812960, "flush_reason": "Manual Compaction"}
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402839172128, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1585256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23987, "largest_seqno": 25577, "table_properties": {"data_size": 1578528, "index_size": 3290, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18342, "raw_average_key_size": 19, "raw_value_size": 1562589, "raw_average_value_size": 1680, "num_data_blocks": 145, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402739, "oldest_key_time": 1764402739, "file_creation_time": 1764402839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 22987 microseconds, and 9027 cpu microseconds.
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.172218) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1585256 bytes OK
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.172236) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.174422) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.174435) EVENT_LOG_v1 {"time_micros": 1764402839174431, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.174454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1770769, prev total WAL file size 1770769, number of live WAL files 2.
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.175097) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1548KB)], [53(9987KB)]
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402839175166, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11812812, "oldest_snapshot_seqno": -1}
Nov 29 07:53:59 compute-0 nova_compute[256729]: 2025-11-29 07:53:59.236 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5254 keys, 8565685 bytes, temperature: kUnknown
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402839291796, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 8565685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8528456, "index_size": 22975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 132560, "raw_average_key_size": 25, "raw_value_size": 8431527, "raw_average_value_size": 1604, "num_data_blocks": 943, "num_entries": 5254, "num_filter_entries": 5254, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.292395) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8565685 bytes
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.294187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.0 rd, 73.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.8 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(12.9) write-amplify(5.4) OK, records in: 6281, records dropped: 1027 output_compression: NoCompression
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.294222) EVENT_LOG_v1 {"time_micros": 1764402839294204, "job": 28, "event": "compaction_finished", "compaction_time_micros": 116919, "compaction_time_cpu_micros": 59386, "output_level": 6, "num_output_files": 1, "total_output_size": 8565685, "num_input_records": 6281, "num_output_records": 5254, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402839294887, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402839298290, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.174946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.298507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.298513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.298515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.298517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:59 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:53:59.298519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:53:59 compute-0 ceph-mon[75050]: osdmap e214: 3 total, 3 up, 3 in
Nov 29 07:53:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:59.773 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:53:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:59.774 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:53:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:53:59.775 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:53:59 compute-0 nova_compute[256729]: 2025-11-29 07:53:59.849 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updating instance_info_cache with network_info: [{"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:53:59 compute-0 nova_compute[256729]: 2025-11-29 07:53:59.869 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-28704ae1-91ab-4ea5-99cd-c2ec5475f015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:53:59 compute-0 nova_compute[256729]: 2025-11-29 07:53:59.870 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:53:59 compute-0 nova_compute[256729]: 2025-11-29 07:53:59.870 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:53:59 compute-0 nova_compute[256729]: 2025-11-29 07:53:59.871 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2692976688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2692976688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:54:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570219736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:00 compute-0 ceph-mon[75050]: pgmap v1461: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 189 op/s
Nov 29 07:54:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2692976688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2692976688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3570219736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 30 KiB/s wr, 145 op/s
Nov 29 07:54:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 29 07:54:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 07:54:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 29 07:54:01 compute-0 nova_compute[256729]: 2025-11-29 07:54:01.850 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:01 compute-0 nova_compute[256729]: 2025-11-29 07:54:01.952 256736 DEBUG nova.compute.manager [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Received event network-changed-071be225-ecaa-4260-bc91-73f144657155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:54:01 compute-0 nova_compute[256729]: 2025-11-29 07:54:01.953 256736 DEBUG nova.compute.manager [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Refreshing instance network info cache due to event network-changed-071be225-ecaa-4260-bc91-73f144657155. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:54:01 compute-0 nova_compute[256729]: 2025-11-29 07:54:01.953 256736 DEBUG oslo_concurrency.lockutils [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:54:01 compute-0 nova_compute[256729]: 2025-11-29 07:54:01.954 256736 DEBUG oslo_concurrency.lockutils [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:54:01 compute-0 nova_compute[256729]: 2025-11-29 07:54:01.954 256736 DEBUG nova.network.neutron [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Refreshing network info cache for port 071be225-ecaa-4260-bc91-73f144657155 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:54:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571225279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1571225279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:02 compute-0 ceph-mon[75050]: pgmap v1462: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 30 KiB/s wr, 145 op/s
Nov 29 07:54:02 compute-0 ceph-mon[75050]: osdmap e215: 3 total, 3 up, 3 in
Nov 29 07:54:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1571225279' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1571225279' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 29 07:54:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 07:54:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 29 07:54:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.0 KiB/s wr, 232 op/s
Nov 29 07:54:03 compute-0 nova_compute[256729]: 2025-11-29 07:54:03.017 256736 DEBUG nova.network.neutron [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updated VIF entry in instance network info cache for port 071be225-ecaa-4260-bc91-73f144657155. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:54:03 compute-0 nova_compute[256729]: 2025-11-29 07:54:03.018 256736 DEBUG nova.network.neutron [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating instance_info_cache with network_info: [{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:54:03 compute-0 nova_compute[256729]: 2025-11-29 07:54:03.036 256736 DEBUG oslo_concurrency.lockutils [req-72f4777f-50c6-4953-84d6-20198f519c55 req-6d9d98f3-6577-4f3c-8a5a-7afa76428215 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:54:03 compute-0 sshd-session[276245]: Connection closed by authenticating user root 143.14.121.41 port 53818 [preauth]
Nov 29 07:54:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 29 07:54:03 compute-0 ceph-mon[75050]: osdmap e216: 3 total, 3 up, 3 in
Nov 29 07:54:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 07:54:03 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 29 07:54:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611667370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/611667370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:04 compute-0 nova_compute[256729]: 2025-11-29 07:54:04.240 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:04 compute-0 ceph-mon[75050]: pgmap v1465: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.0 KiB/s wr, 232 op/s
Nov 29 07:54:04 compute-0 ceph-mon[75050]: osdmap e217: 3 total, 3 up, 3 in
Nov 29 07:54:04 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/611667370' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:04 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/611667370' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 KiB/s wr, 129 op/s
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:54:05
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.data', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.log']
Nov 29 07:54:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:54:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 29 07:54:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 07:54:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 29 07:54:06 compute-0 ceph-mon[75050]: pgmap v1467: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 KiB/s wr, 129 op/s
Nov 29 07:54:06 compute-0 ceph-mon[75050]: osdmap e218: 3 total, 3 up, 3 in
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 KiB/s wr, 136 op/s
Nov 29 07:54:06 compute-0 nova_compute[256729]: 2025-11-29 07:54:06.852 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:06 compute-0 sshd-session[276247]: Connection closed by authenticating user root 143.14.121.41 port 32982 [preauth]
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:54:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:54:07 compute-0 ceph-mon[75050]: pgmap v1469: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 KiB/s wr, 136 op/s
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.070 256736 DEBUG oslo_concurrency.lockutils [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.070 256736 DEBUG oslo_concurrency.lockutils [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.091 256736 INFO nova.compute.manager [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Detaching volume cfb594ec-67c5-461f-a9db-5237721ba7ec
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.242 256736 INFO nova.virt.block_device [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Attempting to driver detach volume cfb594ec-67c5-461f-a9db-5237721ba7ec from mountpoint /dev/vdb
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.253 256736 DEBUG nova.virt.libvirt.driver [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Attempting to detach device vdb from instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.255 256736 DEBUG nova.virt.libvirt.guest [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-cfb594ec-67c5-461f-a9db-5237721ba7ec">
Nov 29 07:54:08 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   </source>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <serial>cfb594ec-67c5-461f-a9db-5237721ba7ec</serial>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]: </disk>
Nov 29 07:54:08 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.265 256736 INFO nova.virt.libvirt.driver [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully detached device vdb from instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015 from the persistent domain config.
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.265 256736 DEBUG nova.virt.libvirt.driver [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.266 256736 DEBUG nova.virt.libvirt.guest [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-cfb594ec-67c5-461f-a9db-5237721ba7ec">
Nov 29 07:54:08 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   </source>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <serial>cfb594ec-67c5-461f-a9db-5237721ba7ec</serial>
Nov 29 07:54:08 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:54:08 compute-0 nova_compute[256729]: </disk>
Nov 29 07:54:08 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.381 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764402848.3812938, 28704ae1-91ab-4ea5-99cd-c2ec5475f015 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.384 256736 DEBUG nova.virt.libvirt.driver [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.386 256736 INFO nova.virt.libvirt.driver [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully detached device vdb from instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015 from the live domain config.
Nov 29 07:54:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523302054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523302054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.649 256736 DEBUG nova.objects.instance [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'flavor' on Instance uuid 28704ae1-91ab-4ea5-99cd-c2ec5475f015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:08 compute-0 nova_compute[256729]: 2025-11-29 07:54:08.692 256736 DEBUG oslo_concurrency.lockutils [None req-cb09e4c3-1a57-492b-91a2-d0a32b6cde5b 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.0 KiB/s wr, 142 op/s
Nov 29 07:54:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3523302054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3523302054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.172 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.174 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.174 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.175 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.175 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
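Editor's note: the Acquiring/acquired/released triplets are oslo.concurrency's named locks. Everything that mutates one instance serializes on a lock named after its UUID, and the pending-event table uses a second "<uuid>-events" lock. The same pattern in miniature (lock names taken from the log):

    from oslo_concurrency import lockutils

    INSTANCE = '28704ae1-91ab-4ea5-99cd-c2ec5475f015'

    # context-manager form, as in do_terminate_instance
    with lockutils.lock(INSTANCE):
        pass  # terminate logic runs here, one thread at a time

    # decorator form, as in _clear_events
    @lockutils.synchronized(INSTANCE + '-events')
    def clear_events():
        return {}  # swap out the pending-events dict atomically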
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.177 256736 INFO nova.compute.manager [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Terminating instance
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.179 256736 DEBUG nova.compute.manager [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.245 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 07:54:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 29 07:54:09 compute-0 kernel: tap1f274eee-58 (unregistering): left promiscuous mode
Nov 29 07:54:09 compute-0 NetworkManager[48962]: <info>  [1764402849.6850] device (tap1f274eee-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:54:09 compute-0 ovn_controller[153383]: 2025-11-29T07:54:09Z|00094|binding|INFO|Releasing lport 1f274eee-58f6-4dd7-94e0-15819552d2c0 from this chassis (sb_readonly=0)
Nov 29 07:54:09 compute-0 ovn_controller[153383]: 2025-11-29T07:54:09Z|00095|binding|INFO|Setting lport 1f274eee-58f6-4dd7-94e0-15819552d2c0 down in Southbound
Nov 29 07:54:09 compute-0 ovn_controller[153383]: 2025-11-29T07:54:09Z|00096|binding|INFO|Removing iface tap1f274eee-58 ovn-installed in OVS
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.702 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.703 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:09.709 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:01:7f 10.100.0.4'], port_security=['fa:16:3e:8f:01:7f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '28704ae1-91ab-4ea5-99cd-c2ec5475f015', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8117debb786c4549812cc6e7571f6d4d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac3c0b20-8827-4bae-b233-9118cf035682', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b45dfb6d-5934-4acb-b62b-b7104c4a665d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=1f274eee-58f6-4dd7-94e0-15819552d2c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:54:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:09.711 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 1f274eee-58f6-4dd7-94e0-15819552d2c0 in datapath a24c1904-53b2-4346-8806-9a1bad79dd5c unbound from our chassis
Nov 29 07:54:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:09.712 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a24c1904-53b2-4346-8806-9a1bad79dd5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
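Editor's note: the PortBindingUpdatedEvent match above is ovsdbapp's row-event mechanism. The metadata agent registers event classes against the Southbound Port_Binding table and is called back when a matching row changes, which is how it notices the port flipping to up=[False] with its chassis cleared. A rough sketch of such an event class, assuming an already-connected ovsdbapp IDL; the class body is illustrative, not neutron's exact implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundWatcher(row_event.RowEvent):
        """Fire when a Port_Binding row loses its chassis."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # `old` carries only the changed columns; the chassis emptying
            # out is the transition logged above (up=[True] -> up=[False]).
            return hasattr(old, 'chassis') and not row.chassis

        def run(self, event, row, old):
            print('port %s unbound from this chassis' % row.logical_port)

    # registration, given `idl` from an ovsdbapp connection:
    #     idl.notify_handler.watch_event(PortUnboundWatcher())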
Nov 29 07:54:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:09.714 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[69d4c1d1-8666-41a3-b1a6-340feecf9ad8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:09.714 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c namespace which is not needed anymore
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.723 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 29 07:54:09 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 17.613s CPU time.
Nov 29 07:54:09 compute-0 systemd-machined[217781]: Machine qemu-6-instance-00000006 terminated.
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.812 256736 INFO nova.virt.libvirt.driver [-] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Instance destroyed successfully.
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.813 256736 DEBUG nova.objects.instance [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lazy-loading 'resources' on Instance uuid 28704ae1-91ab-4ea5-99cd-c2ec5475f015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.830 256736 DEBUG nova.virt.libvirt.vif [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:52:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-861192464',display_name='tempest-VolumesBackupsTest-instance-861192464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-861192464',id=6,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPgC7F+GuAVv7pee5KUxxh7J9VHVUJK4LOVvzjaFnO1uxjktB3qZYM2R8ZzJHE1gAojvS7nudW3izVdrZ1YPNIOapaXoiQj3zOzSQuXjFqRwT3xc4gfY2+/Hmzvl0JplTA==',key_name='tempest-keypair-121257784',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8117debb786c4549812cc6e7571f6d4d',ramdisk_id='',reservation_id='r-u0j1smmb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-12225578',owner_user_name='tempest-VolumesBackupsTest-12225578-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6bef1230e3de4a87aa01df74ec671a23',uuid=28704ae1-91ab-4ea5-99cd-c2ec5475f015,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.831 256736 DEBUG nova.network.os_vif_util [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converting VIF {"id": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "address": "fa:16:3e:8f:01:7f", "network": {"id": "a24c1904-53b2-4346-8806-9a1bad79dd5c", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-163036087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8117debb786c4549812cc6e7571f6d4d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f274eee-58", "ovs_interfaceid": "1f274eee-58f6-4dd7-94e0-15819552d2c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.832 256736 DEBUG nova.network.os_vif_util [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:01:7f,bridge_name='br-int',has_traffic_filtering=True,id=1f274eee-58f6-4dd7-94e0-15819552d2c0,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f274eee-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.833 256736 DEBUG os_vif [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:01:7f,bridge_name='br-int',has_traffic_filtering=True,id=1f274eee-58f6-4dd7-94e0-15819552d2c0,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f274eee-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
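Editor's note: the "Converting VIF" / "Converted object" pair shows Nova translating its JSON VIF into an os-vif VIFOpenVSwitch object before handing it to the os-vif library. Driving os-vif directly looks roughly like this (field values taken from the log; treat the object wiring as a sketch, since Nova normally performs this conversion itself):

    import os_vif
    from os_vif.objects import instance_info, network, vif as vif_obj

    os_vif.initialize()

    net = network.Network(id='a24c1904-53b2-4346-8806-9a1bad79dd5c',
                          bridge='br-int')
    port = vif_obj.VIFOpenVSwitch(
        id='1f274eee-58f6-4dd7-94e0-15819552d2c0',
        address='fa:16:3e:8f:01:7f',
        network=net,
        vif_name='tap1f274eee-58',
        plugin='ovs',
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='1f274eee-58f6-4dd7-94e0-15819552d2c0'))
    inst = instance_info.InstanceInfo(
        uuid='28704ae1-91ab-4ea5-99cd-c2ec5475f015',
        name='instance-00000006')

    os_vif.unplug(port, inst)   # emits "Successfully unplugged vif ..."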
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.835 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.835 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f274eee-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.837 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.839 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.841 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:09 compute-0 nova_compute[256729]: 2025-11-29 07:54:09.844 256736 INFO os_vif [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:01:7f,bridge_name='br-int',has_traffic_filtering=True,id=1f274eee-58f6-4dd7-94e0-15819552d2c0,network=Network(a24c1904-53b2-4346-8806-9a1bad79dd5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f274eee-58')
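Editor's note: the DelPortCommand transaction above is how os-vif's ovs plugin removes the tap from br-int: one ovsdbapp command committed against the local ovsdb. Issuing the same operation standalone, assuming the standard unix socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # equivalent of DelPortCommand(port=tap1f274eee-58, bridge=br-int,
    #                              if_exists=True) from the log
    api.del_port('tap1f274eee-58', bridge='br-int',
                 if_exists=True).execute(check_error=True)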
Nov 29 07:54:09 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [NOTICE]   (274667) : haproxy version is 2.8.14-c23fe91
Nov 29 07:54:09 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [NOTICE]   (274667) : path to executable is /usr/sbin/haproxy
Nov 29 07:54:09 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [WARNING]  (274667) : Exiting Master process...
Nov 29 07:54:09 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [WARNING]  (274667) : Exiting Master process...
Nov 29 07:54:09 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [ALERT]    (274667) : Current worker (274669) exited with code 143 (Terminated)
Nov 29 07:54:09 compute-0 neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c[274663]: [WARNING]  (274667) : All workers exited. Exiting... (0)
Nov 29 07:54:09 compute-0 systemd[1]: libpod-386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541.scope: Deactivated successfully.
Nov 29 07:54:09 compute-0 podman[276283]: 2025-11-29 07:54:09.900239646 +0000 UTC m=+0.075800249 container died 386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 07:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541-userdata-shm.mount: Deactivated successfully.
Nov 29 07:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eaf912f6665db441d6f00dd0d05490513669784fb88f84adf237d95072c56f0-merged.mount: Deactivated successfully.
Nov 29 07:54:09 compute-0 podman[276283]: 2025-11-29 07:54:09.935565781 +0000 UTC m=+0.111126384 container cleanup 386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:54:09 compute-0 systemd[1]: libpod-conmon-386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541.scope: Deactivated successfully.
Nov 29 07:54:10 compute-0 podman[276336]: 2025-11-29 07:54:10.002103432 +0000 UTC m=+0.043408302 container remove 386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.019 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b913bbf9-864b-4a53-bfd5-421d23b4e713]: (4, ('Sat Nov 29 07:54:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c (386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541)\n386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541\nSat Nov 29 07:54:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c (386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541)\n386d22addcd02894681fd3722e810e36f9b928c6e01a2dc05a26f305c030b541\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.022 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[00f63588-53ed-4eea-b925-0bdfb729f9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
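Editor's note: the privsep reply above echoes the metadata agent's wrapper script stopping and deleting the per-network haproxy container. Reduced to the podman CLI it ultimately drives (container name from the log; the wrapper script itself is deployment-specific, so this shows only the observable effect):

    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c'
    subprocess.run(['podman', 'stop', NAME], check=True)  # worker exits 143 (SIGTERM)
    subprocess.run(['podman', 'rm', NAME], check=True)    # the "container remove" event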
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.024 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa24c1904-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.026 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:10 compute-0 kernel: tapa24c1904-50: left promiscuous mode
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.028 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.034 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0c6aee-a75f-4e01-88d0-48b2d7806ef8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.048 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.057 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5952d802-3c77-460a-a96b-60b2c8b75c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.058 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[db419fef-3c28-4808-8e58-35ce826c3c95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.074 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4f9219-71f2-4b54-ab48-5de9822bc9c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509929, 'reachable_time': 24002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276352, 'error': None, 'target': 'ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:10 compute-0 systemd[1]: run-netns-ovnmeta\x2da24c1904\x2d53b2\x2d4346\x2d8806\x2d9a1bad79dd5c.mount: Deactivated successfully.
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.080 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:54:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:10.080 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[401d08dd-8b68-46d6-99d4-4807d5793a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
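Editor's note: namespace teardown bottoms out in neutron's privileged ip_lib, which deletes the netns via pyroute2 once no VIF ports remain on the network. Outside of privsep the same operation is a single call (namespace name from the log):

    from pyroute2 import netns

    # requires root / CAP_SYS_ADMIN, which is what privsep provides here
    netns.remove('ovnmeta-a24c1904-53b2-4346-8806-9a1bad79dd5c')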
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.293 256736 INFO nova.virt.libvirt.driver [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Deleting instance files /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015_del
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.295 256736 INFO nova.virt.libvirt.driver [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Deletion of /var/lib/nova/instances/28704ae1-91ab-4ea5-99cd-c2ec5475f015_del complete
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.392 256736 INFO nova.compute.manager [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Took 1.21 seconds to destroy the instance on the hypervisor.
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.393 256736 DEBUG oslo.service.loopingcall [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.394 256736 DEBUG nova.compute.manager [-] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.394 256736 DEBUG nova.network.neutron [-] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:54:10 compute-0 ceph-mon[75050]: pgmap v1470: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.0 KiB/s wr, 142 op/s
Nov 29 07:54:10 compute-0 ceph-mon[75050]: osdmap e219: 3 total, 3 up, 3 in
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.661 256736 DEBUG nova.compute.manager [req-5654edbf-d43e-464e-943d-6d3bf8ca2394 req-5b2d7f47-0e62-4240-9ed7-e990fda37c20 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-vif-unplugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.662 256736 DEBUG oslo_concurrency.lockutils [req-5654edbf-d43e-464e-943d-6d3bf8ca2394 req-5b2d7f47-0e62-4240-9ed7-e990fda37c20 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.662 256736 DEBUG oslo_concurrency.lockutils [req-5654edbf-d43e-464e-943d-6d3bf8ca2394 req-5b2d7f47-0e62-4240-9ed7-e990fda37c20 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.662 256736 DEBUG oslo_concurrency.lockutils [req-5654edbf-d43e-464e-943d-6d3bf8ca2394 req-5b2d7f47-0e62-4240-9ed7-e990fda37c20 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.662 256736 DEBUG nova.compute.manager [req-5654edbf-d43e-464e-943d-6d3bf8ca2394 req-5b2d7f47-0e62-4240-9ed7-e990fda37c20 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] No waiting events found dispatching network-vif-unplugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:54:10 compute-0 nova_compute[256729]: 2025-11-29 07:54:10.663 256736 DEBUG nova.compute.manager [req-5654edbf-d43e-464e-943d-6d3bf8ca2394 req-5b2d7f47-0e62-4240-9ed7-e990fda37c20 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-vif-unplugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
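Editor's note: the network-vif-unplugged sequence shows Nova's external-event plumbing: neutron posts the event through the API, and the compute manager pops any waiter registered under that event name; here nothing is waiting, so the event is only logged and handled. The structure behind "No waiting events found" is essentially a per-instance dict of threading events, along the lines of this illustrative reduction (not Nova's actual code):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._events = defaultdict(dict)   # instance uuid -> {name: Event}
            self._lock = threading.Lock()      # the "<uuid>-events" lock above

        def prepare(self, uuid, name):
            with self._lock:
                ev = self._events[uuid][name] = threading.Event()
            return ev

        def pop(self, uuid, name):
            with self._lock:
                return self._events[uuid].pop(name, None)

    events = InstanceEvents()
    waiter = events.pop(
        '28704ae1-91ab-4ea5-99cd-c2ec5475f015',
        'network-vif-unplugged-1f274eee-58f6-4dd7-94e0-15819552d2c0')
    if waiter is None:
        # the "No waiting events found" branch: just hand off to the handler
        print('no waiter registered; dispatching to the event handler')
    else:
        waiter.set()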
Nov 29 07:54:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.2 KiB/s wr, 96 op/s
Nov 29 07:54:11 compute-0 sshd-session[276249]: Connection closed by authenticating user root 143.14.121.41 port 32990 [preauth]
Nov 29 07:54:11 compute-0 nova_compute[256729]: 2025-11-29 07:54:11.853 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:11 compute-0 nova_compute[256729]: 2025-11-29 07:54:11.996 256736 DEBUG nova.network.neutron [-] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.020 256736 INFO nova.compute.manager [-] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Took 1.63 seconds to deallocate network for instance.
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.074 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.075 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.106 256736 DEBUG nova.compute.manager [req-30dd76c7-e1d6-4ad9-ba4f-9fcce9704ebe req-0f314d9f-f383-4bb2-a1d1-de25ba3d3550 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-vif-deleted-1f274eee-58f6-4dd7-94e0-15819552d2c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.157 256736 DEBUG oslo_concurrency.processutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.354 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1606475403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:12 compute-0 ceph-mon[75050]: pgmap v1472: 305 pgs: 305 active+clean; 167 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.2 KiB/s wr, 96 op/s
Nov 29 07:54:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1606475403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.625 256736 DEBUG oslo_concurrency.processutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
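Editor's note: Nova's RBD backend sizes its DISK_GB inventory by shelling out to `ceph df`, as seen above, via oslo.concurrency's processutils. A minimal sketch of the same call and a pool-stats lookup (client id and conf path from the log; the JSON keys are standard `ceph df --format=json` output):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # cluster-wide free space feeds the DISK_GB inventory reported below
    print('%.0f GiB avail' % (stats['stats']['total_avail_bytes'] / 2**30))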
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.631 256736 DEBUG nova.compute.provider_tree [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.646 256736 DEBUG nova.scheduler.client.report [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.669 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.695 256736 INFO nova.scheduler.client.report [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Deleted allocations for instance 28704ae1-91ab-4ea5-99cd-c2ec5475f015
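Editor's note: the inventory snapshot above is what placement schedules against; effective capacity per resource class is (total - reserved) * allocation_ratio. Worked directly from the logged values:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2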
Nov 29 07:54:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 125 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 1.3 MiB/s wr, 164 op/s
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.748 256736 DEBUG nova.compute.manager [req-05f86ee8-4250-4f0e-b4a4-0313ebc4372a req-eb937689-87e4-4374-9fc5-1a0b447c313a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received event network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.748 256736 DEBUG oslo_concurrency.lockutils [req-05f86ee8-4250-4f0e-b4a4-0313ebc4372a req-eb937689-87e4-4374-9fc5-1a0b447c313a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.749 256736 DEBUG oslo_concurrency.lockutils [req-05f86ee8-4250-4f0e-b4a4-0313ebc4372a req-eb937689-87e4-4374-9fc5-1a0b447c313a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.749 256736 DEBUG oslo_concurrency.lockutils [req-05f86ee8-4250-4f0e-b4a4-0313ebc4372a req-eb937689-87e4-4374-9fc5-1a0b447c313a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.749 256736 DEBUG nova.compute.manager [req-05f86ee8-4250-4f0e-b4a4-0313ebc4372a req-eb937689-87e4-4374-9fc5-1a0b447c313a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] No waiting events found dispatching network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.749 256736 WARNING nova.compute.manager [req-05f86ee8-4250-4f0e-b4a4-0313ebc4372a req-eb937689-87e4-4374-9fc5-1a0b447c313a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Received unexpected event network-vif-plugged-1f274eee-58f6-4dd7-94e0-15819552d2c0 for instance with vm_state deleted and task_state None.
Nov 29 07:54:12 compute-0 nova_compute[256729]: 2025-11-29 07:54:12.766 256736 DEBUG oslo_concurrency.lockutils [None req-065c27a8-c783-4b9a-bde3-caf81e9874a4 6bef1230e3de4a87aa01df74ec671a23 8117debb786c4549812cc6e7571f6d4d - - default default] Lock "28704ae1-91ab-4ea5-99cd-c2ec5475f015" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:12 compute-0 ovn_controller[153383]: 2025-11-29T07:54:12Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:b6:df 10.100.0.11
Nov 29 07:54:12 compute-0 ovn_controller[153383]: 2025-11-29T07:54:12Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:b6:df 10.100.0.11
Nov 29 07:54:14 compute-0 sshd-session[276355]: Connection closed by authenticating user root 143.14.121.41 port 49602 [preauth]
Nov 29 07:54:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:14 compute-0 ceph-mon[75050]: pgmap v1473: 305 pgs: 305 active+clean; 125 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 381 KiB/s rd, 1.3 MiB/s wr, 164 op/s
Nov 29 07:54:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 108 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 157 op/s
Nov 29 07:54:14 compute-0 nova_compute[256729]: 2025-11-29 07:54:14.837 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006596900029302599 of space, bias 1.0, pg target 0.19790700087907795 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.902699094875301e-07 of space, bias 1.0, pg target 0.00026708097284625906 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:54:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
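Editor's note: the autoscaler's pg target is reproducible from the logged numbers as usage_ratio * num_osds * mon_target_pg_per_osd * bias (3 OSDs and the default of 100 PGs per OSD fit every line above), then quantized to a power of two. A sketch that checks out against the log; the exact floor that keeps tiny targets at 32 is applied per pool by the autoscaler and is assumed here, not derived:

    def pg_target(usage_ratio, bias, num_osds=3, target_pg_per_osd=100):
        # e.g. 0.0006596900029302599 * 3 * 100 = 0.19790700087907797 ('vms')
        return usage_ratio * num_osds * target_pg_per_osd * bias

    def quantize(target):
        # round up to a power of two; the autoscaler additionally applies
        # per-pool minimums, presumably why tiny targets still show 32 above
        p = 1
        while p < target:
            p *= 2
        return p

    print(pg_target(0.0006596900029302599, 1.0))   # ~0.1979  ('vms')
    print(pg_target(5.087256625643029e-07, 4.0))   # ~0.00061 ('cephfs.cephfs.meta')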
Nov 29 07:54:15 compute-0 ceph-mon[75050]: pgmap v1474: 305 pgs: 305 active+clean; 108 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 157 op/s
Nov 29 07:54:15 compute-0 nova_compute[256729]: 2025-11-29 07:54:15.902 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 29 07:54:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 114 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Nov 29 07:54:16 compute-0 sudo[276380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:16 compute-0 sudo[276380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:16 compute-0 sudo[276380]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:16 compute-0 nova_compute[256729]: 2025-11-29 07:54:16.855 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:16 compute-0 sudo[276405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:16 compute-0 sudo[276405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:16 compute-0 sudo[276405]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:16 compute-0 sudo[276430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:16 compute-0 sudo[276430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:16 compute-0 sudo[276430]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 07:54:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 29 07:54:17 compute-0 sudo[276455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:54:17 compute-0 sudo[276455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:17 compute-0 sudo[276455]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:54:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:54:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:54:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:54:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:54:17 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3a30e9a3-4da6-4298-8b15-5a3641d674d8 does not exist
Nov 29 07:54:17 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev fb4fa5b4-2cab-410f-803b-1f2b9b7f1f03 does not exist
Nov 29 07:54:17 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev af3bbf5e-3f11-40e9-bdd5-54cdea94f796 does not exist
Nov 29 07:54:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:54:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:54:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:54:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:54:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:54:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
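Editor's note: every handle_command line above is a structured mon command arriving as JSON over RADOS, whether from the mgr (here) or from nova/cinder earlier. The same commands can be issued from Python with the rados binding (conf path and client id as used elsewhere in this log):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()

    cmd = json.dumps({"prefix": "df", "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b'')   # shows up as handle_command
    assert ret == 0, errs
    print(json.loads(out)['stats']['total_avail_bytes'])
    cluster.shutdown()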
Nov 29 07:54:17 compute-0 sudo[276511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:17 compute-0 sudo[276511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:17 compute-0 sudo[276511]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:17 compute-0 sudo[276536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:17 compute-0 sudo[276536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:17 compute-0 sudo[276536]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:17 compute-0 sudo[276561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:17 compute-0 sudo[276561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:17 compute-0 sudo[276561]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:18 compute-0 ceph-mon[75050]: pgmap v1475: 305 pgs: 305 active+clean; 114 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Nov 29 07:54:18 compute-0 ceph-mon[75050]: osdmap e220: 3 total, 3 up, 3 in
Nov 29 07:54:18 compute-0 sudo[276586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:54:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:54:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:54:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:54:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:54:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:54:18 compute-0 sudo[276586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 29 07:54:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 07:54:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 29 07:54:18 compute-0 sshd-session[276378]: Connection closed by authenticating user root 143.14.121.41 port 49618 [preauth]
Nov 29 07:54:18 compute-0 podman[276652]: 2025-11-29 07:54:18.395785985 +0000 UTC m=+0.051691214 container create bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:54:18 compute-0 systemd[1]: Started libpod-conmon-bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348.scope.
Nov 29 07:54:18 compute-0 podman[276652]: 2025-11-29 07:54:18.368295309 +0000 UTC m=+0.024200538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:18 compute-0 podman[276652]: 2025-11-29 07:54:18.497195248 +0000 UTC m=+0.153100477 container init bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:54:18 compute-0 podman[276652]: 2025-11-29 07:54:18.508590984 +0000 UTC m=+0.164496223 container start bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:54:18 compute-0 podman[276652]: 2025-11-29 07:54:18.512883119 +0000 UTC m=+0.168788328 container attach bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:54:18 compute-0 boring_lamarr[276669]: 167 167
Nov 29 07:54:18 compute-0 systemd[1]: libpod-bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348.scope: Deactivated successfully.
Nov 29 07:54:18 compute-0 podman[276652]: 2025-11-29 07:54:18.518890009 +0000 UTC m=+0.174795218 container died bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-316f94ab2e5eb5f38ca2bcc00e94caa384ff0a7ca10d454d9e32d0e79ec87a6d-merged.mount: Deactivated successfully.
Nov 29 07:54:18 compute-0 podman[276652]: 2025-11-29 07:54:18.564637074 +0000 UTC m=+0.220542303 container remove bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 07:54:18 compute-0 systemd[1]: libpod-conmon-bc83fc7cac7d5c844c593b49fcd297bab4b50f3b71718c5b69efd1b412123348.scope: Deactivated successfully.
Nov 29 07:54:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 566 KiB/s rd, 3.2 MiB/s wr, 168 op/s
Nov 29 07:54:18 compute-0 podman[276694]: 2025-11-29 07:54:18.821677152 +0000 UTC m=+0.072507552 container create 57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:54:18 compute-0 systemd[1]: Started libpod-conmon-57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0.scope.
Nov 29 07:54:18 compute-0 podman[276694]: 2025-11-29 07:54:18.791153095 +0000 UTC m=+0.041983545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73f0ccb9001c6f8b600222f421193bcbdc7935a122b4fbd48beffdcf782ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73f0ccb9001c6f8b600222f421193bcbdc7935a122b4fbd48beffdcf782ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73f0ccb9001c6f8b600222f421193bcbdc7935a122b4fbd48beffdcf782ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73f0ccb9001c6f8b600222f421193bcbdc7935a122b4fbd48beffdcf782ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73f0ccb9001c6f8b600222f421193bcbdc7935a122b4fbd48beffdcf782ff9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:18 compute-0 podman[276694]: 2025-11-29 07:54:18.931335806 +0000 UTC m=+0.182166246 container init 57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hopper, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:54:18 compute-0 podman[276694]: 2025-11-29 07:54:18.942608178 +0000 UTC m=+0.193438558 container start 57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 07:54:18 compute-0 podman[276694]: 2025-11-29 07:54:18.948323102 +0000 UTC m=+0.199153502 container attach 57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:54:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 29 07:54:19 compute-0 ceph-mon[75050]: osdmap e221: 3 total, 3 up, 3 in
Nov 29 07:54:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 07:54:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 29 07:54:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310565814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310565814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:19 compute-0 nova_compute[256729]: 2025-11-29 07:54:19.839 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:54:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2073099126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:20 compute-0 nostalgic_hopper[276711]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:54:20 compute-0 nostalgic_hopper[276711]: --> relative data size: 1.0
Nov 29 07:54:20 compute-0 nostalgic_hopper[276711]: --> All data devices are unavailable
Nov 29 07:54:20 compute-0 systemd[1]: libpod-57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0.scope: Deactivated successfully.
Nov 29 07:54:20 compute-0 systemd[1]: libpod-57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0.scope: Consumed 1.144s CPU time.
Nov 29 07:54:20 compute-0 podman[276741]: 2025-11-29 07:54:20.219568069 +0000 UTC m=+0.037071133 container died 57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:54:20 compute-0 ceph-mon[75050]: pgmap v1478: 305 pgs: 305 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 566 KiB/s rd, 3.2 MiB/s wr, 168 op/s
Nov 29 07:54:20 compute-0 ceph-mon[75050]: osdmap e222: 3 total, 3 up, 3 in
Nov 29 07:54:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1310565814' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1310565814' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2073099126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd73f0ccb9001c6f8b600222f421193bcbdc7935a122b4fbd48beffdcf782ff9-merged.mount: Deactivated successfully.
Nov 29 07:54:20 compute-0 podman[276741]: 2025-11-29 07:54:20.290138397 +0000 UTC m=+0.107641401 container remove 57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hopper, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:54:20 compute-0 systemd[1]: libpod-conmon-57fd865818cfb531e84728375a56a2b08fa0e7fa4682ac1aecbb8707e8e354c0.scope: Deactivated successfully.
Nov 29 07:54:20 compute-0 sudo[276586]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:20 compute-0 sudo[276756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:20 compute-0 sudo[276756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:20 compute-0 sudo[276756]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:20 compute-0 sudo[276781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:20 compute-0 sudo[276781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:20 compute-0 sudo[276781]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:20 compute-0 sudo[276806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:20 compute-0 sudo[276806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:20 compute-0 sudo[276806]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:20 compute-0 sudo[276831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:54:20 compute-0 sudo[276831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 1.1 MiB/s wr, 102 op/s
Nov 29 07:54:21 compute-0 podman[276893]: 2025-11-29 07:54:21.023238185 +0000 UTC m=+0.056444211 container create 3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:54:21 compute-0 systemd[1]: Started libpod-conmon-3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace.scope.
Nov 29 07:54:21 compute-0 podman[276893]: 2025-11-29 07:54:20.990382686 +0000 UTC m=+0.023588732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:21 compute-0 podman[276893]: 2025-11-29 07:54:21.147024687 +0000 UTC m=+0.180230723 container init 3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:54:21 compute-0 podman[276908]: 2025-11-29 07:54:21.146405321 +0000 UTC m=+0.079077577 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 07:54:21 compute-0 podman[276907]: 2025-11-29 07:54:21.148492567 +0000 UTC m=+0.085921340 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 07:54:21 compute-0 podman[276893]: 2025-11-29 07:54:21.155088523 +0000 UTC m=+0.188294549 container start 3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:54:21 compute-0 interesting_elion[276935]: 167 167
Nov 29 07:54:21 compute-0 systemd[1]: libpod-3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace.scope: Deactivated successfully.
Nov 29 07:54:21 compute-0 podman[276893]: 2025-11-29 07:54:21.433319629 +0000 UTC m=+0.466525745 container attach 3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:54:21 compute-0 podman[276893]: 2025-11-29 07:54:21.433936435 +0000 UTC m=+0.467142511 container died 3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 07:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-78960af7a18df926f31d677062036468cf3ef91ae46c97b3a5a081f0f982d540-merged.mount: Deactivated successfully.
Nov 29 07:54:21 compute-0 podman[276906]: 2025-11-29 07:54:21.597940433 +0000 UTC m=+0.530351652 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:54:21 compute-0 podman[276893]: 2025-11-29 07:54:21.611933328 +0000 UTC m=+0.645139354 container remove 3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_elion, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:54:21 compute-0 systemd[1]: libpod-conmon-3370956f43ec93a469658914ece208d1c9cab97826ee90e25fa0da6c5fe01ace.scope: Deactivated successfully.
Nov 29 07:54:21 compute-0 podman[276998]: 2025-11-29 07:54:21.82686644 +0000 UTC m=+0.058229140 container create a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 07:54:21 compute-0 nova_compute[256729]: 2025-11-29 07:54:21.857 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:21 compute-0 systemd[1]: Started libpod-conmon-a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d.scope.
Nov 29 07:54:21 compute-0 podman[276998]: 2025-11-29 07:54:21.797872004 +0000 UTC m=+0.029234684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3fa7637cdf15caedc3e4449419c8fbdfb41588e7c47b1f75664fa83e50e9c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3fa7637cdf15caedc3e4449419c8fbdfb41588e7c47b1f75664fa83e50e9c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3fa7637cdf15caedc3e4449419c8fbdfb41588e7c47b1f75664fa83e50e9c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3fa7637cdf15caedc3e4449419c8fbdfb41588e7c47b1f75664fa83e50e9c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:21 compute-0 podman[276998]: 2025-11-29 07:54:21.982509445 +0000 UTC m=+0.213872165 container init a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:54:21 compute-0 podman[276998]: 2025-11-29 07:54:21.990237191 +0000 UTC m=+0.221599891 container start a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:54:21 compute-0 podman[276998]: 2025-11-29 07:54:21.999305284 +0000 UTC m=+0.230667944 container attach a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:54:22 compute-0 ceph-mon[75050]: pgmap v1480: 305 pgs: 305 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 1.1 MiB/s wr, 102 op/s
Nov 29 07:54:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 197 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 13 MiB/s wr, 127 op/s
Nov 29 07:54:22 compute-0 condescending_moser[277013]: {
Nov 29 07:54:22 compute-0 condescending_moser[277013]:     "0": [
Nov 29 07:54:22 compute-0 condescending_moser[277013]:         {
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "devices": [
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "/dev/loop3"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             ],
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_name": "ceph_lv0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_size": "21470642176",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "name": "ceph_lv0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "tags": {
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cluster_name": "ceph",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.crush_device_class": "",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.encrypted": "0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osd_id": "0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.type": "block",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.vdo": "0"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             },
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "type": "block",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "vg_name": "ceph_vg0"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:         }
Nov 29 07:54:22 compute-0 condescending_moser[277013]:     ],
Nov 29 07:54:22 compute-0 condescending_moser[277013]:     "1": [
Nov 29 07:54:22 compute-0 condescending_moser[277013]:         {
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "devices": [
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "/dev/loop4"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             ],
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_name": "ceph_lv1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_size": "21470642176",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "name": "ceph_lv1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "tags": {
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cluster_name": "ceph",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.crush_device_class": "",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.encrypted": "0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osd_id": "1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.type": "block",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.vdo": "0"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             },
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "type": "block",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "vg_name": "ceph_vg1"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:         }
Nov 29 07:54:22 compute-0 condescending_moser[277013]:     ],
Nov 29 07:54:22 compute-0 condescending_moser[277013]:     "2": [
Nov 29 07:54:22 compute-0 condescending_moser[277013]:         {
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "devices": [
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "/dev/loop5"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             ],
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_name": "ceph_lv2",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_size": "21470642176",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "name": "ceph_lv2",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "tags": {
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.cluster_name": "ceph",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.crush_device_class": "",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.encrypted": "0",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osd_id": "2",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.type": "block",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:                 "ceph.vdo": "0"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             },
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "type": "block",
Nov 29 07:54:22 compute-0 condescending_moser[277013]:             "vg_name": "ceph_vg2"
Nov 29 07:54:22 compute-0 condescending_moser[277013]:         }
Nov 29 07:54:22 compute-0 condescending_moser[277013]:     ]
Nov 29 07:54:22 compute-0 condescending_moser[277013]: }
Nov 29 07:54:22 compute-0 systemd[1]: libpod-a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d.scope: Deactivated successfully.
Nov 29 07:54:22 compute-0 podman[276998]: 2025-11-29 07:54:22.869016097 +0000 UTC m=+1.100378787 container died a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 07:54:22 compute-0 sshd-session[276686]: Connection closed by authenticating user root 143.14.121.41 port 49624 [preauth]
Nov 29 07:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3fa7637cdf15caedc3e4449419c8fbdfb41588e7c47b1f75664fa83e50e9c5-merged.mount: Deactivated successfully.
Nov 29 07:54:23 compute-0 podman[276998]: 2025-11-29 07:54:23.010058501 +0000 UTC m=+1.241421161 container remove a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:54:23 compute-0 systemd[1]: libpod-conmon-a7c1238e2b56d9745e4068358ecfdc48c88b43a760482056dd71547034feaf5d.scope: Deactivated successfully.
Nov 29 07:54:23 compute-0 sudo[276831]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:23 compute-0 sudo[277035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:23 compute-0 sudo[277035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:23 compute-0 sudo[277035]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:23 compute-0 sudo[277060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:54:23 compute-0 sudo[277060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:23 compute-0 sudo[277060]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:23 compute-0 sudo[277087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:23 compute-0 sudo[277087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:23 compute-0 sudo[277087]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:23 compute-0 sudo[277112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:54:23 compute-0 sudo[277112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:23 compute-0 podman[277178]: 2025-11-29 07:54:23.906266884 +0000 UTC m=+0.116788076 container create b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:54:23 compute-0 podman[277178]: 2025-11-29 07:54:23.82765251 +0000 UTC m=+0.038173802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:24 compute-0 systemd[1]: Started libpod-conmon-b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769.scope.
Nov 29 07:54:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:24 compute-0 podman[277178]: 2025-11-29 07:54:24.063388138 +0000 UTC m=+0.273909370 container init b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 07:54:24 compute-0 podman[277178]: 2025-11-29 07:54:24.079385277 +0000 UTC m=+0.289906509 container start b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:54:24 compute-0 trusting_gould[277194]: 167 167
Nov 29 07:54:24 compute-0 systemd[1]: libpod-b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769.scope: Deactivated successfully.
Nov 29 07:54:24 compute-0 podman[277178]: 2025-11-29 07:54:24.113657544 +0000 UTC m=+0.324178756 container attach b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:54:24 compute-0 podman[277178]: 2025-11-29 07:54:24.115098583 +0000 UTC m=+0.325619835 container died b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e30ecf663ad84f272fe256cfbd60390d871bfd53fc0fa5fc4c3f3ccaf7ba8b4-merged.mount: Deactivated successfully.
Nov 29 07:54:24 compute-0 podman[277178]: 2025-11-29 07:54:24.290482425 +0000 UTC m=+0.501003657 container remove b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gould, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:54:24 compute-0 systemd[1]: libpod-conmon-b0596c727249a49363047c809c32af22eb6fac37622a71f4843fbc18f70d3769.scope: Deactivated successfully.
Nov 29 07:54:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 29 07:54:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 07:54:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 29 07:54:24 compute-0 ceph-mon[75050]: pgmap v1481: 305 pgs: 305 active+clean; 197 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 13 MiB/s wr, 127 op/s
Nov 29 07:54:24 compute-0 ceph-mon[75050]: osdmap e223: 3 total, 3 up, 3 in
Nov 29 07:54:24 compute-0 podman[277221]: 2025-11-29 07:54:24.605389282 +0000 UTC m=+0.107058395 container create 7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 07:54:24 compute-0 podman[277221]: 2025-11-29 07:54:24.536429777 +0000 UTC m=+0.038098930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:54:24 compute-0 systemd[1]: Started libpod-conmon-7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d.scope.
Nov 29 07:54:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6066287cf7b7104ac88a94d493d61517cca3b0b12e01b004c0ccb7318c5a548/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6066287cf7b7104ac88a94d493d61517cca3b0b12e01b004c0ccb7318c5a548/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6066287cf7b7104ac88a94d493d61517cca3b0b12e01b004c0ccb7318c5a548/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6066287cf7b7104ac88a94d493d61517cca3b0b12e01b004c0ccb7318c5a548/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:54:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 357 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 36 MiB/s wr, 147 op/s
Nov 29 07:54:24 compute-0 podman[277221]: 2025-11-29 07:54:24.747643799 +0000 UTC m=+0.249312992 container init 7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:54:24 compute-0 podman[277221]: 2025-11-29 07:54:24.758307024 +0000 UTC m=+0.259976137 container start 7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:54:24 compute-0 podman[277221]: 2025-11-29 07:54:24.780137938 +0000 UTC m=+0.281807081 container attach 7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:54:24 compute-0 nova_compute[256729]: 2025-11-29 07:54:24.811 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402849.809422, 28704ae1-91ab-4ea5-99cd-c2ec5475f015 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:54:24 compute-0 nova_compute[256729]: 2025-11-29 07:54:24.815 256736 INFO nova.compute.manager [-] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] VM Stopped (Lifecycle Event)
Nov 29 07:54:24 compute-0 nova_compute[256729]: 2025-11-29 07:54:24.842 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:24 compute-0 nova_compute[256729]: 2025-11-29 07:54:24.850 256736 DEBUG nova.compute.manager [None req-8008a6db-7d91-41b2-b6e5-84e63b4e0cb0 - - - - - -] [instance: 28704ae1-91ab-4ea5-99cd-c2ec5475f015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:25 compute-0 ovn_controller[153383]: 2025-11-29T07:54:25Z|00097|binding|INFO|Releasing lport 83156f7b-0983-4e0f-a70a-261a0d3fbf52 from this chassis (sb_readonly=0)
Nov 29 07:54:25 compute-0 nova_compute[256729]: 2025-11-29 07:54:25.355 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:25 compute-0 musing_mayer[277236]: {
Nov 29 07:54:25 compute-0 musing_mayer[277236]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "osd_id": 2,
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "type": "bluestore"
Nov 29 07:54:25 compute-0 musing_mayer[277236]:     },
Nov 29 07:54:25 compute-0 musing_mayer[277236]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "osd_id": 1,
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "type": "bluestore"
Nov 29 07:54:25 compute-0 musing_mayer[277236]:     },
Nov 29 07:54:25 compute-0 musing_mayer[277236]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "osd_id": 0,
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:54:25 compute-0 musing_mayer[277236]:         "type": "bluestore"
Nov 29 07:54:25 compute-0 musing_mayer[277236]:     }
Nov 29 07:54:25 compute-0 musing_mayer[277236]: }
Nov 29 07:54:25 compute-0 systemd[1]: libpod-7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d.scope: Deactivated successfully.
Nov 29 07:54:25 compute-0 podman[277221]: 2025-11-29 07:54:25.887218024 +0000 UTC m=+1.388887137 container died 7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:54:25 compute-0 systemd[1]: libpod-7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d.scope: Consumed 1.115s CPU time.
Nov 29 07:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6066287cf7b7104ac88a94d493d61517cca3b0b12e01b004c0ccb7318c5a548-merged.mount: Deactivated successfully.
Nov 29 07:54:26 compute-0 podman[277221]: 2025-11-29 07:54:26.133688129 +0000 UTC m=+1.635357232 container remove 7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:54:26 compute-0 sudo[277112]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:54:26 compute-0 systemd[1]: libpod-conmon-7e699d86105b5cd280219b47e447765afe0db13c9bc464371e9b72d542d67b4d.scope: Deactivated successfully.
Nov 29 07:54:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:54:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:54:26 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:54:26 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e39b415b-433e-41aa-a5b9-5e7add24df15 does not exist
Nov 29 07:54:26 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ffd3709b-f904-4391-bb45-47dd027402fd does not exist
Nov 29 07:54:26 compute-0 sshd-session[277212]: Connection closed by authenticating user root 143.14.121.41 port 52548 [preauth]
Nov 29 07:54:26 compute-0 sudo[277283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:54:26 compute-0 sudo[277283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:26 compute-0 sudo[277283]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:26 compute-0 sudo[277308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:54:26 compute-0 sudo[277308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:54:26 compute-0 sudo[277308]: pam_unix(sudo:session): session closed for user root
Nov 29 07:54:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 509 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 49 MiB/s wr, 241 op/s
Nov 29 07:54:26 compute-0 nova_compute[256729]: 2025-11-29 07:54:26.896 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:27 compute-0 ceph-mon[75050]: pgmap v1483: 305 pgs: 305 active+clean; 357 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 36 MiB/s wr, 147 op/s
Nov 29 07:54:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:54:27 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:54:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 881 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 80 MiB/s wr, 213 op/s
Nov 29 07:54:28 compute-0 ceph-mon[75050]: pgmap v1484: 305 pgs: 305 active+clean; 509 MiB data, 625 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 49 MiB/s wr, 241 op/s
Nov 29 07:54:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 29 07:54:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 07:54:29 compute-0 nova_compute[256729]: 2025-11-29 07:54:29.843 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 29 07:54:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 881 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 85 MiB/s wr, 200 op/s
Nov 29 07:54:30 compute-0 sshd-session[277333]: Connection closed by authenticating user root 143.14.121.41 port 52550 [preauth]
Nov 29 07:54:31 compute-0 ceph-mon[75050]: pgmap v1485: 305 pgs: 305 active+clean; 881 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 126 KiB/s rd, 80 MiB/s wr, 213 op/s
Nov 29 07:54:31 compute-0 ceph-mon[75050]: osdmap e224: 3 total, 3 up, 3 in
Nov 29 07:54:31 compute-0 nova_compute[256729]: 2025-11-29 07:54:31.899 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 941 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 70 MiB/s wr, 188 op/s
Nov 29 07:54:33 compute-0 ceph-mon[75050]: pgmap v1487: 305 pgs: 305 active+clean; 881 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 85 MiB/s wr, 200 op/s
Nov 29 07:54:34 compute-0 sshd-session[277335]: Connection closed by authenticating user root 143.14.121.41 port 52554 [preauth]
Nov 29 07:54:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 957 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 60 MiB/s wr, 158 op/s
Nov 29 07:54:34 compute-0 nova_compute[256729]: 2025-11-29 07:54:34.844 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:34 compute-0 nova_compute[256729]: 2025-11-29 07:54:34.864 256736 DEBUG oslo_concurrency.lockutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:34 compute-0 nova_compute[256729]: 2025-11-29 07:54:34.865 256736 DEBUG oslo_concurrency.lockutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:34 compute-0 nova_compute[256729]: 2025-11-29 07:54:34.881 256736 DEBUG nova.objects.instance [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'flavor' on Instance uuid 8c45989b-e06e-4bd4-9961-e7756223b869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:34 compute-0 nova_compute[256729]: 2025-11-29 07:54:34.926 256736 DEBUG oslo_concurrency.lockutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:35 compute-0 ceph-mon[75050]: pgmap v1488: 305 pgs: 305 active+clean; 941 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 70 MiB/s wr, 188 op/s
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.118 256736 DEBUG oslo_concurrency.lockutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.119 256736 DEBUG oslo_concurrency.lockutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.121 256736 INFO nova.compute.manager [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Attaching volume 866ed10f-44e6-4b2a-8c85-3f64974a7ca5 to /dev/vdb
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.266 256736 DEBUG os_brick.utils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.268 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.280 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.280 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[5b478a67-9352-4847-afb8-d30143554e68]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.282 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.288 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.288 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[396d3dbf-ead2-4aa1-ac68-8590a1a2a32c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.289 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.296 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.296 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[159aeae7-c7d4-4261-aec3-aad8f3e2bea0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.297 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[90b0bed2-e92a-444d-9e1f-6995aebafc7c]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.297 256736 DEBUG oslo_concurrency.processutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.314 256736 DEBUG oslo_concurrency.processutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.317 256736 DEBUG os_brick.initiator.connectors.lightos [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.317 256736 DEBUG os_brick.initiator.connectors.lightos [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.318 256736 DEBUG os_brick.initiator.connectors.lightos [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.318 256736 DEBUG os_brick.utils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] <== get_connector_properties: return (50ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:54:35 compute-0 nova_compute[256729]: 2025-11-29 07:54:35.318 256736 DEBUG nova.virt.block_device [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating existing volume attachment record: ac6cf194-92d3-4a23-be32-acf39587b115 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:54:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:54:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:54:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:54:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3421723433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.300 256736 DEBUG nova.objects.instance [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'flavor' on Instance uuid 8c45989b-e06e-4bd4-9961-e7756223b869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.330 256736 DEBUG nova.virt.libvirt.driver [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Attempting to attach volume 866ed10f-44e6-4b2a-8c85-3f64974a7ca5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.334 256736 DEBUG nova.virt.libvirt.guest [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:54:36 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:54:36 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-866ed10f-44e6-4b2a-8c85-3f64974a7ca5">
Nov 29 07:54:36 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:54:36 compute-0 nova_compute[256729]:   </source>
Nov 29 07:54:36 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:54:36 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:54:36 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:54:36 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:54:36 compute-0 nova_compute[256729]:   <serial>866ed10f-44e6-4b2a-8c85-3f64974a7ca5</serial>
Nov 29 07:54:36 compute-0 nova_compute[256729]: </disk>
Nov 29 07:54:36 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:54:36 compute-0 ceph-mon[75050]: pgmap v1489: 305 pgs: 305 active+clean; 957 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 60 MiB/s wr, 158 op/s
Nov 29 07:54:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:54:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4210190173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 965 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 46 MiB/s wr, 52 op/s
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.812 256736 DEBUG nova.virt.libvirt.driver [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.812 256736 DEBUG nova.virt.libvirt.driver [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.812 256736 DEBUG nova.virt.libvirt.driver [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.813 256736 DEBUG nova.virt.libvirt.driver [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No VIF found with MAC fa:16:3e:e5:b6:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.901 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:36 compute-0 nova_compute[256729]: 2025-11-29 07:54:36.996 256736 DEBUG oslo_concurrency.lockutils [None req-41b37cef-2311-4c4f-8db4-c95caf365a22 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:37 compute-0 sshd-session[277337]: Connection closed by authenticating user root 143.14.121.41 port 54792 [preauth]
Nov 29 07:54:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3421723433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4210190173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:54:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 1013 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 MiB/s wr, 53 op/s
Nov 29 07:54:39 compute-0 ceph-mon[75050]: pgmap v1490: 305 pgs: 305 active+clean; 965 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 46 MiB/s wr, 52 op/s
Nov 29 07:54:39 compute-0 nova_compute[256729]: 2025-11-29 07:54:39.845 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:40 compute-0 sshd-session[277366]: Connection closed by authenticating user root 143.14.121.41 port 54794 [preauth]
Nov 29 07:54:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 29 07:54:40 compute-0 ceph-mon[75050]: pgmap v1491: 305 pgs: 305 active+clean; 1013 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 MiB/s wr, 53 op/s
Nov 29 07:54:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 07:54:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 29 07:54:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 1013 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 MiB/s wr, 53 op/s
Nov 29 07:54:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 29 07:54:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 07:54:41 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 29 07:54:41 compute-0 ceph-mon[75050]: osdmap e225: 3 total, 3 up, 3 in
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.686 256736 DEBUG oslo_concurrency.lockutils [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.687 256736 DEBUG oslo_concurrency.lockutils [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.705 256736 INFO nova.compute.manager [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Detaching volume 866ed10f-44e6-4b2a-8c85-3f64974a7ca5
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.838 256736 INFO nova.virt.block_device [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Attempting to driver detach volume 866ed10f-44e6-4b2a-8c85-3f64974a7ca5 from mountpoint /dev/vdb
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.849 256736 DEBUG nova.virt.libvirt.driver [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Attempting to detach device vdb from instance 8c45989b-e06e-4bd4-9961-e7756223b869 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.850 256736 DEBUG nova.virt.libvirt.guest [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-866ed10f-44e6-4b2a-8c85-3f64974a7ca5">
Nov 29 07:54:41 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   </source>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <serial>866ed10f-44e6-4b2a-8c85-3f64974a7ca5</serial>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]: </disk>
Nov 29 07:54:41 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.862 256736 INFO nova.virt.libvirt.driver [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully detached device vdb from instance 8c45989b-e06e-4bd4-9961-e7756223b869 from the persistent domain config.
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.862 256736 DEBUG nova.virt.libvirt.driver [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8c45989b-e06e-4bd4-9961-e7756223b869 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.862 256736 DEBUG nova.virt.libvirt.guest [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-866ed10f-44e6-4b2a-8c85-3f64974a7ca5">
Nov 29 07:54:41 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   </source>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <serial>866ed10f-44e6-4b2a-8c85-3f64974a7ca5</serial>
Nov 29 07:54:41 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:54:41 compute-0 nova_compute[256729]: </disk>
Nov 29 07:54:41 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:54:41 compute-0 nova_compute[256729]: 2025-11-29 07:54:41.905 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:42 compute-0 nova_compute[256729]: 2025-11-29 07:54:42.010 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764402882.0091665, 8c45989b-e06e-4bd4-9961-e7756223b869 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:54:42 compute-0 nova_compute[256729]: 2025-11-29 07:54:42.011 256736 DEBUG nova.virt.libvirt.driver [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8c45989b-e06e-4bd4-9961-e7756223b869 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:54:42 compute-0 nova_compute[256729]: 2025-11-29 07:54:42.013 256736 INFO nova.virt.libvirt.driver [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully detached device vdb from instance 8c45989b-e06e-4bd4-9961-e7756223b869 from the live domain config.
Nov 29 07:54:42 compute-0 nova_compute[256729]: 2025-11-29 07:54:42.198 256736 DEBUG nova.objects.instance [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'flavor' on Instance uuid 8c45989b-e06e-4bd4-9961-e7756223b869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:54:42 compute-0 nova_compute[256729]: 2025-11-29 07:54:42.250 256736 DEBUG oslo_concurrency.lockutils [None req-6588b580-b2e2-4d3e-9094-e7d6a742c969 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:42 compute-0 ceph-mon[75050]: pgmap v1493: 305 pgs: 305 active+clean; 1013 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 MiB/s wr, 53 op/s
Nov 29 07:54:42 compute-0 ceph-mon[75050]: osdmap e226: 3 total, 3 up, 3 in
Nov 29 07:54:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 19 MiB/s wr, 103 op/s
Nov 29 07:54:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 29 07:54:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 07:54:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 29 07:54:44 compute-0 sshd-session[277368]: Invalid user postgres from 143.14.121.41 port 54808
Nov 29 07:54:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 29 07:54:44 compute-0 ceph-mon[75050]: pgmap v1495: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 19 MiB/s wr, 103 op/s
Nov 29 07:54:44 compute-0 ceph-mon[75050]: osdmap e227: 3 total, 3 up, 3 in
Nov 29 07:54:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 07:54:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 29 07:54:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 779 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 168 KiB/s rd, 33 MiB/s wr, 289 op/s
Nov 29 07:54:44 compute-0 nova_compute[256729]: 2025-11-29 07:54:44.847 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:45 compute-0 sshd-session[277368]: Connection closed by invalid user postgres 143.14.121.41 port 54808 [preauth]
Nov 29 07:54:45 compute-0 nova_compute[256729]: 2025-11-29 07:54:45.220 256736 DEBUG nova.compute.manager [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:54:45 compute-0 nova_compute[256729]: 2025-11-29 07:54:45.272 256736 INFO nova.compute.manager [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] instance snapshotting
Nov 29 07:54:45 compute-0 nova_compute[256729]: 2025-11-29 07:54:45.496 256736 INFO nova.virt.libvirt.driver [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Beginning live snapshot process
Nov 29 07:54:45 compute-0 nova_compute[256729]: 2025-11-29 07:54:45.688 256736 DEBUG nova.virt.libvirt.imagebackend [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No parent info for 0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 07:54:45 compute-0 nova_compute[256729]: 2025-11-29 07:54:45.912 256736 DEBUG nova.storage.rbd_utils [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] creating snapshot(4644a9c7252749958c61401847f7a8b9) on rbd image(8c45989b-e06e-4bd4-9961-e7756223b869_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:54:46 compute-0 ceph-mon[75050]: osdmap e228: 3 total, 3 up, 3 in
Nov 29 07:54:46 compute-0 ceph-mon[75050]: pgmap v1498: 305 pgs: 305 active+clean; 779 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 168 KiB/s rd, 33 MiB/s wr, 289 op/s
Nov 29 07:54:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 159 KiB/s rd, 22 MiB/s wr, 274 op/s
Nov 29 07:54:46 compute-0 nova_compute[256729]: 2025-11-29 07:54:46.907 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 29 07:54:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 07:54:47 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 29 07:54:47 compute-0 nova_compute[256729]: 2025-11-29 07:54:47.330 256736 DEBUG nova.storage.rbd_utils [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] cloning vms/8c45989b-e06e-4bd4-9961-e7756223b869_disk@4644a9c7252749958c61401847f7a8b9 to images/8cb507a6-3d3e-48cc-8c73-be72eca3ddaa clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 07:54:47 compute-0 nova_compute[256729]: 2025-11-29 07:54:47.457 256736 DEBUG nova.storage.rbd_utils [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] flattening images/8cb507a6-3d3e-48cc-8c73-be72eca3ddaa flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 07:54:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:54:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/582486593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:54:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/582486593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:48 compute-0 sshd-session[277372]: Invalid user osmc from 143.14.121.41 port 54634
Nov 29 07:54:48 compute-0 nova_compute[256729]: 2025-11-29 07:54:48.258 256736 DEBUG nova.storage.rbd_utils [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] removing snapshot(4644a9c7252749958c61401847f7a8b9) on rbd image(8c45989b-e06e-4bd4-9961-e7756223b869_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 07:54:48 compute-0 ceph-mon[75050]: pgmap v1499: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 159 KiB/s rd, 22 MiB/s wr, 274 op/s
Nov 29 07:54:48 compute-0 ceph-mon[75050]: osdmap e229: 3 total, 3 up, 3 in
Nov 29 07:54:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/582486593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:54:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/582486593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:54:48 compute-0 sshd-session[277372]: Connection closed by invalid user osmc 143.14.121.41 port 54634 [preauth]
Nov 29 07:54:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 175 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 13 MiB/s wr, 386 op/s
Nov 29 07:54:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 29 07:54:49 compute-0 nova_compute[256729]: 2025-11-29 07:54:49.849 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 07:54:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 29 07:54:50 compute-0 nova_compute[256729]: 2025-11-29 07:54:50.147 256736 DEBUG nova.storage.rbd_utils [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] creating snapshot(snap) on rbd image(8cb507a6-3d3e-48cc-8c73-be72eca3ddaa) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 07:54:50 compute-0 nova_compute[256729]: 2025-11-29 07:54:50.193 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:50 compute-0 nova_compute[256729]: 2025-11-29 07:54:50.194 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:50 compute-0 nova_compute[256729]: 2025-11-29 07:54:50.237 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:50 compute-0 ceph-mon[75050]: pgmap v1501: 305 pgs: 305 active+clean; 175 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 13 MiB/s wr, 386 op/s
Nov 29 07:54:50 compute-0 ceph-mon[75050]: osdmap e230: 3 total, 3 up, 3 in
Nov 29 07:54:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 175 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.9 MiB/s wr, 284 op/s
Nov 29 07:54:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 29 07:54:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 07:54:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 29 07:54:51 compute-0 podman[277518]: 2025-11-29 07:54:51.694213757 +0000 UTC m=+0.054510450 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:54:51 compute-0 podman[277517]: 2025-11-29 07:54:51.704303047 +0000 UTC m=+0.072755508 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:54:51 compute-0 podman[277519]: 2025-11-29 07:54:51.763590993 +0000 UTC m=+0.127578195 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
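[annotation] The three podman lines above are the periodic health checks for the EDPM containers (ovn_metadata_agent, multipathd, ovn_controller), each reporting health_status=healthy with a zero failing streak. The recorded health state can be queried by hand; a sketch, with 'ovn_controller' taken from the log line above:

import json
import subprocess

# Read the last recorded health state for one of the containers named above.
out = subprocess.run(
    ['podman', 'inspect', '--format', '{{json .State.Health}}', 'ovn_controller'],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(out)
print(health['Status'], health['FailingStreak'])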
Nov 29 07:54:51 compute-0 nova_compute[256729]: 2025-11-29 07:54:51.909 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:52 compute-0 nova_compute[256729]: 2025-11-29 07:54:52.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:52 compute-0 ceph-mon[75050]: pgmap v1503: 305 pgs: 305 active+clean; 175 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.9 MiB/s wr, 284 op/s
Nov 29 07:54:52 compute-0 ceph-mon[75050]: osdmap e231: 3 total, 3 up, 3 in
Nov 29 07:54:52 compute-0 sshd-session[277497]: Invalid user maria from 143.14.121.41 port 54640
Nov 29 07:54:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:52.607 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:54:52 compute-0 nova_compute[256729]: 2025-11-29 07:54:52.608 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:52.610 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:54:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 202 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 9.3 MiB/s rd, 7.8 MiB/s wr, 286 op/s
Nov 29 07:54:52 compute-0 sshd-session[277497]: Connection closed by invalid user maria 143.14.121.41 port 54640 [preauth]
Nov 29 07:54:54 compute-0 nova_compute[256729]: 2025-11-29 07:54:54.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:54 compute-0 ceph-mon[75050]: pgmap v1505: 305 pgs: 305 active+clean; 202 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 9.3 MiB/s rd, 7.8 MiB/s wr, 286 op/s
Nov 29 07:54:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.2 MiB/s wr, 195 op/s
Nov 29 07:54:54 compute-0 nova_compute[256729]: 2025-11-29 07:54:54.850 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:55 compute-0 nova_compute[256729]: 2025-11-29 07:54:55.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:55 compute-0 nova_compute[256729]: 2025-11-29 07:54:55.186 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:55 compute-0 nova_compute[256729]: 2025-11-29 07:54:55.186 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:55 compute-0 nova_compute[256729]: 2025-11-29 07:54:55.187 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:55 compute-0 nova_compute[256729]: 2025-11-29 07:54:55.187 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:54:55 compute-0 nova_compute[256729]: 2025-11-29 07:54:55.188 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
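[annotation] For the resource audit, Nova shells out to `ceph df` rather than querying librados in-process; each such subprocess shows up on the mon as one of the audit-channel df dispatches seen throughout this window. The same query and the cluster-wide totals it returns can be reproduced as follows; the field names assume the current JSON layout (a top-level 'stats' object), which older Ceph releases may not match:

import json
import subprocess

# Run the exact command from the log line above and read cluster-wide totals.
out = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)['stats']
print('total: %.1f GiB, free: %.1f GiB'
      % (stats['total_bytes'] / 2**30, stats['total_avail_bytes'] / 2**30))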
Nov 29 07:54:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:54:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 29 07:54:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 07:54:55 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 29 07:54:55 compute-0 ceph-mon[75050]: pgmap v1506: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.2 MiB/s wr, 195 op/s
Nov 29 07:54:55 compute-0 ceph-mon[75050]: osdmap e232: 3 total, 3 up, 3 in
Nov 29 07:54:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623543776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.196 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.311 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.312 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:54:56 compute-0 sshd-session[277580]: Invalid user jack from 143.14.121.41 port 45730
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.558 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.561 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4330MB free_disk=59.94261169433594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.561 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.562 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.6 MiB/s wr, 88 op/s
Nov 29 07:54:56 compute-0 sshd-session[277580]: Connection closed by invalid user jack 143.14.121.41 port 45730 [preauth]
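[annotation] The recurring 'Invalid user ... / Connection closed ... [preauth]' pairs in this window (osmc, maria, jack, and more, all from 143.14.121.41) are a routine SSH brute-force scan, interleaved with but unrelated to the Nova and Ceph activity. Tallying probes per source address is a short pass over the journal; a sketch, assuming 'sshd' is the unit name on this host and that the caller has journal read access:

import collections
import re
import subprocess

# Count 'Invalid user' probes per source IP; unit name 'sshd' is an assumption.
out = subprocess.run(
    ['journalctl', '-u', 'sshd', '-o', 'cat', '--no-pager'],
    capture_output=True, text=True, check=True,
).stdout
probes = collections.Counter(
    m.group(1) for m in re.finditer(r'Invalid user \S+ from (\S+)', out))
for ip, count in probes.most_common(5):
    print(ip, count)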
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.835 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 8c45989b-e06e-4bd4-9961-e7756223b869 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.836 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.836 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.876 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:54:56 compute-0 nova_compute[256729]: 2025-11-29 07:54:56.910 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:54:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/623543776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:57 compute-0 ovn_controller[153383]: 2025-11-29T07:54:57Z|00098|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 07:54:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:54:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291151774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:57 compute-0 nova_compute[256729]: 2025-11-29 07:54:57.691 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.815s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:54:57 compute-0 nova_compute[256729]: 2025-11-29 07:54:57.701 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:54:57 compute-0 nova_compute[256729]: 2025-11-29 07:54:57.723 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
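[annotation] The inventory above is unchanged, so no update is sent to Placement. For reference, Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, which for the logged numbers works out as follows:

# capacity = (total - reserved) * allocation_ratio, per resource class above
inventory = {
    'VCPU':      (8,    0,   4.0),
    'MEMORY_MB': (7680, 512, 1.0),
    'DISK_GB':   (59,   1,   0.9),
}
for rc, (total, reserved, ratio) in inventory.items():
    print(rc, (total - reserved) * ratio)
# -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2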
Nov 29 07:54:57 compute-0 nova_compute[256729]: 2025-11-29 07:54:57.751 256736 INFO nova.virt.libvirt.driver [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Snapshot image upload complete
Nov 29 07:54:57 compute-0 nova_compute[256729]: 2025-11-29 07:54:57.752 256736 INFO nova.compute.manager [None req-22d4766c-3908-4c8a-a657-7bcd8293c1ea 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Took 12.48 seconds to snapshot the instance on the hypervisor.
Nov 29 07:54:57 compute-0 nova_compute[256729]: 2025-11-29 07:54:57.758 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:54:57 compute-0 nova_compute[256729]: 2025-11-29 07:54:57.759 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.3 MiB/s wr, 94 op/s
Nov 29 07:54:58 compute-0 nova_compute[256729]: 2025-11-29 07:54:58.757 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:54:58 compute-0 nova_compute[256729]: 2025-11-29 07:54:58.758 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:54:58 compute-0 ceph-mon[75050]: pgmap v1508: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.6 MiB/s wr, 88 op/s
Nov 29 07:54:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3291151774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:54:59 compute-0 nova_compute[256729]: 2025-11-29 07:54:59.024 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:54:59 compute-0 nova_compute[256729]: 2025-11-29 07:54:59.025 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:54:59 compute-0 nova_compute[256729]: 2025-11-29 07:54:59.025 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:54:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:59.774 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:54:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:59.774 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:54:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:54:59.775 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:54:59 compute-0 nova_compute[256729]: 2025-11-29 07:54:59.852 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.354 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating instance_info_cache with network_info: [{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
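[annotation] The instance_info_cache payload logged above is plain JSON (a list of VIFs, each carrying its network, subnets, fixed IPs, and any attached floating IPs). Pulling the addresses out of such a blob takes a few lines; a sketch over the structure exactly as logged, where the 'network_info.json' path is hypothetical (the logged list saved to a file):

import json

def addresses(network_info):
    # Walk the VIF -> network -> subnets -> ips structure from the cache entry.
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                fips = [f['address'] for f in ip.get('floating_ips', [])]
                yield vif['id'], ip['address'], fips

with open('network_info.json') as fh:  # hypothetical dump of the logged list
    cache = json.load(fh)
for port_id, fixed, floating in addresses(cache):
    print(port_id, fixed, floating)
# e.g. 071be225-ecaa-4260-bc91-73f144657155 10.100.0.11 ['192.168.122.195']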
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.379 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.379 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.380 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.380 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.380 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.380 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.501 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.502 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:00 compute-0 ceph-mon[75050]: pgmap v1509: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.3 MiB/s wr, 94 op/s
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.521 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:55:00 compute-0 sshd-session[277617]: Invalid user admin from 143.14.121.41 port 45732
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.598 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.599 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.607 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.608 256736 INFO nova.compute.claims [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:55:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:00.614 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 830 KiB/s wr, 42 op/s
Nov 29 07:55:00 compute-0 nova_compute[256729]: 2025-11-29 07:55:00.818 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 29 07:55:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 07:55:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 29 07:55:00 compute-0 sshd-session[277617]: Connection closed by invalid user admin 143.14.121.41 port 45732 [preauth]
Nov 29 07:55:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433952007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.243 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.248 256736 DEBUG nova.compute.provider_tree [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.274 256736 DEBUG nova.scheduler.client.report [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.303 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.304 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.372 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.373 256736 DEBUG nova.network.neutron [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.396 256736 INFO nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.430 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.539 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.540 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.540 256736 INFO nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Creating image(s)
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.563 256736 DEBUG nova.storage.rbd_utils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image a157d150-bd1c-4f7b-8068-764a8f3af802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.588 256736 DEBUG nova.storage.rbd_utils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image a157d150-bd1c-4f7b-8068-764a8f3af802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.641 256736 DEBUG nova.storage.rbd_utils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image a157d150-bd1c-4f7b-8068-764a8f3af802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.649 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "423728988e265f94dd1a0c83064e07619483092c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.650 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "423728988e265f94dd1a0c83064e07619483092c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.656 256736 DEBUG nova.policy [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81f071491e4c48c59662c7feba200299', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0aa15e11d9794e608f3aebb38ea3606a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:55:01 compute-0 nova_compute[256729]: 2025-11-29 07:55:01.905 256736 DEBUG nova.virt.libvirt.imagebackend [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Image locations are: [{'url': 'rbd://14ff1f30-5059-58f1-9a23-69871bb275a1/images/8cb507a6-3d3e-48cc-8c73-be72eca3ddaa/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://14ff1f30-5059-58f1-9a23-69871bb275a1/images/8cb507a6-3d3e-48cc-8c73-be72eca3ddaa/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 07:55:02 compute-0 nova_compute[256729]: 2025-11-29 07:55:02.147 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:02 compute-0 nova_compute[256729]: 2025-11-29 07:55:02.154 256736 DEBUG nova.virt.libvirt.imagebackend [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Selected location: {'url': 'rbd://14ff1f30-5059-58f1-9a23-69871bb275a1/images/8cb507a6-3d3e-48cc-8c73-be72eca3ddaa/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 07:55:02 compute-0 nova_compute[256729]: 2025-11-29 07:55:02.155 256736 DEBUG nova.storage.rbd_utils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] cloning images/8cb507a6-3d3e-48cc-8c73-be72eca3ddaa@snap to None/a157d150-bd1c-4f7b-8068-764a8f3af802_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
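[annotation] The 'cloning images/...@snap to None/..._disk' step above is a copy-on-write RBD clone from the protected Glance image snapshot into the Nova pool, which is why no image data is downloaded for this boot. Expressed directly against python-rbd it looks roughly like this; the destination pool name 'vms' is an assumption (the log prints the pool as None because rbd_utils fills it in later):

import rados
import rbd

# Copy-on-write clone of the Glance snapshot into the (assumed) Nova pool.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
src = cluster.open_ioctx('images')
dst = cluster.open_ioctx('vms')
try:
    # The parent snapshot must already be protected, which Glance ensures.
    rbd.RBD().clone(src, '8cb507a6-3d3e-48cc-8c73-be72eca3ddaa', 'snap',
                    dst, 'a157d150-bd1c-4f7b-8068-764a8f3af802_disk')
finally:
    dst.close()
    src.close()
    cluster.shutdown()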
Nov 29 07:55:02 compute-0 nova_compute[256729]: 2025-11-29 07:55:02.397 256736 DEBUG nova.network.neutron [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Successfully created port: 436ce809-d7f3-4287-867d-52ea26e65554 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:55:02 compute-0 ceph-mon[75050]: pgmap v1510: 305 pgs: 305 active+clean; 202 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 830 KiB/s wr, 42 op/s
Nov 29 07:55:02 compute-0 ceph-mon[75050]: osdmap e233: 3 total, 3 up, 3 in
Nov 29 07:55:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1433952007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 214 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 213 KiB/s wr, 73 op/s
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.258 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "423728988e265f94dd1a0c83064e07619483092c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.377 256736 DEBUG nova.objects.instance [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'migration_context' on Instance uuid a157d150-bd1c-4f7b-8068-764a8f3af802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.400 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.400 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Ensure instance console log exists: /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.401 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.401 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.402 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.459 256736 DEBUG nova.network.neutron [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Successfully updated port: 436ce809-d7f3-4287-867d-52ea26e65554 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.478 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.479 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquired lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.479 256736 DEBUG nova.network.neutron [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.576 256736 DEBUG nova.compute.manager [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-changed-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.576 256736 DEBUG nova.compute.manager [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Refreshing instance network info cache due to event network-changed-436ce809-d7f3-4287-867d-52ea26e65554. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:55:03 compute-0 nova_compute[256729]: 2025-11-29 07:55:03.576 256736 DEBUG oslo_concurrency.lockutils [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:03 compute-0 ceph-mon[75050]: pgmap v1512: 305 pgs: 305 active+clean; 214 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 213 KiB/s wr, 73 op/s
Nov 29 07:55:04 compute-0 nova_compute[256729]: 2025-11-29 07:55:04.176 256736 DEBUG nova.network.neutron [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:55:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 232 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Nov 29 07:55:04 compute-0 nova_compute[256729]: 2025-11-29 07:55:04.854 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:05 compute-0 sshd-session[277650]: Invalid user zabbix from 143.14.121.41 port 45740
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.144 256736 DEBUG nova.network.neutron [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updating instance_info_cache with network_info: [{"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.184 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Releasing lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.184 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Instance network_info: |[{"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.185 256736 DEBUG oslo_concurrency.lockutils [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.185 256736 DEBUG nova.network.neutron [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Refreshing network info cache for port 436ce809-d7f3-4287-867d-52ea26e65554 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.191 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Start _get_guest_xml network_info=[{"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T07:54:45Z,direct_url=<?>,disk_format='raw',id=8cb507a6-3d3e-48cc-8c73-be72eca3ddaa,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-85460294',owner='0aa15e11d9794e608f3aebb38ea3606a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T07:54:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '8cb507a6-3d3e-48cc-8c73-be72eca3ddaa'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.196 256736 WARNING nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.201 256736 DEBUG nova.virt.libvirt.host [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.202 256736 DEBUG nova.virt.libvirt.host [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.205 256736 DEBUG nova.virt.libvirt.host [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.206 256736 DEBUG nova.virt.libvirt.host [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.207 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.207 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T07:54:45Z,direct_url=<?>,disk_format='raw',id=8cb507a6-3d3e-48cc-8c73-be72eca3ddaa,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-85460294',owner='0aa15e11d9794e608f3aebb38ea3606a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T07:54:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.208 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.208 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.209 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.210 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.211 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.211 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.212 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.212 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.212 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.213 256736 DEBUG nova.virt.hardware [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.218 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:55:05
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control']
Nov 29 07:55:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:55:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107501283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.695 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:05 compute-0 sshd-session[277650]: Connection closed by invalid user zabbix 143.14.121.41 port 45740 [preauth]
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.728 256736 DEBUG nova.storage.rbd_utils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image a157d150-bd1c-4f7b-8068-764a8f3af802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:05 compute-0 nova_compute[256729]: 2025-11-29 07:55:05.735 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824410093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.250 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.252 256736 DEBUG nova.virt.libvirt.vif [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1622337894',display_name='tempest-TestStampPattern-server-1622337894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1622337894',id=8,image_ref='8cb507a6-3d3e-48cc-8c73-be72eca3ddaa',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVIcG7iT8EuYRWwvh0xXPSujdlj7uKuKXhamDHlJ4QJb0wGzod0+Qsrv8DmE1TIN7tAAQa46X3+yrMq9A2yMt4mHHy/8wbOvohqcW7H1CuWupyv3Z+eB3t88xUDCWSqKQ==',key_name='tempest-TestStampPattern-886597490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0aa15e11d9794e608f3aebb38ea3606a',ramdisk_id='',reservation_id='r-7i74n3ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8c45989b-e06e-4bd4-9961-e7756223b869',image_min_disk='1',image_min_ram='0',image_owner_id='0aa15e11d9794e608f3aebb38ea3606a',image_owner_project_name='tempest-TestStampPattern-1135660929',image_owner_user_name='tempest-TestStampPattern-1135660929-project-member',image_user_id='81f071491e4c48c59662c7feba200299',network_allocated='True',owner_project_name='tempest-TestStampPattern-1135660929',owner_user_name='tempest-TestStampPattern-1135660929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:01Z,user_data=None,user_id='81f071491e4c48c59662c7feba200299',uuid=a157d150-bd1c-4f7b-8068-764a8f3af802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.252 256736 DEBUG nova.network.os_vif_util [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converting VIF {"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.253 256736 DEBUG nova.network.os_vif_util [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:84:96,bridge_name='br-int',has_traffic_filtering=True,id=436ce809-d7f3-4287-867d-52ea26e65554,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap436ce809-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.254 256736 DEBUG nova.objects.instance [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'pci_devices' on Instance uuid a157d150-bd1c-4f7b-8068-764a8f3af802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.273 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <uuid>a157d150-bd1c-4f7b-8068-764a8f3af802</uuid>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <name>instance-00000008</name>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <nova:name>tempest-TestStampPattern-server-1622337894</nova:name>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:55:05</nova:creationTime>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:user uuid="81f071491e4c48c59662c7feba200299">tempest-TestStampPattern-1135660929-project-member</nova:user>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:project uuid="0aa15e11d9794e608f3aebb38ea3606a">tempest-TestStampPattern-1135660929</nova:project>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="8cb507a6-3d3e-48cc-8c73-be72eca3ddaa"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <nova:port uuid="436ce809-d7f3-4287-867d-52ea26e65554">
Nov 29 07:55:06 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <system>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <entry name="serial">a157d150-bd1c-4f7b-8068-764a8f3af802</entry>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <entry name="uuid">a157d150-bd1c-4f7b-8068-764a8f3af802</entry>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </system>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <os>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   </os>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <features>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   </features>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/a157d150-bd1c-4f7b-8068-764a8f3af802_disk">
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       </source>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/a157d150-bd1c-4f7b-8068-764a8f3af802_disk.config">
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       </source>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:55:06 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:b0:84:96"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <target dev="tap436ce809-d7"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/console.log" append="off"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <video>
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </video>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <input type="keyboard" bus="usb"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:55:06 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:55:06 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:55:06 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:55:06 compute-0 nova_compute[256729]: </domain>
Nov 29 07:55:06 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.275 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Preparing to wait for external event network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.275 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.275 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.276 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.276 256736 DEBUG nova.virt.libvirt.vif [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1622337894',display_name='tempest-TestStampPattern-server-1622337894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1622337894',id=8,image_ref='8cb507a6-3d3e-48cc-8c73-be72eca3ddaa',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVIcG7iT8EuYRWwvh0xXPSujdlj7uKuKXhamDHlJ4QJb0wGzod0+Qsrv8DmE1TIN7tAAQa46X3+yrMq9A2yMt4mHHy/8wbOvohqcW7H1CuWupyv3Z+eB3t88xUDCWSqKQ==',key_name='tempest-TestStampPattern-886597490',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0aa15e11d9794e608f3aebb38ea3606a',ramdisk_id='',reservation_id='r-7i74n3ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8c45989b-e06e-4bd4-9961-e7756223b869',image_min_disk='1',image_min_ram='0',image_owner_id='0aa15e11d9794e608f3aebb38ea3606a',image_owner_project_name='tempest-TestStampPattern-1135660929',image_owner_user_name='tempest-TestStampPattern-1135660929-project-member',image_user_id='81f071491e4c48c59662c7feba200299',network_allocated='True',owner_project_name='tempest-TestStampPattern-1135660929',owner_user_name='tempest-TestStampPattern-1135660929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:01Z,user_data=None,user_id='81f071491e4c48c59662c7feba200299',uuid=a157d150-bd1c-4f7b-8068-764a8f3af802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.277 256736 DEBUG nova.network.os_vif_util [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converting VIF {"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.277 256736 DEBUG nova.network.os_vif_util [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:84:96,bridge_name='br-int',has_traffic_filtering=True,id=436ce809-d7f3-4287-867d-52ea26e65554,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap436ce809-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.278 256736 DEBUG os_vif [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:84:96,bridge_name='br-int',has_traffic_filtering=True,id=436ce809-d7f3-4287-867d-52ea26e65554,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap436ce809-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.278 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.279 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.280 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.286 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.286 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap436ce809-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.287 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap436ce809-d7, col_values=(('external_ids', {'iface-id': '436ce809-d7f3-4287-867d-52ea26e65554', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:84:96', 'vm-uuid': 'a157d150-bd1c-4f7b-8068-764a8f3af802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.288 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:06 compute-0 NetworkManager[48962]: <info>  [1764402906.2896] manager: (tap436ce809-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.289 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.299 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.300 256736 INFO os_vif [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:84:96,bridge_name='br-int',has_traffic_filtering=True,id=436ce809-d7f3-4287-867d-52ea26e65554,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap436ce809-d7')
Nov 29 07:55:06 compute-0 ceph-mon[75050]: pgmap v1513: 305 pgs: 305 active+clean; 232 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Nov 29 07:55:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2107501283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3824410093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.671 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.672 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.673 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No VIF found with MAC fa:16:3e:b0:84:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.674 256736 INFO nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Using config drive
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.712 256736 DEBUG nova.storage.rbd_utils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image a157d150-bd1c-4f7b-8068-764a8f3af802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 236 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 98 op/s
Nov 29 07:55:06 compute-0 nova_compute[256729]: 2025-11-29 07:55:06.914 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:55:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.202 256736 DEBUG nova.network.neutron [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updated VIF entry in instance network info cache for port 436ce809-d7f3-4287-867d-52ea26e65554. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.203 256736 DEBUG nova.network.neutron [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updating instance_info_cache with network_info: [{"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.229 256736 DEBUG oslo_concurrency.lockutils [req-b95aa506-4361-421a-8499-d8787f0378c2 req-de7a63fb-d47d-4748-8558-ae5b3f3b26fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.356 256736 INFO nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Creating config drive at /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/disk.config
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.361 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ij6yy7b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.505 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ij6yy7b" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.550 256736 DEBUG nova.storage.rbd_utils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] rbd image a157d150-bd1c-4f7b-8068-764a8f3af802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:07 compute-0 nova_compute[256729]: 2025-11-29 07:55:07.555 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/disk.config a157d150-bd1c-4f7b-8068-764a8f3af802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:08 compute-0 ceph-mon[75050]: pgmap v1514: 305 pgs: 305 active+clean; 236 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 98 op/s
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.358 256736 DEBUG oslo_concurrency.processutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/disk.config a157d150-bd1c-4f7b-8068-764a8f3af802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.803s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.360 256736 INFO nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Deleting local config drive /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802/disk.config because it was imported into RBD.
Nov 29 07:55:08 compute-0 kernel: tap436ce809-d7: entered promiscuous mode
Nov 29 07:55:08 compute-0 NetworkManager[48962]: <info>  [1764402908.4286] manager: (tap436ce809-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.430 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:08 compute-0 ovn_controller[153383]: 2025-11-29T07:55:08Z|00099|binding|INFO|Claiming lport 436ce809-d7f3-4287-867d-52ea26e65554 for this chassis.
Nov 29 07:55:08 compute-0 ovn_controller[153383]: 2025-11-29T07:55:08Z|00100|binding|INFO|436ce809-d7f3-4287-867d-52ea26e65554: Claiming fa:16:3e:b0:84:96 10.100.0.7
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.443 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:84:96 10.100.0.7'], port_security=['fa:16:3e:b0:84:96 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a157d150-bd1c-4f7b-8068-764a8f3af802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0aa15e11d9794e608f3aebb38ea3606a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '12be058e-47a2-4b10-9928-e2f6336ca894', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17005c51-b13f-40d9-a999-415174c76777, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=436ce809-d7f3-4287-867d-52ea26e65554) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.445 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 436ce809-d7f3-4287-867d-52ea26e65554 in datapath e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 bound to our chassis
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.446 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e678432d-7aa3-4fc9-8ccb-76ec3ffbd276
Nov 29 07:55:08 compute-0 ovn_controller[153383]: 2025-11-29T07:55:08Z|00101|binding|INFO|Setting lport 436ce809-d7f3-4287-867d-52ea26e65554 ovn-installed in OVS
Nov 29 07:55:08 compute-0 ovn_controller[153383]: 2025-11-29T07:55:08Z|00102|binding|INFO|Setting lport 436ce809-d7f3-4287-867d-52ea26e65554 up in Southbound
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.461 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.464 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.464 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7cacc6ec-65df-4188-ac84-1c5c8cc7a341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:08 compute-0 systemd-machined[217781]: New machine qemu-8-instance-00000008.
Nov 29 07:55:08 compute-0 systemd-udevd[277970]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:55:08 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 29 07:55:08 compute-0 NetworkManager[48962]: <info>  [1764402908.4930] device (tap436ce809-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:55:08 compute-0 NetworkManager[48962]: <info>  [1764402908.4942] device (tap436ce809-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.508 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[2f291fc3-f317-46ec-9484-3a0f27e17d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.511 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[293aec65-8605-4464-a740-ccedbee7bc77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.554 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[028f60f0-e7e3-4a14-a074-6b269fec77a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.577 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0d2c01-9d10-44a6-ab3f-0ac8acf0c9e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape678432d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:f5:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515099, 'reachable_time': 28884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277981, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.600 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed58476-90d6-4b85-8f77-5a0d7ae8b8fd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape678432d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515111, 'tstamp': 515111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277983, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape678432d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515114, 'tstamp': 515114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277983, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.601 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape678432d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:55:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908417186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.602 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.605 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape678432d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.606 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.606 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape678432d-70, col_values=(('external_ids', {'iface-id': '83156f7b-0983-4e0f-a70a-261a0d3fbf52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:08 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:08.606 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:55:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:55:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3908417186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 248 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.784 256736 DEBUG nova.compute.manager [req-297e0182-3cce-44b3-8b61-a20c71cf275e req-63fc3a4a-b964-41aa-84c0-4dab4e0cc36e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.784 256736 DEBUG oslo_concurrency.lockutils [req-297e0182-3cce-44b3-8b61-a20c71cf275e req-63fc3a4a-b964-41aa-84c0-4dab4e0cc36e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.784 256736 DEBUG oslo_concurrency.lockutils [req-297e0182-3cce-44b3-8b61-a20c71cf275e req-63fc3a4a-b964-41aa-84c0-4dab4e0cc36e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.784 256736 DEBUG oslo_concurrency.lockutils [req-297e0182-3cce-44b3-8b61-a20c71cf275e req-63fc3a4a-b964-41aa-84c0-4dab4e0cc36e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:08 compute-0 nova_compute[256729]: 2025-11-29 07:55:08.784 256736 DEBUG nova.compute.manager [req-297e0182-3cce-44b3-8b61-a20c71cf275e req-63fc3a4a-b964-41aa-84c0-4dab4e0cc36e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Processing event network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:55:09 compute-0 sshd-session[277891]: Connection closed by authenticating user root 143.14.121.41 port 60854 [preauth]
Nov 29 07:55:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3908417186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3908417186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.545 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402910.5443537, a157d150-bd1c-4f7b-8068-764a8f3af802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.545 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] VM Started (Lifecycle Event)
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.548 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.551 256736 DEBUG nova.virt.libvirt.driver [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.555 256736 INFO nova.virt.libvirt.driver [-] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Instance spawned successfully.
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.555 256736 INFO nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Took 9.02 seconds to spawn the instance on the hypervisor.
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.556 256736 DEBUG nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.588 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.592 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.630 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.631 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402910.5476055, a157d150-bd1c-4f7b-8068-764a8f3af802 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.631 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] VM Paused (Lifecycle Event)
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.645 256736 INFO nova.compute.manager [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Took 10.07 seconds to build instance.
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.662 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.667 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402910.550881, a157d150-bd1c-4f7b-8068-764a8f3af802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.667 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] VM Resumed (Lifecycle Event)
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.688 256736 DEBUG oslo_concurrency.lockutils [None req-e797438b-cabe-4805-9f44-72be915eb6dc 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.699 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.704 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.726 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.727 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 248 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.755 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:55:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.858 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.858 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.864 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.865 256736 INFO nova.compute.claims [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.885 256736 DEBUG nova.compute.manager [req-f160f3cb-12da-40ba-8e95-41ad065ca316 req-f75dbfa5-c908-4bb4-931e-9b43b363ee2d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.886 256736 DEBUG oslo_concurrency.lockutils [req-f160f3cb-12da-40ba-8e95-41ad065ca316 req-f75dbfa5-c908-4bb4-931e-9b43b363ee2d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.886 256736 DEBUG oslo_concurrency.lockutils [req-f160f3cb-12da-40ba-8e95-41ad065ca316 req-f75dbfa5-c908-4bb4-931e-9b43b363ee2d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.886 256736 DEBUG oslo_concurrency.lockutils [req-f160f3cb-12da-40ba-8e95-41ad065ca316 req-f75dbfa5-c908-4bb4-931e-9b43b363ee2d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.887 256736 DEBUG nova.compute.manager [req-f160f3cb-12da-40ba-8e95-41ad065ca316 req-f75dbfa5-c908-4bb4-931e-9b43b363ee2d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] No waiting events found dispatching network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:55:10 compute-0 nova_compute[256729]: 2025-11-29 07:55:10.887 256736 WARNING nova.compute.manager [req-f160f3cb-12da-40ba-8e95-41ad065ca316 req-f75dbfa5-c908-4bb4-931e-9b43b363ee2d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received unexpected event network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 for instance with vm_state active and task_state None.
Nov 29 07:55:11 compute-0 nova_compute[256729]: 2025-11-29 07:55:11.032 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:11 compute-0 nova_compute[256729]: 2025-11-29 07:55:11.289 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:11 compute-0 ceph-mon[75050]: pgmap v1515: 305 pgs: 305 active+clean; 248 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 07:55:11 compute-0 nova_compute[256729]: 2025-11-29 07:55:11.917 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 277 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 116 op/s
Nov 29 07:55:13 compute-0 sshd-session[278004]: Connection closed by authenticating user root 143.14.121.41 port 60860 [preauth]
Nov 29 07:55:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 149 op/s
Nov 29 07:55:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209382016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:14 compute-0 nova_compute[256729]: 2025-11-29 07:55:14.893 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:14 compute-0 nova_compute[256729]: 2025-11-29 07:55:14.905 256736 DEBUG nova.compute.provider_tree [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:14 compute-0 nova_compute[256729]: 2025-11-29 07:55:14.938 256736 DEBUG nova.scheduler.client.report [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:14 compute-0 nova_compute[256729]: 2025-11-29 07:55:14.965 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:14 compute-0 nova_compute[256729]: 2025-11-29 07:55:14.967 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.109 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.110 256736 DEBUG nova.network.neutron [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.187 256736 INFO nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.217 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007638515823403008 of space, bias 1.0, pg target 0.22915547470209024 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007254427948166959 of space, bias 1.0, pg target 0.21763283844500877 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014247498087191508 of space, bias 1.0, pg target 0.42742494261574526 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:55:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.321 256736 DEBUG nova.policy [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c0b3479158714faaa4e8c3c336457d6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aede5de4449e445582aa074918be39c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.542 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.544 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.546 256736 INFO nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Creating image(s)
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.588 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.627 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.662 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.667 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.744 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.746 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.748 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.749 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:15 compute-0 ceph-mon[75050]: pgmap v1516: 305 pgs: 305 active+clean; 248 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 07:55:15 compute-0 ceph-mon[75050]: pgmap v1517: 305 pgs: 305 active+clean; 277 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 116 op/s
Nov 29 07:55:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3209382016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.792 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:15 compute-0 nova_compute[256729]: 2025-11-29 07:55:15.798 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.265 256736 DEBUG nova.network.neutron [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Successfully created port: b8c51f74-6990-452a-b5b8-28fc5b51bef8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.293 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.462 256736 DEBUG nova.compute.manager [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-changed-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.462 256736 DEBUG nova.compute.manager [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Refreshing instance network info cache due to event network-changed-436ce809-d7f3-4287-867d-52ea26e65554. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.462 256736 DEBUG oslo_concurrency.lockutils [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.463 256736 DEBUG oslo_concurrency.lockutils [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.463 256736 DEBUG nova.network.neutron [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Refreshing network info cache for port 436ce809-d7f3-4287-867d-52ea26e65554 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:55:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Nov 29 07:55:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 29 07:55:16 compute-0 nova_compute[256729]: 2025-11-29 07:55:16.920 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 07:55:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 29 07:55:17 compute-0 ceph-mon[75050]: pgmap v1518: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 149 op/s
Nov 29 07:55:17 compute-0 sshd-session[278051]: Connection closed by authenticating user root 143.14.121.41 port 46646 [preauth]
Nov 29 07:55:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.104 256736 DEBUG nova.network.neutron [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updated VIF entry in instance network info cache for port 436ce809-d7f3-4287-867d-52ea26e65554. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.105 256736 DEBUG nova.network.neutron [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updating instance_info_cache with network_info: [{"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.153 256736 DEBUG oslo_concurrency.lockutils [req-763f1650-8ca8-4a74-8912-bbf5a310038d req-da6e3621-1d02-4844-9254-6e6719a9fd2f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.187 256736 DEBUG nova.network.neutron [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Successfully updated port: b8c51f74-6990-452a-b5b8-28fc5b51bef8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.240 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.241 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquired lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.241 256736 DEBUG nova.network.neutron [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.311 256736 DEBUG nova.compute.manager [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received event network-changed-b8c51f74-6990-452a-b5b8-28fc5b51bef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.312 256736 DEBUG nova.compute.manager [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Refreshing instance network info cache due to event network-changed-b8c51f74-6990-452a-b5b8-28fc5b51bef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.312 256736 DEBUG oslo_concurrency.lockutils [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:19 compute-0 nova_compute[256729]: 2025-11-29 07:55:19.444 256736 DEBUG nova.network.neutron [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:55:20 compute-0 ceph-mon[75050]: pgmap v1519: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Nov 29 07:55:20 compute-0 ceph-mon[75050]: osdmap e234: 3 total, 3 up, 3 in
Nov 29 07:55:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 29 07:55:21 compute-0 nova_compute[256729]: 2025-11-29 07:55:21.297 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:21 compute-0 nova_compute[256729]: 2025-11-29 07:55:21.398 256736 DEBUG nova.network.neutron [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Updating instance_info_cache with network_info: [{"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:21 compute-0 nova_compute[256729]: 2025-11-29 07:55:21.439 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Releasing lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:21 compute-0 nova_compute[256729]: 2025-11-29 07:55:21.439 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Instance network_info: |[{"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:55:21 compute-0 nova_compute[256729]: 2025-11-29 07:55:21.440 256736 DEBUG oslo_concurrency.lockutils [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:21 compute-0 nova_compute[256729]: 2025-11-29 07:55:21.440 256736 DEBUG nova.network.neutron [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Refreshing network info cache for port b8c51f74-6990-452a-b5b8-28fc5b51bef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
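
The network_info payload nova caches above is a plain list of VIF dicts. A minimal sketch (the helper name and the trimmed literal are illustrative, not nova code) of pulling the fixed IP and MTU out of that structure:

    # Sketch: extracting the fixed IP and MTU from the cached
    # network_info structure logged above. Trimmed to the fields used;
    # values copied from the log entry.
    network_info = [{
        "id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8",
        "network": {
            "subnets": [{
                "ips": [{"address": "10.100.0.8", "type": "fixed"}],
            }],
            "meta": {"mtu": 1442},
        },
    }]

    def fixed_ips(nw_info):
        """Yield every fixed IP address in a network_info list."""
        for vif in nw_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield ip["address"]

    print(list(fixed_ips(network_info)))              # ['10.100.0.8']
    print(network_info[0]["network"]["meta"]["mtu"])  # 1442
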
Nov 29 07:55:21 compute-0 nova_compute[256729]: 2025-11-29 07:55:21.921 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 770 KiB/s wr, 91 op/s
Nov 29 07:55:22 compute-0 podman[278151]: 2025-11-29 07:55:22.788094741 +0000 UTC m=+0.130424290 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 07:55:22 compute-0 podman[278150]: 2025-11-29 07:55:22.806841643 +0000 UTC m=+0.162222012 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:55:22 compute-0 podman[278152]: 2025-11-29 07:55:22.850616614 +0000 UTC m=+0.197958908 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
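
The three health_status=healthy events above come from podman's periodic healthcheck timer running each container's configured '/openstack/healthcheck' test. The same check can be triggered by hand; a sketch, using the container name from the log:

    # Run the configured healthcheck for the multipathd container once,
    # outside the timer. Exit status 0 means healthy.
    import subprocess

    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)
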
Nov 29 07:55:22 compute-0 ceph-mon[75050]: pgmap v1521: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.001 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.068 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] resizing rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
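
The two operations above — an rbd import of the cached base image into the vms pool, then a resize up to the flavor's 1 GiB root disk — can be reproduced outside nova. A minimal sketch using the same CLI invocations the driver shells out to (pool, image and auth values copied from the log):

    # Sketch of the import + resize sequence logged above, via the rbd CLI.
    import subprocess

    BASE = "/var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389"
    IMAGE = "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", BASE, IMAGE,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # nova then grows the image to root_gb (1073741824 bytes in the log);
    # rbd resize takes the new size in megabytes by default, so 1024 = 1 GiB.
    subprocess.run(
        ["rbd", "resize", "--pool", "vms", IMAGE, "--size", "1024",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
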
Nov 29 07:55:23 compute-0 sshd-session[278148]: Connection closed by authenticating user root 143.14.121.41 port 46654 [preauth]
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.628 256736 DEBUG nova.network.neutron [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Updated VIF entry in instance network info cache for port b8c51f74-6990-452a-b5b8-28fc5b51bef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.629 256736 DEBUG nova.network.neutron [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Updating instance_info_cache with network_info: [{"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.636 256736 DEBUG nova.objects.instance [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.662 256736 DEBUG oslo_concurrency.lockutils [req-384941fe-c275-495b-8603-d1bd3fdf178c req-7ffe47a3-fece-4977-b98d-257c67caef2a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.663 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.663 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Ensure instance console log exists: /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.664 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.664 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.665 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.669 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Start _get_guest_xml network_info=[{"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.675 256736 WARNING nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.687 256736 DEBUG nova.virt.libvirt.host [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.688 256736 DEBUG nova.virt.libvirt.host [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.695 256736 DEBUG nova.virt.libvirt.host [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.697 256736 DEBUG nova.virt.libvirt.host [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.697 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.698 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.699 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.699 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.699 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.700 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.700 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.702 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.702 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.703 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.703 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.704 256736 DEBUG nova.virt.hardware [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
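
The hardware module above enumerated exactly one topology (1 socket, 1 core, 1 thread) for this single-vCPU flavor. A toy re-derivation of that enumeration — the same arithmetic, not nova's actual code: it considers every (sockets, cores, threads) factorization of the vCPU count that fits the limits (65536 each here).

    # Toy version of the topology enumeration logged above.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log
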
Nov 29 07:55:23 compute-0 nova_compute[256729]: 2025-11-29 07:55:23.708 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:23 compute-0 ceph-mon[75050]: pgmap v1522: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 29 07:55:23 compute-0 ceph-mon[75050]: pgmap v1523: 305 pgs: 305 active+clean; 295 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 770 KiB/s wr, 91 op/s
Nov 29 07:55:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386414781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.158 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.192 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.196 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352986020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.662 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
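
The 'ceph mon dump --format=json' calls above are how nova discovers the monitor addresses that end up in the <host name="..."> elements of the disk XML further down. A sketch of parsing that output (field names follow the ceph JSON schema; '.get' is used since the address fields vary across ceph releases):

    # Recover the monitor address used in the libvirt disk XML from the
    # same command the driver runs above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    monmap = json.loads(out)
    for mon in monmap["mons"]:
        # v1 addresses look like "192.168.122.100:6789/0"
        print(mon["name"], mon.get("addr"))
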
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.664 256736 DEBUG nova.virt.libvirt.vif [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:55:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-280999621',display_name='tempest-VolumesSnapshotTestJSON-instance-280999621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-280999621',id=9,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMWhh6xMyheKz/qakJMV0PY8VIZMvGtjrEW0ajE8Jdkf1cphTUAFk9GAOHqhajE/ikW8Cc5/oTjLgctLvAjh2Ld3iPyA7H7nITvAJ5EuwsXy6Z3UfC3+qycUlKu4OGr0Q==',key_name='tempest-keypair-1243301541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aede5de4449e445582aa074918be39c9',ramdisk_id='',reservation_id='r-f0qvvlwe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1121052015',owner_user_name='tempest-VolumesSnapshotTestJSON-1121052015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c0b3479158714faaa4e8c3c336457d6d',uuid=5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.665 256736 DEBUG nova.network.os_vif_util [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converting VIF {"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.666 256736 DEBUG nova.network.os_vif_util [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:be:cd,bridge_name='br-int',has_traffic_filtering=True,id=b8c51f74-6990-452a-b5b8-28fc5b51bef8,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c51f74-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.667 256736 DEBUG nova.objects.instance [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.693 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <uuid>5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf</uuid>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <name>instance-00000009</name>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-280999621</nova:name>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:55:23</nova:creationTime>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:user uuid="c0b3479158714faaa4e8c3c336457d6d">tempest-VolumesSnapshotTestJSON-1121052015-project-member</nova:user>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:project uuid="aede5de4449e445582aa074918be39c9">tempest-VolumesSnapshotTestJSON-1121052015</nova:project>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <nova:port uuid="b8c51f74-6990-452a-b5b8-28fc5b51bef8">
Nov 29 07:55:24 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <system>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <entry name="serial">5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf</entry>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <entry name="uuid">5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf</entry>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </system>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <os>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   </os>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <features>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   </features>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk">
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       </source>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk.config">
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       </source>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:55:24 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:50:be:cd"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <target dev="tapb8c51f74-69"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/console.log" append="off"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <video>
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </video>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:55:24 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:55:24 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:55:24 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:55:24 compute-0 nova_compute[256729]: </domain>
Nov 29 07:55:24 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
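
The domain XML logged above is ultimately handed to libvirt. A minimal sketch of the conceptual equivalent using libvirt-python — nova's own flow goes through its Host/Guest wrapper objects, so this is illustrative only; 'domain.xml' is a hypothetical file holding the <domain type="kvm"> document above:

    # Define and start a transient guest from the generated XML.
    # Requires the libvirt-python bindings and access to qemu:///system.
    import libvirt

    xml = open("domain.xml").read()  # the <domain> document logged above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.createXML(xml, 0)  # transient: define + start in one call
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
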
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.694 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Preparing to wait for external event network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.695 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.696 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.696 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.697 256736 DEBUG nova.virt.libvirt.vif [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:55:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-280999621',display_name='tempest-VolumesSnapshotTestJSON-instance-280999621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-280999621',id=9,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMWhh6xMyheKz/qakJMV0PY8VIZMvGtjrEW0ajE8Jdkf1cphTUAFk9GAOHqhajE/ikW8Cc5/oTjLgctLvAjh2Ld3iPyA7H7nITvAJ5EuwsXy6Z3UfC3+qycUlKu4OGr0Q==',key_name='tempest-keypair-1243301541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aede5de4449e445582aa074918be39c9',ramdisk_id='',reservation_id='r-f0qvvlwe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1121052015',owner_user_name='tempest-VolumesSnapshotTestJSON-1121052015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c0b3479158714faaa4e8c3c336457d6d',uuid=5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.697 256736 DEBUG nova.network.os_vif_util [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converting VIF {"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.698 256736 DEBUG nova.network.os_vif_util [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:be:cd,bridge_name='br-int',has_traffic_filtering=True,id=b8c51f74-6990-452a-b5b8-28fc5b51bef8,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c51f74-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.698 256736 DEBUG os_vif [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:be:cd,bridge_name='br-int',has_traffic_filtering=True,id=b8c51f74-6990-452a-b5b8-28fc5b51bef8,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c51f74-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.699 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.700 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.700 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.703 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.703 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8c51f74-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.704 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb8c51f74-69, col_values=(('external_ids', {'iface-id': 'b8c51f74-6990-452a-b5b8-28fc5b51bef8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:be:cd', 'vm-uuid': '5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.706 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:24 compute-0 NetworkManager[48962]: <info>  [1764402924.7076] manager: (tapb8c51f74-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.710 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.717 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:24 compute-0 nova_compute[256729]: 2025-11-29 07:55:24.719 256736 INFO os_vif [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:be:cd,bridge_name='br-int',has_traffic_filtering=True,id=b8c51f74-6990-452a-b5b8-28fc5b51bef8,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c51f74-69')
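
The ovsdbapp transaction above (AddPortCommand followed by DbSetCommand on the Interface row) is equivalent to a single ovs-vsctl invocation. A sketch with the values copied from the log: add the tap device to br-int and set the external_ids that bind the OVS interface to the neutron port:

    # CLI equivalent of the os-vif plug transaction logged above.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tapb8c51f74-69",
         "--", "set", "Interface", "tapb8c51f74-69",
         "external_ids:iface-id=b8c51f74-6990-452a-b5b8-28fc5b51bef8",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:50:be:cd",
         "external_ids:vm-uuid=5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf"],
        check=True)
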
Nov 29 07:55:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 306 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 573 KiB/s wr, 32 op/s
Nov 29 07:55:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1386414781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1352986020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.059 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.059 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.059 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No VIF found with MAC fa:16:3e:50:be:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.060 256736 INFO nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Using config drive
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.083 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.430 256736 INFO nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Creating config drive at /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/disk.config
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.436 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxyd1c6fc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.579 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxyd1c6fc" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.606 256736 DEBUG nova.storage.rbd_utils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.616 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/disk.config 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.777 256736 DEBUG oslo_concurrency.processutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/disk.config 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.778 256736 INFO nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Deleting local config drive /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/disk.config because it was imported into RBD.
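Config drive flow in the lines above: nova builds the ISO9660 image locally with mkisofs (volume label config-2, which cloud-init probes for), pushes it into the Ceph vms pool as <uuid>_disk.config with rbd import, then deletes the local copy so the config drive is served from RBD like the instance's other disks. Both subprocess calls go through oslo.concurrency's processutils; a rough re-creation with the same argv as logged:

from oslo_concurrency import processutils

base = '/var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf/disk.config'
# Build the ISO with the config-2 volume label cloud-init looks for.
processutils.execute('/usr/bin/mkisofs', '-o', base, '-ldots', '-allow-lowercase',
                     '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
                     '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpxyd1c6fc')
# Import it into the vms pool as a format-2 RBD image using the openstack cephx id.
processutils.execute('rbd', 'import', '--pool', 'vms', base,
                     '5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_disk.config',
                     '--image-format=2', '--id', 'openstack',
                     '--conf', '/etc/ceph/ceph.conf')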
Nov 29 07:55:25 compute-0 kernel: tapb8c51f74-69: entered promiscuous mode
Nov 29 07:55:25 compute-0 NetworkManager[48962]: <info>  [1764402925.8251] manager: (tapb8c51f74-69): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 07:55:25 compute-0 ovn_controller[153383]: 2025-11-29T07:55:25Z|00103|binding|INFO|Claiming lport b8c51f74-6990-452a-b5b8-28fc5b51bef8 for this chassis.
Nov 29 07:55:25 compute-0 ovn_controller[153383]: 2025-11-29T07:55:25Z|00104|binding|INFO|b8c51f74-6990-452a-b5b8-28fc5b51bef8: Claiming fa:16:3e:50:be:cd 10.100.0.8
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.827 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:25 compute-0 ovn_controller[153383]: 2025-11-29T07:55:25Z|00105|binding|INFO|Setting lport b8c51f74-6990-452a-b5b8-28fc5b51bef8 ovn-installed in OVS
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.845 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:25 compute-0 nova_compute[256729]: 2025-11-29 07:55:25.848 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:25 compute-0 systemd-udevd[278424]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:55:25 compute-0 systemd-machined[217781]: New machine qemu-9-instance-00000009.
Nov 29 07:55:25 compute-0 NetworkManager[48962]: <info>  [1764402925.8706] device (tapb8c51f74-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:55:25 compute-0 NetworkManager[48962]: <info>  [1764402925.8724] device (tapb8c51f74-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:55:25 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.908 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:be:cd 10.100.0.8'], port_security=['fa:16:3e:50:be:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aede5de4449e445582aa074918be39c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4af7f879-b112-4921-902c-00a76a0cb23b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6a691ab-2be1-4362-9a9a-3c54aabcf5a5, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=b8c51f74-6990-452a-b5b8-28fc5b51bef8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:55:25 compute-0 ovn_controller[153383]: 2025-11-29T07:55:25Z|00106|binding|INFO|Setting lport b8c51f74-6990-452a-b5b8-28fc5b51bef8 up in Southbound
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.909 163655 INFO neutron.agent.ovn.metadata.agent [-] Port b8c51f74-6990-452a-b5b8-28fc5b51bef8 in datapath 5908d283-c1b3-46ec-8e8e-b81d59c13f9a bound to our chassis
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.910 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5908d283-c1b3-46ec-8e8e-b81d59c13f9a
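The "Matched UPDATE: PortBindingUpdatedEvent" line shows the mechanism: the metadata agent watches the OVN Southbound Port_Binding table through an ovsdbapp RowEvent, and when a row's chassis column goes from empty to this chassis it provisions metadata for the datapath. A cut-down sketch in that style (class and method bodies here are illustrative, not neutron's exact code):

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, agent):
        self.agent = agent
        # events=('update',), table='Port_Binding', conditions=None, as logged
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire only when the port was just bound (old chassis empty) to us.
        return (not getattr(old, 'chassis', None) and row.chassis and
                row.chassis[0].name == self.agent.chassis_name)

    def run(self, event, row, old):
        self.agent.provision_datapath(row)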
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.921 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9c6d71-fdff-480a-9858-98d3f771902d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.922 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5908d283-c1 in ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.924 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5908d283-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.924 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[462058fc-7acd-48eb-8266-5b0f16073ced]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.925 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3f06a3fb-d440-4dda-9112-7e3470ce84ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.951 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[9d48b1aa-124b-4867-9ace-5f9dc7f27e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:25.970 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce0afca-b410-4a82-badf-c4bd5a16f742]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.003 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[82336983-f171-4786-9555-532542134ce8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.008 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4e39f175-ca32-46d0-bbd6-969cffa8fe73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 systemd-udevd[278426]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:55:26 compute-0 NetworkManager[48962]: <info>  [1764402926.0103] manager: (tap5908d283-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
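The privsep replies around the "Creating VETH" line are the low-level netlink work: create the ovnmeta-<network> namespace and a veth pair, keep tap5908d283-c0 in the root namespace (to be plugged into br-int below) and move tap5908d283-c1 inside. With pyroute2, which neutron's privileged ip_lib wraps, the equivalent is roughly this (names copied from the log, error handling omitted):

from pyroute2 import IPRoute, netns

ns = 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a'
netns.create(ns)  # the per-network metadata namespace

ip = IPRoute()
# veth pair: -c0 stays in the root namespace, -c1 goes into the namespace
ip.link('add', ifname='tap5908d283-c0', kind='veth', peer='tap5908d283-c1')
inner = ip.link_lookup(ifname='tap5908d283-c1')[0]
ip.link('set', index=inner, net_ns_fd=ns)
ip.link('set', index=ip.link_lookup(ifname='tap5908d283-c0')[0], state='up')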
Nov 29 07:55:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:55:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2604068177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:55:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2604068177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
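These df and osd pool get-quota dispatches from client.openstack are the capacity polls the OpenStack services run against the mon. The same mon commands can be issued from the python rados binding, assuming a reachable cluster and the client.openstack keyring:

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
# Same JSON command bodies as dispatched in the log above.
ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'df', 'format': 'json'}), b'')
ret, out, errs = cluster.mon_command(
    json.dumps({'prefix': 'osd pool get-quota', 'pool': 'volumes',
                'format': 'json'}), b'')
print(json.loads(out))
cluster.shutdown()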
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.054 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[12760de5-f9ef-4175-8ddf-d69ebd8554b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.061 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7d44b3aa-d526-47b8-ab79-afad91b5e8c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 NetworkManager[48962]: <info>  [1764402926.0826] device (tap5908d283-c0): carrier: link connected
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.086 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[67e3506b-d820-410b-8988-7a424d16b6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.105 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[74880114-1e51-428b-8713-dfb15a4639f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5908d283-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:cf:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524114, 'reachable_time': 35460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278457, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.122 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[36e9dc53-044f-439b-9fc7-b0bbc15e344f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:cfee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524114, 'tstamp': 524114}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278458, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.142 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[554f7d58-8f5a-4723-b1f5-19cdc4cfcc6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5908d283-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:cf:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524114, 'reachable_time': 35460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278459, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.178 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6ca1e9-af7c-4b15-94a3-f6122136e1da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 sshd-session[278286]: Connection closed by authenticating user root 143.14.121.41 port 40702 [preauth]
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.232 256736 DEBUG nova.compute.manager [req-b8171bd9-b353-490c-aa5e-3bc54bf1c7cf req-657ef7b9-52aa-4c65-be8d-1a7acdc9b493 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received event network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.233 256736 DEBUG oslo_concurrency.lockutils [req-b8171bd9-b353-490c-aa5e-3bc54bf1c7cf req-657ef7b9-52aa-4c65-be8d-1a7acdc9b493 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.234 256736 DEBUG oslo_concurrency.lockutils [req-b8171bd9-b353-490c-aa5e-3bc54bf1c7cf req-657ef7b9-52aa-4c65-be8d-1a7acdc9b493 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.235 256736 DEBUG oslo_concurrency.lockutils [req-b8171bd9-b353-490c-aa5e-3bc54bf1c7cf req-657ef7b9-52aa-4c65-be8d-1a7acdc9b493 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.235 256736 DEBUG nova.compute.manager [req-b8171bd9-b353-490c-aa5e-3bc54bf1c7cf req-657ef7b9-52aa-4c65-be8d-1a7acdc9b493 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Processing event network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
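The lock acquire/release pair above is nova serializing access to its per-instance event table: the network-vif-plugged event arriving from neutron (via the external_instance_event API) and the spawning thread waiting on it both go through an oslo.concurrency lock named "<uuid>-events". The pattern, paraphrased rather than copied from nova:

from oslo_concurrency import lockutils

instance_events = {}  # instance uuid -> {event name: waiter}

def pop_instance_event(instance_uuid, event_name):
    # Same lock name scheme as logged: "<uuid>-events"
    with lockutils.lock(f'{instance_uuid}-events'):
        return instance_events.get(instance_uuid, {}).pop(event_name, None)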
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.263 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[95f8366a-5736-44a2-b2f0-aeaf6a6c135c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.265 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5908d283-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.265 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.265 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5908d283-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.267 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:26 compute-0 NetworkManager[48962]: <info>  [1764402926.2679] manager: (tap5908d283-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 07:55:26 compute-0 kernel: tap5908d283-c0: entered promiscuous mode
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.270 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.271 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5908d283-c0, col_values=(('external_ids', {'iface-id': '9b4bf2c3-157d-4772-ab63-bb4e179af153'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.272 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:26 compute-0 ovn_controller[153383]: 2025-11-29T07:55:26Z|00107|binding|INFO|Releasing lport 9b4bf2c3-157d-4772-ab63-bb4e179af153 from this chassis (sb_readonly=0)
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.293 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.295 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.298 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e40ebf18-308b-4cf4-b61b-9e2545c63f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.299 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-5908d283-c1b3-46ec-8e8e-b81d59c13f9a
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.pid.haproxy
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 5908d283-c1b3-46ec-8e8e-b81d59c13f9a
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:55:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:26.300 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'env', 'PROCESS_TAG=haproxy-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
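The config dumped above binds the link-local metadata address 169.254.169.254:80 inside the ovnmeta namespace and forwards requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding an X-OVN-Network-ID header so the agent can resolve which network the request came from; haproxy is then launched under the namespace via rootwrap. The driver renders the config from a template before writing it out; schematically (the real neutron driver differs in detail):

from string import Template

# Template fields mirror the config dumped above; trimmed to the essentials.
HAPROXY_TMPL = Template("""\
global
    log /dev/log local0 debug
    log-tag haproxy-metadata-proxy-$network_id
    user root
    group root
    maxconn 1024
    pidfile $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

cfg = HAPROXY_TMPL.substitute(
    network_id='5908d283-c1b3-46ec-8e8e-b81d59c13f9a',
    pidfile='/var/lib/neutron/external/pids/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.pid.haproxy',
    socket_path='/var/lib/neutron/metadata_proxy')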
Nov 29 07:55:26 compute-0 sudo[278482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:26 compute-0 ceph-mon[75050]: pgmap v1524: 305 pgs: 305 active+clean; 306 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 573 KiB/s wr, 32 op/s
Nov 29 07:55:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2604068177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2604068177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:26 compute-0 sudo[278482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:26 compute-0 sudo[278482]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:26 compute-0 sudo[278513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:26 compute-0 sudo[278513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:26 compute-0 sudo[278513]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:26 compute-0 sudo[278539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:26 compute-0 sudo[278539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:26 compute-0 sudo[278539]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:26 compute-0 sudo[278576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 07:55:26 compute-0 sudo[278576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 320 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 1.0 MiB/s wr, 52 op/s
Nov 29 07:55:26 compute-0 podman[278600]: 2025-11-29 07:55:26.677893722 +0000 UTC m=+0.023795069 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:55:26 compute-0 nova_compute[256729]: 2025-11-29 07:55:26.923 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:26 compute-0 ovn_controller[153383]: 2025-11-29T07:55:26Z|00012|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.7
Nov 29 07:55:26 compute-0 ovn_controller[153383]: 2025-11-29T07:55:26Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b0:84:96 10.100.0.7
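The DHCPREQUEST/DHCPNAK pair above is OVN's built-in DHCP server rejecting a stale lease: the guest at fa:16:3e:b0:84:96 asked for 10.100.0.11, but the port's reserved address is 10.100.0.7, so ovn-controller NAKs and the client restarts with DISCOVER. For reference, a NAK of the same shape can be hand-built with scapy (illustrative only; the server source address is an assumption, the rest is from the log):

from scapy.all import Ether, IP, UDP, BOOTP, DHCP

nak = (Ether(dst='fa:16:3e:b0:84:96') /
       IP(src='10.100.0.1', dst='255.255.255.255') /  # server IP assumed
       UDP(sport=67, dport=68) /
       BOOTP(op=2, chaddr=bytes.fromhex('fa163eb08496')) /
       DHCP(options=[('message-type', 'nak'), 'end']))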
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.075830) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402927075951, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1202, "num_deletes": 262, "total_data_size": 1599819, "memory_usage": 1631248, "flush_reason": "Manual Compaction"}
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402927101487, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1580411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25578, "largest_seqno": 26779, "table_properties": {"data_size": 1574441, "index_size": 3301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13439, "raw_average_key_size": 20, "raw_value_size": 1562239, "raw_average_value_size": 2440, "num_data_blocks": 145, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402839, "oldest_key_time": 1764402839, "file_creation_time": 1764402927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 25718 microseconds, and 5422 cpu microseconds.
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.101565) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1580411 bytes OK
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.101590) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.115744) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.115789) EVENT_LOG_v1 {"time_micros": 1764402927115779, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.115815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1594119, prev total WAL file size 1594119, number of live WAL files 2.
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.116579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1543KB)], [56(8364KB)]
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402927116693, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 10146096, "oldest_snapshot_seqno": -1}
Nov 29 07:55:27 compute-0 podman[278600]: 2025-11-29 07:55:27.119889228 +0000 UTC m=+0.465790535 container create 1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 07:55:27 compute-0 systemd[1]: Started libpod-conmon-1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c.scope.
Nov 29 07:55:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:27 compute-0 sudo[278576]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc7afc8bac0fcca9996eb383dea25d264fcbd45a2c40716405c0408c1c87d27/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5361 keys, 8275463 bytes, temperature: kUnknown
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402927362305, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 8275463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8237803, "index_size": 23150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 135759, "raw_average_key_size": 25, "raw_value_size": 8139234, "raw_average_value_size": 1518, "num_data_blocks": 941, "num_entries": 5361, "num_filter_entries": 5361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764402927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.364 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.365 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402927.3642306, 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.365 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] VM Started (Lifecycle Event)
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.368 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.372 256736 INFO nova.virt.libvirt.driver [-] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Instance spawned successfully.
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.372 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.391 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.397 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.401 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.401 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.402 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.402 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.402 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.403 256736 DEBUG nova.virt.libvirt.driver [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.430 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.430 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402927.365841, 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.430 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] VM Paused (Lifecycle Event)
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.454 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.458 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402927.368024, 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.458 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] VM Resumed (Lifecycle Event)
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.465 256736 INFO nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Took 11.92 seconds to spawn the instance on the hypervisor.
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.466 256736 DEBUG nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.484 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.487 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.511 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.542 256736 INFO nova.compute.manager [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Took 16.73 seconds to build instance.
Nov 29 07:55:27 compute-0 nova_compute[256729]: 2025-11-29 07:55:27.653 256736 DEBUG oslo_concurrency.lockutils [None req-e70a2f7b-a240-4767-aeee-67a6a7258aef c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
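Timing sanity check: the hypervisor spawn took 11.92 s, the whole build 16.73 s, and the build lock was held 16.926 s before being released at 07:55:27.653, so the lock must have been taken around 07:55:10.727; the gap between build and spawn is the pre-spawn work (claim, image prep, network allocation). Derivable from the log's own timestamps:

from datetime import datetime, timedelta

released = datetime.strptime('2025-11-29 07:55:27.653', '%Y-%m-%d %H:%M:%S.%f')
acquired = released - timedelta(seconds=16.926)  # "held 16.926s" from the log
print(acquired.time())  # ~07:55:10.727, the start of _locked_do_build_and_run_instance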
Nov 29 07:55:27 compute-0 podman[278600]: 2025-11-29 07:55:27.702270623 +0000 UTC m=+1.048171960 container init 1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:55:27 compute-0 podman[278600]: 2025-11-29 07:55:27.707668088 +0000 UTC m=+1.053569405 container start 1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:55:27 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[278663]: [NOTICE]   (278674) : New worker (278676) forked
Nov 29 07:55:27 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[278663]: [NOTICE]   (278674) : Loading success.
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.473027) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 8275463 bytes
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.866373) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.3 rd, 33.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.2 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(11.7) write-amplify(5.2) OK, records in: 5894, records dropped: 533 output_compression: NoCompression
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.866409) EVENT_LOG_v1 {"time_micros": 1764402927866397, "job": 30, "event": "compaction_finished", "compaction_time_micros": 245717, "compaction_time_cpu_micros": 23862, "output_level": 6, "num_output_files": 1, "total_output_size": 8275463, "num_input_records": 5894, "num_output_records": 5361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.116447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.866542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.866548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.866549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.866551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:55:27.866552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402927867215, "job": 0, "event": "table_file_deletion", "file_number": 58}
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:55:27 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402927868517, "job": 0, "event": "table_file_deletion", "file_number": 56}
Nov 29 07:55:27 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:55:28 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:28 compute-0 ceph-mon[75050]: pgmap v1525: 305 pgs: 305 active+clean; 320 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 1.0 MiB/s wr, 52 op/s
Nov 29 07:55:28 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:28 compute-0 sudo[278685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:28 compute-0 sudo[278685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:28 compute-0 sudo[278685]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:28 compute-0 nova_compute[256729]: 2025-11-29 07:55:28.326 256736 DEBUG nova.compute.manager [req-553ca469-b1c6-4bf9-aa93-f4596f80535e req-0ce5f3cb-d489-4f93-a0cc-b14b5dbc4bf2 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received event network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:28 compute-0 nova_compute[256729]: 2025-11-29 07:55:28.327 256736 DEBUG oslo_concurrency.lockutils [req-553ca469-b1c6-4bf9-aa93-f4596f80535e req-0ce5f3cb-d489-4f93-a0cc-b14b5dbc4bf2 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:28 compute-0 nova_compute[256729]: 2025-11-29 07:55:28.327 256736 DEBUG oslo_concurrency.lockutils [req-553ca469-b1c6-4bf9-aa93-f4596f80535e req-0ce5f3cb-d489-4f93-a0cc-b14b5dbc4bf2 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:28 compute-0 nova_compute[256729]: 2025-11-29 07:55:28.327 256736 DEBUG oslo_concurrency.lockutils [req-553ca469-b1c6-4bf9-aa93-f4596f80535e req-0ce5f3cb-d489-4f93-a0cc-b14b5dbc4bf2 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:28 compute-0 nova_compute[256729]: 2025-11-29 07:55:28.327 256736 DEBUG nova.compute.manager [req-553ca469-b1c6-4bf9-aa93-f4596f80535e req-0ce5f3cb-d489-4f93-a0cc-b14b5dbc4bf2 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] No waiting events found dispatching network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:55:28 compute-0 nova_compute[256729]: 2025-11-29 07:55:28.328 256736 WARNING nova.compute.manager [req-553ca469-b1c6-4bf9-aa93-f4596f80535e req-0ce5f3cb-d489-4f93-a0cc-b14b5dbc4bf2 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received unexpected event network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 for instance with vm_state active and task_state None.
Nov 29 07:55:28 compute-0 sudo[278710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:28 compute-0 sudo[278710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:28 compute-0 sudo[278710]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:28 compute-0 sudo[278735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:28 compute-0 sudo[278735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:28 compute-0 sudo[278735]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:28 compute-0 sudo[278760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:55:28 compute-0 sudo[278760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 141 op/s
Nov 29 07:55:29 compute-0 sudo[278760]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:55:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:55:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:55:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:55:29 compute-0 nova_compute[256729]: 2025-11-29 07:55:29.709 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 131 op/s
Nov 29 07:55:30 compute-0 sshd-session[278510]: Connection closed by authenticating user root 143.14.121.41 port 40708 [preauth]
Nov 29 07:55:31 compute-0 ovn_controller[153383]: 2025-11-29T07:55:31Z|00014|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.7
Nov 29 07:55:31 compute-0 ovn_controller[153383]: 2025-11-29T07:55:31Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b0:84:96 10.100.0.7
Nov 29 07:55:31 compute-0 nova_compute[256729]: 2025-11-29 07:55:31.925 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:31 compute-0 ovn_controller[153383]: 2025-11-29T07:55:31Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:84:96 10.100.0.7
Nov 29 07:55:31 compute-0 ovn_controller[153383]: 2025-11-29T07:55:31Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:84:96 10.100.0.7
Nov 29 07:55:31 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:31 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev f6cda26f-9399-4c3d-ae95-475e9c503faa does not exist
Nov 29 07:55:31 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev be4b7633-7eb2-48a3-8578-89155e8110a1 does not exist
Nov 29 07:55:31 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e7502b69-db94-4963-9bb5-355646387d39 does not exist
Nov 29 07:55:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:55:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:55:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:55:32 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:55:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:55:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:32 compute-0 sudo[278817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:32 compute-0 sudo[278817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:32 compute-0 sudo[278817]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:32 compute-0 sudo[278843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:32 compute-0 sudo[278843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:32 compute-0 sudo[278843]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:32 compute-0 sudo[278868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:32 compute-0 sudo[278868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:32 compute-0 sudo[278868]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:32 compute-0 nova_compute[256729]: 2025-11-29 07:55:32.626 256736 DEBUG nova.compute.manager [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received event network-changed-b8c51f74-6990-452a-b5b8-28fc5b51bef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:55:32 compute-0 nova_compute[256729]: 2025-11-29 07:55:32.627 256736 DEBUG nova.compute.manager [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Refreshing instance network info cache due to event network-changed-b8c51f74-6990-452a-b5b8-28fc5b51bef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:55:32 compute-0 nova_compute[256729]: 2025-11-29 07:55:32.627 256736 DEBUG oslo_concurrency.lockutils [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:55:32 compute-0 nova_compute[256729]: 2025-11-29 07:55:32.627 256736 DEBUG oslo_concurrency.lockutils [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:55:32 compute-0 nova_compute[256729]: 2025-11-29 07:55:32.628 256736 DEBUG nova.network.neutron [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Refreshing network info cache for port b8c51f74-6990-452a-b5b8-28fc5b51bef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:55:32 compute-0 sudo[278893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:55:32 compute-0 sudo[278893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.3 MiB/s wr, 146 op/s
Nov 29 07:55:33 compute-0 podman[278958]: 2025-11-29 07:55:33.065593355 +0000 UTC m=+0.045112498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:33 compute-0 ceph-mon[75050]: pgmap v1526: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 141 op/s
Nov 29 07:55:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:33 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:55:33 compute-0 podman[278958]: 2025-11-29 07:55:33.391859266 +0000 UTC m=+0.371378359 container create cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lovelace, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 07:55:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:55:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271031029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:55:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2271031029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:33 compute-0 systemd[1]: Started libpod-conmon-cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6.scope.
Nov 29 07:55:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:34 compute-0 nova_compute[256729]: 2025-11-29 07:55:34.295 256736 DEBUG nova.network.neutron [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Updated VIF entry in instance network info cache for port b8c51f74-6990-452a-b5b8-28fc5b51bef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:55:34 compute-0 nova_compute[256729]: 2025-11-29 07:55:34.296 256736 DEBUG nova.network.neutron [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Updating instance_info_cache with network_info: [{"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:55:34 compute-0 nova_compute[256729]: 2025-11-29 07:55:34.392 256736 DEBUG oslo_concurrency.lockutils [req-1c4bfb25-bb6a-4149-a06f-531680c28441 req-11906f34-51a4-4cd8-b596-d5c98f28d2fd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:55:34 compute-0 ceph-mon[75050]: pgmap v1527: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 131 op/s
Nov 29 07:55:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:55:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:55:34 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:55:34 compute-0 ceph-mon[75050]: pgmap v1528: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.3 MiB/s wr, 146 op/s
Nov 29 07:55:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2271031029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2271031029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:34 compute-0 podman[278958]: 2025-11-29 07:55:34.484428263 +0000 UTC m=+1.463947446 container init cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 07:55:34 compute-0 podman[278958]: 2025-11-29 07:55:34.493291311 +0000 UTC m=+1.472810374 container start cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:55:34 compute-0 determined_lovelace[278975]: 167 167
Nov 29 07:55:34 compute-0 systemd[1]: libpod-cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6.scope: Deactivated successfully.
Nov 29 07:55:34 compute-0 podman[278958]: 2025-11-29 07:55:34.598905626 +0000 UTC m=+1.578424779 container attach cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lovelace, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 07:55:34 compute-0 podman[278958]: 2025-11-29 07:55:34.59975305 +0000 UTC m=+1.579272123 container died cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:55:34 compute-0 nova_compute[256729]: 2025-11-29 07:55:34.712 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Nov 29 07:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d30abd531ef18dd07b7fc5fc2594d04b1b4c92a9a400b9af00463f0e2d92071-merged.mount: Deactivated successfully.
Nov 29 07:55:35 compute-0 podman[278958]: 2025-11-29 07:55:35.214105779 +0000 UTC m=+2.193624842 container remove cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:55:35 compute-0 systemd[1]: libpod-conmon-cbeb276b68185aecae7073490b1176cfb33f480aab962263272ce77f823ec9e6.scope: Deactivated successfully.
Nov 29 07:55:35 compute-0 podman[279001]: 2025-11-29 07:55:35.399286065 +0000 UTC m=+0.023543981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:55:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:55:35 compute-0 podman[279001]: 2025-11-29 07:55:35.63415297 +0000 UTC m=+0.258410866 container create 642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:55:35 compute-0 systemd[1]: Started libpod-conmon-642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f.scope.
Nov 29 07:55:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0643aa1cdfeba596bea30ab0a898e6fe13f06b9a2f033ba88eb24844508b057e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0643aa1cdfeba596bea30ab0a898e6fe13f06b9a2f033ba88eb24844508b057e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0643aa1cdfeba596bea30ab0a898e6fe13f06b9a2f033ba88eb24844508b057e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0643aa1cdfeba596bea30ab0a898e6fe13f06b9a2f033ba88eb24844508b057e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0643aa1cdfeba596bea30ab0a898e6fe13f06b9a2f033ba88eb24844508b057e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:36 compute-0 sshd-session[278816]: Connection closed by authenticating user root 143.14.121.41 port 40718 [preauth]
Nov 29 07:55:36 compute-0 podman[279001]: 2025-11-29 07:55:36.128493878 +0000 UTC m=+0.752751824 container init 642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_albattani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 07:55:36 compute-0 podman[279001]: 2025-11-29 07:55:36.139164924 +0000 UTC m=+0.763422860 container start 642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:55:36 compute-0 podman[279001]: 2025-11-29 07:55:36.350313333 +0000 UTC m=+0.974571279 container attach 642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:55:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 29 07:55:36 compute-0 nova_compute[256729]: 2025-11-29 07:55:36.973 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:37 compute-0 nervous_albattani[279019]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:55:37 compute-0 nervous_albattani[279019]: --> relative data size: 1.0
Nov 29 07:55:37 compute-0 nervous_albattani[279019]: --> All data devices are unavailable
Nov 29 07:55:37 compute-0 systemd[1]: libpod-642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f.scope: Deactivated successfully.
Nov 29 07:55:37 compute-0 systemd[1]: libpod-642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f.scope: Consumed 1.053s CPU time.
Nov 29 07:55:37 compute-0 podman[279001]: 2025-11-29 07:55:37.299648598 +0000 UTC m=+1.923906524 container died 642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_albattani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:55:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:37 compute-0 ceph-mon[75050]: pgmap v1529: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Nov 29 07:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0643aa1cdfeba596bea30ab0a898e6fe13f06b9a2f033ba88eb24844508b057e-merged.mount: Deactivated successfully.
Nov 29 07:55:38 compute-0 podman[279001]: 2025-11-29 07:55:38.186776647 +0000 UTC m=+2.811034553 container remove 642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_albattani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:55:38 compute-0 sudo[278893]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:38 compute-0 systemd[1]: libpod-conmon-642a1da976871bef8c0f5a95a7003ed9d320c650ef64367a1d6bb0a7a9a22f6f.scope: Deactivated successfully.
Nov 29 07:55:38 compute-0 sudo[279063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:38 compute-0 sudo[279063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:38 compute-0 sudo[279063]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:38 compute-0 sudo[279088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:38 compute-0 sudo[279088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:38 compute-0 sudo[279088]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:38 compute-0 sudo[279113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:38 compute-0 sudo[279113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:38 compute-0 sudo[279113]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:38 compute-0 sudo[279138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:55:38 compute-0 sudo[279138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 359 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 159 op/s
Nov 29 07:55:39 compute-0 podman[279204]: 2025-11-29 07:55:38.99514171 +0000 UTC m=+0.036891059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 29 07:55:39 compute-0 nova_compute[256729]: 2025-11-29 07:55:39.714 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:40 compute-0 podman[279204]: 2025-11-29 07:55:40.14685844 +0000 UTC m=+1.188607799 container create 413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:55:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 359 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 48 KiB/s wr, 66 op/s
Nov 29 07:55:40 compute-0 ceph-mon[75050]: pgmap v1530: 305 pgs: 305 active+clean; 355 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Nov 29 07:55:41 compute-0 sshd-session[279024]: Connection closed by authenticating user root 143.14.121.41 port 52920 [preauth]
Nov 29 07:55:41 compute-0 systemd[1]: Started libpod-conmon-413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719.scope.
Nov 29 07:55:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 07:55:41 compute-0 nova_compute[256729]: 2025-11-29 07:55:41.975 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 359 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 48 KiB/s wr, 66 op/s
Nov 29 07:55:43 compute-0 podman[279204]: 2025-11-29 07:55:43.010463879 +0000 UTC m=+4.052213298 container init 413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:55:43 compute-0 podman[279204]: 2025-11-29 07:55:43.025160782 +0000 UTC m=+4.066910112 container start 413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:55:43 compute-0 festive_galileo[279220]: 167 167
Nov 29 07:55:43 compute-0 systemd[1]: libpod-413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719.scope: Deactivated successfully.
Nov 29 07:55:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 29 07:55:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:44 compute-0 podman[279204]: 2025-11-29 07:55:44.030199658 +0000 UTC m=+5.071949007 container attach 413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:55:44 compute-0 podman[279204]: 2025-11-29 07:55:44.032569781 +0000 UTC m=+5.074319120 container died 413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 07:55:44 compute-0 ceph-mon[75050]: pgmap v1531: 305 pgs: 305 active+clean; 359 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 159 op/s
Nov 29 07:55:44 compute-0 ceph-mon[75050]: pgmap v1532: 305 pgs: 305 active+clean; 359 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 48 KiB/s wr, 66 op/s
Nov 29 07:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5263dd1991dc45cdaf9338b8e11c67a9057e1aba78b4e2e17354c467ea2fa075-merged.mount: Deactivated successfully.
Nov 29 07:55:44 compute-0 podman[279204]: 2025-11-29 07:55:44.269321306 +0000 UTC m=+5.311070625 container remove 413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 07:55:44 compute-0 systemd[1]: libpod-conmon-413b9c169b0c333a18fcaa85ef82d553769171ff70edf567fe95dc360519b719.scope: Deactivated successfully.
Nov 29 07:55:44 compute-0 podman[279250]: 2025-11-29 07:55:44.518178196 +0000 UTC m=+0.098318652 container create 295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:55:44 compute-0 podman[279250]: 2025-11-29 07:55:44.445424559 +0000 UTC m=+0.025565035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:44 compute-0 systemd[1]: Started libpod-conmon-295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02.scope.
Nov 29 07:55:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9438de96747844587cd2d4b9c7d77eb6e3507d300ae14646a79d45313c949808/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9438de96747844587cd2d4b9c7d77eb6e3507d300ae14646a79d45313c949808/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9438de96747844587cd2d4b9c7d77eb6e3507d300ae14646a79d45313c949808/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9438de96747844587cd2d4b9c7d77eb6e3507d300ae14646a79d45313c949808/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:44 compute-0 podman[279250]: 2025-11-29 07:55:44.657136394 +0000 UTC m=+0.237276880 container init 295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:55:44 compute-0 podman[279250]: 2025-11-29 07:55:44.665388445 +0000 UTC m=+0.245528901 container start 295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:55:44 compute-0 podman[279250]: 2025-11-29 07:55:44.668784686 +0000 UTC m=+0.248925162 container attach 295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:55:44 compute-0 nova_compute[256729]: 2025-11-29 07:55:44.719 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 361 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 172 KiB/s wr, 17 op/s
Nov 29 07:55:45 compute-0 ceph-mon[75050]: pgmap v1533: 305 pgs: 305 active+clean; 359 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 48 KiB/s wr, 66 op/s
Nov 29 07:55:45 compute-0 ceph-mon[75050]: osdmap e235: 3 total, 3 up, 3 in
Nov 29 07:55:45 compute-0 epic_cartwright[279266]: {
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:     "0": [
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:         {
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "devices": [
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "/dev/loop3"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             ],
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_name": "ceph_lv0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_size": "21470642176",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "name": "ceph_lv0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "tags": {
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cluster_name": "ceph",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.crush_device_class": "",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.encrypted": "0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osd_id": "0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.type": "block",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.vdo": "0"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             },
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "type": "block",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "vg_name": "ceph_vg0"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:         }
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:     ],
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:     "1": [
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:         {
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "devices": [
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "/dev/loop4"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             ],
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_name": "ceph_lv1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_size": "21470642176",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "name": "ceph_lv1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "tags": {
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cluster_name": "ceph",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.crush_device_class": "",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.encrypted": "0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osd_id": "1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.type": "block",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.vdo": "0"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             },
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "type": "block",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "vg_name": "ceph_vg1"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:         }
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:     ],
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:     "2": [
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:         {
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "devices": [
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "/dev/loop5"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             ],
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_name": "ceph_lv2",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_size": "21470642176",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "name": "ceph_lv2",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "tags": {
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.cluster_name": "ceph",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.crush_device_class": "",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.encrypted": "0",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osd_id": "2",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.type": "block",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:                 "ceph.vdo": "0"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             },
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "type": "block",
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:             "vg_name": "ceph_vg2"
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:         }
Nov 29 07:55:45 compute-0 epic_cartwright[279266]:     ]
Nov 29 07:55:45 compute-0 epic_cartwright[279266]: }
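
The JSON document that epic_cartwright just printed is the output of `ceph-volume lvm list --format json`, run by cephadm inside the short-lived container started at 07:55:44: a map of OSD id to the logical volumes backing it, with the cluster and OSD identity duplicated between the flat `lv_tags` string and the parsed `tags` object. A minimal parsing sketch, assuming the output above has been captured to a file named lvm_list.json (that filename is illustrative, not something cephadm writes):

    # Minimal sketch: walk the `ceph-volume lvm list` JSON captured above.
    import json

    with open("lvm_list.json") as f:
        lvm_list = json.load(f)  # {"0": [ {lv record} ], "1": [...], "2": [...]}

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} encrypted={tags['ceph.encrypted']}")
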
Nov 29 07:55:45 compute-0 systemd[1]: libpod-295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02.scope: Deactivated successfully.
Nov 29 07:55:45 compute-0 podman[279250]: 2025-11-29 07:55:45.541590432 +0000 UTC m=+1.121730888 container died 295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:55:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9438de96747844587cd2d4b9c7d77eb6e3507d300ae14646a79d45313c949808-merged.mount: Deactivated successfully.
Nov 29 07:55:45 compute-0 podman[279250]: 2025-11-29 07:55:45.763412228 +0000 UTC m=+1.343552684 container remove 295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:55:45 compute-0 systemd[1]: libpod-conmon-295cabb6be4679a3b96974c155a6b341a37e6f695b5677be8e82b5eda0581d02.scope: Deactivated successfully.
Nov 29 07:55:45 compute-0 sudo[279138]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:45 compute-0 ovn_controller[153383]: 2025-11-29T07:55:45Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:be:cd 10.100.0.8
Nov 29 07:55:45 compute-0 ovn_controller[153383]: 2025-11-29T07:55:45Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:be:cd 10.100.0.8
Nov 29 07:55:45 compute-0 sudo[279288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:45 compute-0 sudo[279288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:45 compute-0 sudo[279288]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:45 compute-0 sudo[279313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:55:45 compute-0 sudo[279313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:45 compute-0 sudo[279313]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:45 compute-0 sudo[279338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:45 compute-0 sudo[279338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:45 compute-0 sudo[279338]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:46 compute-0 sudo[279363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:55:46 compute-0 sudo[279363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
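
The sudo command above shows how the cephadm orchestrator collects that inventory: the mgr ships a copy of the cephadm binary to /var/lib/ceph/<fsid>/ on each host and runs it against a pinned image digest, and the `ceph-volume ... -- raw list --format json` subcommand is what spawns the flamboyant_roentgen and compassionate_driscoll containers that follow. A rough Python equivalent of the call, with every value taken from the log line itself (simplified sketch: no SSH transport, no error handling):

    # Sketch of the cephadm ceph-volume call in the sudo line above (assumes
    # passwordless sudo and that the cephadm copy already exists at this path).
    import json
    import subprocess

    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    CEPHADM = ("/var/lib/ceph/" + FSID + "/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    raw_list = json.loads(out)
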
Nov 29 07:55:46 compute-0 sshd-session[279224]: Connection closed by authenticating user root 143.14.121.41 port 52936 [preauth]
Nov 29 07:55:46 compute-0 ceph-mon[75050]: pgmap v1535: 305 pgs: 305 active+clean; 361 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 172 KiB/s wr, 17 op/s
Nov 29 07:55:46 compute-0 podman[279428]: 2025-11-29 07:55:46.430285254 +0000 UTC m=+0.055179008 container create e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:55:46 compute-0 systemd[1]: Started libpod-conmon-e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a.scope.
Nov 29 07:55:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:46 compute-0 podman[279428]: 2025-11-29 07:55:46.409032795 +0000 UTC m=+0.033926639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:46 compute-0 podman[279428]: 2025-11-29 07:55:46.514084726 +0000 UTC m=+0.138978530 container init e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_roentgen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 07:55:46 compute-0 podman[279428]: 2025-11-29 07:55:46.521039242 +0000 UTC m=+0.145933026 container start e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:55:46 compute-0 podman[279428]: 2025-11-29 07:55:46.524411022 +0000 UTC m=+0.149304806 container attach e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:55:46 compute-0 flamboyant_roentgen[279445]: 167 167
Nov 29 07:55:46 compute-0 systemd[1]: libpod-e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a.scope: Deactivated successfully.
Nov 29 07:55:46 compute-0 conmon[279445]: conmon e9e25b0326dfb0022caa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a.scope/container/memory.events
Nov 29 07:55:46 compute-0 podman[279428]: 2025-11-29 07:55:46.530152635 +0000 UTC m=+0.155046419 container died e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 29 07:55:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:55:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3009254711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:55:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3009254711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
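
The two audit entries above are the periodic capacity poll that the RBD client on 192.168.122.10 issues as client.openstack: a cluster-wide `df` followed by a quota lookup on the volumes pool. Both mon commands can be reproduced from any host holding that keyring; a minimal sketch (the helper name mon_json is illustrative):

    # Sketch of the two mon commands audited above, assuming a ceph CLI
    # configured with the client.openstack id used throughout this log.
    import json
    import subprocess

    def mon_json(*args):
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = mon_json("df")                                      # {"prefix":"df"}
    quota = mon_json("osd", "pool", "get-quota", "volumes")  # per-pool quota
    print(df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])
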
Nov 29 07:55:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eca075b707baf789a3fe825fea8722069ae70b84742bcc2054147f80034c10f-merged.mount: Deactivated successfully.
Nov 29 07:55:46 compute-0 podman[279428]: 2025-11-29 07:55:46.66819939 +0000 UTC m=+0.293093184 container remove e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:55:46 compute-0 systemd[1]: libpod-conmon-e9e25b0326dfb0022caac2afe2fa232f10c87a4fef7e9c65db1aee15daca9a0a.scope: Deactivated successfully.
Nov 29 07:55:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 366 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 978 KiB/s wr, 35 op/s
Nov 29 07:55:46 compute-0 podman[279469]: 2025-11-29 07:55:46.854170317 +0000 UTC m=+0.044806120 container create 40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:55:46 compute-0 systemd[1]: Started libpod-conmon-40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b.scope.
Nov 29 07:55:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:55:46 compute-0 podman[279469]: 2025-11-29 07:55:46.833838493 +0000 UTC m=+0.024474316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53945bb328bda8b61902115fe09ea6b67703fa7249343a15941a11a88739644/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53945bb328bda8b61902115fe09ea6b67703fa7249343a15941a11a88739644/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53945bb328bda8b61902115fe09ea6b67703fa7249343a15941a11a88739644/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53945bb328bda8b61902115fe09ea6b67703fa7249343a15941a11a88739644/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:55:46 compute-0 podman[279469]: 2025-11-29 07:55:46.945080519 +0000 UTC m=+0.135716342 container init 40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 07:55:46 compute-0 podman[279469]: 2025-11-29 07:55:46.951162952 +0000 UTC m=+0.141798745 container start 40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 07:55:46 compute-0 podman[279469]: 2025-11-29 07:55:46.956003621 +0000 UTC m=+0.146639424 container attach 40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:55:46 compute-0 nova_compute[256729]: 2025-11-29 07:55:46.978 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3009254711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3009254711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]: {
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "osd_id": 2,
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "type": "bluestore"
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:     },
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "osd_id": 1,
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "type": "bluestore"
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:     },
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "osd_id": 0,
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:         "type": "bluestore"
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]:     }
Nov 29 07:55:47 compute-0 compassionate_driscoll[279486]: }
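
This second JSON document is `ceph-volume raw list --format json` from compassionate_driscoll: the same three bluestore OSDs, but keyed by osd_uuid and reported against their /dev/mapper device paths rather than LVM metadata. The osd_fsid tags in the earlier lvm listing carry the same UUIDs, so the two views join cleanly; a cross-check sketch, assuming both documents were saved to the illustrative files lvm_list.json and raw_list.json:

    # Sketch joining the two ceph-volume listings printed in this log; the
    # ceph.osd_fsid LVM tag equals the osd_uuid key of the raw listing.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)   # first JSON block, keyed by OSD id
    with open("raw_list.json") as f:
        raw = json.load(f)   # this block, keyed by OSD uuid

    by_uuid = {lv["tags"]["ceph.osd_fsid"]: lv
               for lvs in lvm.values() for lv in lvs}
    for uuid, osd in raw.items():
        lv = by_uuid[uuid]
        assert osd["osd_id"] == int(lv["tags"]["ceph.osd_id"])
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']}) "
              f"<- LV {lv['lv_path']}")
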
Nov 29 07:55:47 compute-0 systemd[1]: libpod-40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b.scope: Deactivated successfully.
Nov 29 07:55:47 compute-0 podman[279469]: 2025-11-29 07:55:47.972937324 +0000 UTC m=+1.163573127 container died 40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:55:47 compute-0 systemd[1]: libpod-40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b.scope: Consumed 1.013s CPU time.
Nov 29 07:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d53945bb328bda8b61902115fe09ea6b67703fa7249343a15941a11a88739644-merged.mount: Deactivated successfully.
Nov 29 07:55:48 compute-0 ceph-mon[75050]: pgmap v1536: 305 pgs: 305 active+clean; 366 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 978 KiB/s wr, 35 op/s
Nov 29 07:55:48 compute-0 podman[279469]: 2025-11-29 07:55:48.595855564 +0000 UTC m=+1.786491407 container remove 40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 07:55:48 compute-0 systemd[1]: libpod-conmon-40fa72a1565cb6f820bbcfb7871506745afc1fb022cb7edb8b1c9e2f50945d5b.scope: Deactivated successfully.
Nov 29 07:55:48 compute-0 sudo[279363]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:55:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:55:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:48 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e1212f8c-27c2-4b54-949f-95bf9a917409 does not exist
Nov 29 07:55:48 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e958d0dd-5f9e-4423-8b73-4d414fbb42ca does not exist
Nov 29 07:55:48 compute-0 sudo[279533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:55:48 compute-0 sudo[279533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:48 compute-0 sudo[279533]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 393 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 582 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Nov 29 07:55:48 compute-0 sudo[279558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:55:48 compute-0 sudo[279558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:55:48 compute-0 sudo[279558]: pam_unix(sudo:session): session closed for user root
Nov 29 07:55:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:48 compute-0 nova_compute[256729]: 2025-11-29 07:55:48.966 256736 DEBUG oslo_concurrency.lockutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:48 compute-0 nova_compute[256729]: 2025-11-29 07:55:48.967 256736 DEBUG oslo_concurrency.lockutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:49 compute-0 sshd-session[279444]: Connection closed by authenticating user root 143.14.121.41 port 41492 [preauth]
Nov 29 07:55:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:55:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1621767692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:55:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1621767692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:49 compute-0 nova_compute[256729]: 2025-11-29 07:55:49.287 256736 DEBUG nova.objects.instance [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'flavor' on Instance uuid a157d150-bd1c-4f7b-8068-764a8f3af802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:55:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1621767692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:55:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1621767692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:55:49 compute-0 nova_compute[256729]: 2025-11-29 07:55:49.723 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:50 compute-0 nova_compute[256729]: 2025-11-29 07:55:50.198 256736 DEBUG oslo_concurrency.lockutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 1.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:50 compute-0 ceph-mon[75050]: pgmap v1537: 305 pgs: 305 active+clean; 393 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 582 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Nov 29 07:55:50 compute-0 nova_compute[256729]: 2025-11-29 07:55:50.766 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 393 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 582 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Nov 29 07:55:51 compute-0 nova_compute[256729]: 2025-11-29 07:55:51.980 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:51 compute-0 ceph-mon[75050]: pgmap v1538: 305 pgs: 305 active+clean; 393 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 582 KiB/s rd, 2.8 MiB/s wr, 112 op/s
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.748 256736 DEBUG oslo_concurrency.lockutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.748 256736 DEBUG oslo_concurrency.lockutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.749 256736 INFO nova.compute.manager [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Attaching volume 1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6 to /dev/vdb
Nov 29 07:55:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 598 KiB/s rd, 2.8 MiB/s wr, 125 op/s
Nov 29 07:55:52 compute-0 sshd-session[279583]: Connection closed by authenticating user root 143.14.121.41 port 41508 [preauth]
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.969 256736 DEBUG os_brick.utils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.971 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.983 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.983 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8ada37-713d-46b5-8ebf-a4ccfbd661a7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.985 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.995 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.996 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[adf40e5d-f9b0-45f1-bad2-1ca6c6f1bfb7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:52 compute-0 nova_compute[256729]: 2025-11-29 07:55:52.997 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.007 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.007 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[9da9f079-00cf-4e79-a0b3-e71ac455b7fd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.008 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[ef089193-597d-475b-9cae-a4b16abea2af]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.009 256736 DEBUG oslo_concurrency.processutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.032 256736 DEBUG oslo_concurrency.processutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.034 256736 DEBUG os_brick.initiator.connectors.lightos [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.034 256736 DEBUG os_brick.initiator.connectors.lightos [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.035 256736 DEBUG os_brick.initiator.connectors.lightos [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.035 256736 DEBUG os_brick.utils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
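
The dict logged by os_brick.utils above is the connector-properties bundle nova passes to Cinder when it updates the volume attachment a few lines later: it advertises this host's iSCSI initiator name, NVMe host NQN/ID, and multipath support so the backend can choose a transport. The same bundle can be fetched directly through os-brick's public API, using exactly the arguments from the ==> trace line (a sketch; it needs enough privilege for the rootwrap helper to work):

    # Sketch of nova's get_connector_properties call traced above, via the
    # public os-brick API with the same arguments.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    print(props["initiator"], props["nqn"], props["multipath"])
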
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.035 256736 DEBUG nova.virt.block_device [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updating existing volume attachment record: b17e2d8d-6da0-491b-ab6c-93ca0f876658 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:55:53 compute-0 nova_compute[256729]: 2025-11-29 07:55:53.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:55:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591790693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:53 compute-0 podman[279594]: 2025-11-29 07:55:53.7553911 +0000 UTC m=+0.107289931 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:55:53 compute-0 podman[279595]: 2025-11-29 07:55:53.77145241 +0000 UTC m=+0.123324641 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:55:53 compute-0 podman[279593]: 2025-11-29 07:55:53.791927468 +0000 UTC m=+0.144013954 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
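
The three health_status entries above come from the per-container podman healthcheck timers that edpm_ansible configures: each config_data blob mounts /var/lib/openstack/healthchecks/<service> into the container and declares /openstack/healthcheck as the test command, and health_status=healthy with health_failing_streak=0 means the probe keeps passing. The same probe can be triggered by hand; a sketch:

    # Sketch: run the same healthcheck the systemd timers fire (assumes a
    # podman version with `podman healthcheck run`, executed on this host).
    import subprocess

    for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        r = subprocess.run(["sudo", "podman", "healthcheck", "run", name])
        # exit status 0 means healthy, matching health_status=healthy above
        print(name, "healthy" if r.returncode == 0 else f"unhealthy rc={r.returncode}")
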
Nov 29 07:55:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:54 compute-0 ceph-mon[75050]: pgmap v1539: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 598 KiB/s rd, 2.8 MiB/s wr, 125 op/s
Nov 29 07:55:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1591790693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:55:54 compute-0 nova_compute[256729]: 2025-11-29 07:55:54.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:54 compute-0 nova_compute[256729]: 2025-11-29 07:55:54.251 256736 DEBUG nova.objects.instance [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'flavor' on Instance uuid a157d150-bd1c-4f7b-8068-764a8f3af802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:55:54 compute-0 nova_compute[256729]: 2025-11-29 07:55:54.308 256736 DEBUG nova.virt.libvirt.driver [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Attempting to attach volume 1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:55:54 compute-0 nova_compute[256729]: 2025-11-29 07:55:54.312 256736 DEBUG nova.virt.libvirt.guest [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:55:54 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:55:54 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6">
Nov 29 07:55:54 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:55:54 compute-0 nova_compute[256729]:   </source>
Nov 29 07:55:54 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:55:54 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:55:54 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:55:54 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:55:54 compute-0 nova_compute[256729]:   <serial>1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6</serial>
Nov 29 07:55:54 compute-0 nova_compute[256729]: </disk>
Nov 29 07:55:54 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
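
The XML above is the complete device definition nova hands to libvirt for volume 1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6: qemu's built-in rbd driver talks directly to the mon at 192.168.122.100:6789, authenticating as openstack via the libvirt secret registered under the cluster fsid, and exposes the image as virtio disk vdb (which is why the driver warned at 07:55:54.308 that discard/trim cannot be honoured on this bus). A minimal hot-attach sketch with libvirt-python, assuming, as nova arranges, that the libvirt domain UUID equals the instance UUID:

    # Sketch of the attach nova performs here, reusing the disk XML from the
    # log; requires libvirt-python and access to the system libvirtd.
    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
      <serial>1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6</serial>
    </disk>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("a157d150-bd1c-4f7b-8068-764a8f3af802")
    dom.attachDeviceFlags(
        DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
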
Nov 29 07:55:54 compute-0 nova_compute[256729]: 2025-11-29 07:55:54.726 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 522 KiB/s rd, 2.4 MiB/s wr, 114 op/s
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.157 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.716 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.716 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.717 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.717 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.717 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.746 256736 DEBUG nova.virt.libvirt.driver [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.747 256736 DEBUG nova.virt.libvirt.driver [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.747 256736 DEBUG nova.virt.libvirt.driver [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:55:55 compute-0 nova_compute[256729]: 2025-11-29 07:55:55.748 256736 DEBUG nova.virt.libvirt.driver [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] No VIF found with MAC fa:16:3e:b0:84:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:55:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686094794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:56 compute-0 nova_compute[256729]: 2025-11-29 07:55:56.170 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:56 compute-0 ceph-mon[75050]: pgmap v1540: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 522 KiB/s rd, 2.4 MiB/s wr, 114 op/s
Nov 29 07:55:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3686094794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Nov 29 07:55:56 compute-0 nova_compute[256729]: 2025-11-29 07:55:56.983 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:57 compute-0 sshd-session[279592]: Connection closed by authenticating user root 143.14.121.41 port 41250 [preauth]
Nov 29 07:55:57 compute-0 nova_compute[256729]: 2025-11-29 07:55:57.873 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:57 compute-0 nova_compute[256729]: 2025-11-29 07:55:57.875 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:57 compute-0 nova_compute[256729]: 2025-11-29 07:55:57.880 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:57 compute-0 nova_compute[256729]: 2025-11-29 07:55:57.881 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:57 compute-0 nova_compute[256729]: 2025-11-29 07:55:57.885 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:57 compute-0 nova_compute[256729]: 2025-11-29 07:55:57.885 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:57 compute-0 nova_compute[256729]: 2025-11-29 07:55:57.885 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.061 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.063 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3866MB free_disk=59.88859176635742GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.063 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.064 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.187 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 8c45989b-e06e-4bd4-9961-e7756223b869 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.188 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance a157d150-bd1c-4f7b-8068-764a8f3af802 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.188 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.188 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.188 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:55:58 compute-0 ceph-mon[75050]: pgmap v1541: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 480 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.285 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.635 256736 DEBUG oslo_concurrency.lockutils [None req-06bfa7f2-e0c9-48e1-b917-e99d42ff820e 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:55:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949667500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.725 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:55:58 compute-0 nova_compute[256729]: 2025-11-29 07:55:58.732 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:55:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 1.6 MiB/s wr, 96 op/s
Nov 29 07:55:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:55:59 compute-0 nova_compute[256729]: 2025-11-29 07:55:59.078 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:55:59 compute-0 nova_compute[256729]: 2025-11-29 07:55:59.111 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:55:59 compute-0 nova_compute[256729]: 2025-11-29 07:55:59.111 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:55:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3949667500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:55:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:59.290 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:55:59 compute-0 nova_compute[256729]: 2025-11-29 07:55:59.291 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:59.291 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:55:59 compute-0 nova_compute[256729]: 2025-11-29 07:55:59.728 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:55:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:59.775 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:55:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:59.776 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:55:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:55:59.777 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:00 compute-0 nova_compute[256729]: 2025-11-29 07:56:00.104 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:00 compute-0 nova_compute[256729]: 2025-11-29 07:56:00.105 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:56:00 compute-0 nova_compute[256729]: 2025-11-29 07:56:00.105 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:56:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 21 KiB/s wr, 26 op/s
Nov 29 07:56:01 compute-0 nova_compute[256729]: 2025-11-29 07:56:01.208 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:01 compute-0 nova_compute[256729]: 2025-11-29 07:56:01.210 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:01 compute-0 nova_compute[256729]: 2025-11-29 07:56:01.210 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:56:01 compute-0 nova_compute[256729]: 2025-11-29 07:56:01.210 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c45989b-e06e-4bd4-9961-e7756223b869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:01 compute-0 ceph-mon[75050]: pgmap v1542: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 1.6 MiB/s wr, 96 op/s
Nov 29 07:56:01 compute-0 nova_compute[256729]: 2025-11-29 07:56:01.986 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:02 compute-0 sshd-session[279705]: Connection closed by authenticating user root 143.14.121.41 port 41254 [preauth]
Nov 29 07:56:02 compute-0 ceph-mon[75050]: pgmap v1543: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 21 KiB/s wr, 26 op/s
Nov 29 07:56:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 21 KiB/s wr, 26 op/s
Nov 29 07:56:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:04 compute-0 nova_compute[256729]: 2025-11-29 07:56:04.732 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:04 compute-0 ceph-mon[75050]: pgmap v1544: 305 pgs: 305 active+clean; 395 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 21 KiB/s wr, 26 op/s
Nov 29 07:56:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 397 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 185 KiB/s wr, 21 op/s
Nov 29 07:56:04 compute-0 sshd-session[279729]: Connection closed by authenticating user root 143.14.121.41 port 49004 [preauth]
Nov 29 07:56:05 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:05.293 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:56:05
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Nov 29 07:56:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:56:05 compute-0 ceph-mon[75050]: pgmap v1545: 305 pgs: 305 active+clean; 397 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 393 KiB/s rd, 185 KiB/s wr, 21 op/s
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.679 256736 DEBUG oslo_concurrency.lockutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.680 256736 DEBUG oslo_concurrency.lockutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.714 256736 DEBUG oslo_concurrency.lockutils [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.714 256736 DEBUG oslo_concurrency.lockutils [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.716 256736 DEBUG nova.objects.instance [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'flavor' on Instance uuid 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.741 256736 INFO nova.compute.manager [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Detaching volume 1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.749 256736 INFO nova.virt.libvirt.driver [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Ignoring supplied device name: /dev/vdb
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 397 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 387 KiB/s rd, 173 KiB/s wr, 14 op/s
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.783 256736 DEBUG oslo_concurrency.lockutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.912 256736 INFO nova.virt.block_device [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Attempting to driver detach volume 1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6 from mountpoint /dev/vdb
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.926 256736 DEBUG nova.virt.libvirt.driver [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Attempting to detach device vdb from instance a157d150-bd1c-4f7b-8068-764a8f3af802 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.927 256736 DEBUG nova.virt.libvirt.guest [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:56:06 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:56:06 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6">
Nov 29 07:56:06 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:06 compute-0 nova_compute[256729]:   </source>
Nov 29 07:56:06 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:56:06 compute-0 nova_compute[256729]:   <serial>1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6</serial>
Nov 29 07:56:06 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:56:06 compute-0 nova_compute[256729]: </disk>
Nov 29 07:56:06 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:56:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:56:06 compute-0 nova_compute[256729]: 2025-11-29 07:56:06.988 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.026 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating instance_info_cache with network_info: [{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.046 256736 DEBUG oslo_concurrency.lockutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.046 256736 DEBUG oslo_concurrency.lockutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.047 256736 INFO nova.compute.manager [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Attaching volume 826052bb-c5ae-41d7-b11b-9d81bb72ee1d to /dev/vdb
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.052 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.052 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.053 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.054 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.054 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.054 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.140 256736 INFO nova.virt.libvirt.driver [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully detached device vdb from instance a157d150-bd1c-4f7b-8068-764a8f3af802 from the persistent domain config.
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.141 256736 DEBUG nova.virt.libvirt.driver [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a157d150-bd1c-4f7b-8068-764a8f3af802 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.143 256736 DEBUG nova.virt.libvirt.guest [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:56:07 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:56:07 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6">
Nov 29 07:56:07 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:07 compute-0 nova_compute[256729]:   </source>
Nov 29 07:56:07 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:56:07 compute-0 nova_compute[256729]:   <serial>1a96eb5f-0f39-479a-9bb2-8abfa0c9c8b6</serial>
Nov 29 07:56:07 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:56:07 compute-0 nova_compute[256729]: </disk>
Nov 29 07:56:07 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.193 256736 DEBUG os_brick.utils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.195 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.254 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.255 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[a598be6c-d5c9-4a4b-b7c8-816e8c6e20ec]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.257 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.264 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.264 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[08d5940a-7392-4107-b8ac-e3c7daa2e863]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.266 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.272 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.272 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7f62d7-de21-4465-bd43-6fecd0b5eecf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.273 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[9cadbc3f-139e-4266-a333-dcfcf47f50d5]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.273 256736 DEBUG oslo_concurrency.processutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.299 256736 DEBUG oslo_concurrency.processutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.302 256736 DEBUG os_brick.initiator.connectors.lightos [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.302 256736 DEBUG os_brick.initiator.connectors.lightos [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.303 256736 DEBUG os_brick.initiator.connectors.lightos [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.303 256736 DEBUG os_brick.utils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] <== get_connector_properties: return (108ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.303 256736 DEBUG nova.virt.block_device [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Updating existing volume attachment record: cf6b1cc0-8ae6-47e6-9dc6-aadad236ca9c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.989 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764402967.9889174, a157d150-bd1c-4f7b-8068-764a8f3af802 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.991 256736 DEBUG nova.virt.libvirt.driver [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a157d150-bd1c-4f7b-8068-764a8f3af802 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:56:07 compute-0 nova_compute[256729]: 2025-11-29 07:56:07.996 256736 INFO nova.virt.libvirt.driver [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully detached device vdb from instance a157d150-bd1c-4f7b-8068-764a8f3af802 from the live domain config.
Nov 29 07:56:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:56:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/821053557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.149 256736 DEBUG nova.objects.instance [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'flavor' on Instance uuid 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.189 256736 DEBUG nova.virt.libvirt.driver [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Attempting to attach volume 826052bb-c5ae-41d7-b11b-9d81bb72ee1d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.191 256736 DEBUG nova.virt.libvirt.guest [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:56:08 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:56:08 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-826052bb-c5ae-41d7-b11b-9d81bb72ee1d">
Nov 29 07:56:08 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:08 compute-0 nova_compute[256729]:   </source>
Nov 29 07:56:08 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:56:08 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:56:08 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:56:08 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:56:08 compute-0 nova_compute[256729]:   <serial>826052bb-c5ae-41d7-b11b-9d81bb72ee1d</serial>
Nov 29 07:56:08 compute-0 nova_compute[256729]: </disk>
Nov 29 07:56:08 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.220 256736 DEBUG nova.objects.instance [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'flavor' on Instance uuid a157d150-bd1c-4f7b-8068-764a8f3af802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.278 256736 DEBUG oslo_concurrency.lockutils [None req-abb690ec-34a1-4dd6-92e1-707ab1d1d3bf 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:56:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3926772282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:56:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3926772282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:08 compute-0 ceph-mon[75050]: pgmap v1546: 305 pgs: 305 active+clean; 397 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 387 KiB/s rd, 173 KiB/s wr, 14 op/s
Nov 29 07:56:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/821053557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 397 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 176 KiB/s wr, 13 op/s
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.791 256736 DEBUG nova.virt.libvirt.driver [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.792 256736 DEBUG nova.virt.libvirt.driver [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.792 256736 DEBUG nova.virt.libvirt.driver [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:56:08 compute-0 nova_compute[256729]: 2025-11-29 07:56:08.792 256736 DEBUG nova.virt.libvirt.driver [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No VIF found with MAC fa:16:3e:50:be:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:56:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:09 compute-0 nova_compute[256729]: 2025-11-29 07:56:09.147 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:09 compute-0 nova_compute[256729]: 2025-11-29 07:56:09.350 256736 DEBUG oslo_concurrency.lockutils [None req-634d127b-4f79-4b08-b7ed-e9514e733350 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:09 compute-0 sshd-session[279731]: Connection closed by authenticating user root 143.14.121.41 port 49012 [preauth]
Nov 29 07:56:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3926772282' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3926772282' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:09 compute-0 ceph-mon[75050]: pgmap v1547: 305 pgs: 305 active+clean; 397 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 176 KiB/s wr, 13 op/s
Nov 29 07:56:09 compute-0 nova_compute[256729]: 2025-11-29 07:56:09.744 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.113 256736 DEBUG nova.compute.manager [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-changed-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.113 256736 DEBUG nova.compute.manager [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Refreshing instance network info cache due to event network-changed-436ce809-d7f3-4287-867d-52ea26e65554. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.114 256736 DEBUG oslo_concurrency.lockutils [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.114 256736 DEBUG oslo_concurrency.lockutils [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.114 256736 DEBUG nova.network.neutron [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Refreshing network info cache for port 436ce809-d7f3-4287-867d-52ea26e65554 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.183 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.184 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.185 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.185 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.185 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.186 256736 INFO nova.compute.manager [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Terminating instance
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.187 256736 DEBUG nova.compute.manager [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:56:10 compute-0 kernel: tap436ce809-d7 (unregistering): left promiscuous mode
Nov 29 07:56:10 compute-0 NetworkManager[48962]: <info>  [1764402970.2443] device (tap436ce809-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:56:10 compute-0 ovn_controller[153383]: 2025-11-29T07:56:10Z|00108|binding|INFO|Releasing lport 436ce809-d7f3-4287-867d-52ea26e65554 from this chassis (sb_readonly=0)
Nov 29 07:56:10 compute-0 ovn_controller[153383]: 2025-11-29T07:56:10Z|00109|binding|INFO|Setting lport 436ce809-d7f3-4287-867d-52ea26e65554 down in Southbound
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.251 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 ovn_controller[153383]: 2025-11-29T07:56:10Z|00110|binding|INFO|Removing iface tap436ce809-d7 ovn-installed in OVS
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.253 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.258 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:84:96 10.100.0.7'], port_security=['fa:16:3e:b0:84:96 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a157d150-bd1c-4f7b-8068-764a8f3af802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0aa15e11d9794e608f3aebb38ea3606a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '12be058e-47a2-4b10-9928-e2f6336ca894', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17005c51-b13f-40d9-a999-415174c76777, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=436ce809-d7f3-4287-867d-52ea26e65554) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.259 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 436ce809-d7f3-4287-867d-52ea26e65554 in datapath e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 unbound from our chassis
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.261 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e678432d-7aa3-4fc9-8ccb-76ec3ffbd276
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.268 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.283 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a92ab96e-3d1e-4419-9300-585efa6567a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.317 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[d81c7da4-0f2b-4614-b35a-3921038f823e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:10 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 29 07:56:10 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 18.621s CPU time.
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.320 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad47118-43bc-4147-9956-6e6052d23294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:10 compute-0 systemd-machined[217781]: Machine qemu-8-instance-00000008 terminated.
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.342 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[9fdbc889-f79b-41ec-88f5-3bb68568c96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.363 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd55867-2544-49e8-9b7e-e88c5ad12be7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape678432d-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:f5:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515099, 'reachable_time': 28884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279775, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.378 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8e54e7e5-d8b7-4e20-95bf-1472b4d95bb0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape678432d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515111, 'tstamp': 515111}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279776, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape678432d-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515114, 'tstamp': 515114}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279776, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.380 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape678432d-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.381 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.386 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.386 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape678432d-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.386 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.387 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape678432d-70, col_values=(('external_ids', {'iface-id': '83156f7b-0983-4e0f-a70a-261a0d3fbf52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:10.387 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:56:10 compute-0 kernel: tap436ce809-d7: entered promiscuous mode
Nov 29 07:56:10 compute-0 kernel: tap436ce809-d7 (unregistering): left promiscuous mode
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.420 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.433 256736 INFO nova.virt.libvirt.driver [-] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Instance destroyed successfully.
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.433 256736 DEBUG nova.objects.instance [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'resources' on Instance uuid a157d150-bd1c-4f7b-8068-764a8f3af802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.465 256736 DEBUG nova.virt.libvirt.vif [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1622337894',display_name='tempest-TestStampPattern-server-1622337894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1622337894',id=8,image_ref='8cb507a6-3d3e-48cc-8c73-be72eca3ddaa',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVIcG7iT8EuYRWwvh0xXPSujdlj7uKuKXhamDHlJ4QJb0wGzod0+Qsrv8DmE1TIN7tAAQa46X3+yrMq9A2yMt4mHHy/8wbOvohqcW7H1CuWupyv3Z+eB3t88xUDCWSqKQ==',key_name='tempest-TestStampPattern-886597490',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0aa15e11d9794e608f3aebb38ea3606a',ramdisk_id='',reservation_id='r-7i74n3ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8c45989b-e06e-4bd4-9961-e7756223b869',image_min_disk='1',image_min_ram='0',image_owner_id='0aa15e11d9794e608f3aebb38ea3606a',image_owner_project_name='tempest-TestStampPattern-1135660929',image_owner_user_name='tempest-TestStampPattern-1135660929-project-member',image_user_id='81f071491e4c48c59662c7feba200299',owner_project_name='tempest-TestStampPattern-1135660929',owner_user_name='tempest-TestStampPattern-1135660929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:10Z,user_data=None,user_id='81f071491e4c48c59662c7feba200299',uuid=a157d150-bd1c-4f7b-8068-764a8f3af802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.466 256736 DEBUG nova.network.os_vif_util [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converting VIF {"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.468 256736 DEBUG nova.network.os_vif_util [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:84:96,bridge_name='br-int',has_traffic_filtering=True,id=436ce809-d7f3-4287-867d-52ea26e65554,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap436ce809-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.468 256736 DEBUG os_vif [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:84:96,bridge_name='br-int',has_traffic_filtering=True,id=436ce809-d7f3-4287-867d-52ea26e65554,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap436ce809-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.470 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.470 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap436ce809-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.472 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.473 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.476 256736 INFO os_vif [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:84:96,bridge_name='br-int',has_traffic_filtering=True,id=436ce809-d7f3-4287-867d-52ea26e65554,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap436ce809-d7')
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.634 256736 DEBUG nova.compute.manager [req-2b9b8697-1fe5-434b-a5bb-72cf117f50df req-35a5c478-55aa-4e94-8cc4-734b89759b5f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-vif-unplugged-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.635 256736 DEBUG oslo_concurrency.lockutils [req-2b9b8697-1fe5-434b-a5bb-72cf117f50df req-35a5c478-55aa-4e94-8cc4-734b89759b5f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.635 256736 DEBUG oslo_concurrency.lockutils [req-2b9b8697-1fe5-434b-a5bb-72cf117f50df req-35a5c478-55aa-4e94-8cc4-734b89759b5f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.635 256736 DEBUG oslo_concurrency.lockutils [req-2b9b8697-1fe5-434b-a5bb-72cf117f50df req-35a5c478-55aa-4e94-8cc4-734b89759b5f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.636 256736 DEBUG nova.compute.manager [req-2b9b8697-1fe5-434b-a5bb-72cf117f50df req-35a5c478-55aa-4e94-8cc4-734b89759b5f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] No waiting events found dispatching network-vif-unplugged-436ce809-d7f3-4287-867d-52ea26e65554 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.636 256736 DEBUG nova.compute.manager [req-2b9b8697-1fe5-434b-a5bb-72cf117f50df req-35a5c478-55aa-4e94-8cc4-734b89759b5f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-vif-unplugged-436ce809-d7f3-4287-867d-52ea26e65554 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:56:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 29 07:56:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 07:56:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 29 07:56:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 397 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 458 KiB/s rd, 210 KiB/s wr, 10 op/s
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.904 256736 INFO nova.virt.libvirt.driver [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Deleting instance files /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802_del
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.905 256736 INFO nova.virt.libvirt.driver [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Deletion of /var/lib/nova/instances/a157d150-bd1c-4f7b-8068-764a8f3af802_del complete
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.965 256736 INFO nova.compute.manager [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.966 256736 DEBUG oslo.service.loopingcall [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.966 256736 DEBUG nova.compute.manager [-] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:56:10 compute-0 nova_compute[256729]: 2025-11-29 07:56:10.966 256736 DEBUG nova.network.neutron [-] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:56:11 compute-0 ceph-mon[75050]: osdmap e236: 3 total, 3 up, 3 in
Nov 29 07:56:11 compute-0 ceph-mon[75050]: pgmap v1549: 305 pgs: 305 active+clean; 397 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 458 KiB/s rd, 210 KiB/s wr, 10 op/s
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.033 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.393 256736 DEBUG nova.network.neutron [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updated VIF entry in instance network info cache for port 436ce809-d7f3-4287-867d-52ea26e65554. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.394 256736 DEBUG nova.network.neutron [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updating instance_info_cache with network_info: [{"id": "436ce809-d7f3-4287-867d-52ea26e65554", "address": "fa:16:3e:b0:84:96", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap436ce809-d7", "ovs_interfaceid": "436ce809-d7f3-4287-867d-52ea26e65554", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.424 256736 DEBUG nova.network.neutron [-] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.429 256736 DEBUG oslo_concurrency.lockutils [req-cc5ea779-c4e9-4c23-a414-d01daa9972ce req-54df701d-d9e2-4b52-9371-ca3274b6dbeb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-a157d150-bd1c-4f7b-8068-764a8f3af802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.449 256736 INFO nova.compute.manager [-] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Took 1.48 seconds to deallocate network for instance.
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.524 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.525 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.637 256736 DEBUG oslo_concurrency.processutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 29 07:56:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 07:56:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.744 256736 DEBUG nova.compute.manager [req-e14f5787-ff56-4888-b0e0-f1a7b2f1e22f req-0eab3711-03bc-4f8d-8f2f-32c9cf7c6e9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.745 256736 DEBUG oslo_concurrency.lockutils [req-e14f5787-ff56-4888-b0e0-f1a7b2f1e22f req-0eab3711-03bc-4f8d-8f2f-32c9cf7c6e9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.746 256736 DEBUG oslo_concurrency.lockutils [req-e14f5787-ff56-4888-b0e0-f1a7b2f1e22f req-0eab3711-03bc-4f8d-8f2f-32c9cf7c6e9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.746 256736 DEBUG oslo_concurrency.lockutils [req-e14f5787-ff56-4888-b0e0-f1a7b2f1e22f req-0eab3711-03bc-4f8d-8f2f-32c9cf7c6e9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.747 256736 DEBUG nova.compute.manager [req-e14f5787-ff56-4888-b0e0-f1a7b2f1e22f req-0eab3711-03bc-4f8d-8f2f-32c9cf7c6e9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] No waiting events found dispatching network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.747 256736 WARNING nova.compute.manager [req-e14f5787-ff56-4888-b0e0-f1a7b2f1e22f req-0eab3711-03bc-4f8d-8f2f-32c9cf7c6e9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received unexpected event network-vif-plugged-436ce809-d7f3-4287-867d-52ea26e65554 for instance with vm_state deleted and task_state None.
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.748 256736 DEBUG nova.compute.manager [req-e14f5787-ff56-4888-b0e0-f1a7b2f1e22f req-0eab3711-03bc-4f8d-8f2f-32c9cf7c6e9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Received event network-vif-deleted-436ce809-d7f3-4287-867d-52ea26e65554 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 397 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 6.2 KiB/s wr, 17 op/s
Nov 29 07:56:12 compute-0 sshd-session[279762]: Connection closed by authenticating user root 143.14.121.41 port 49022 [preauth]
Nov 29 07:56:12 compute-0 nova_compute[256729]: 2025-11-29 07:56:12.905 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893399526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:13 compute-0 nova_compute[256729]: 2025-11-29 07:56:13.158 256736 DEBUG oslo_concurrency.processutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:13 compute-0 nova_compute[256729]: 2025-11-29 07:56:13.164 256736 DEBUG nova.compute.provider_tree [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:13 compute-0 nova_compute[256729]: 2025-11-29 07:56:13.195 256736 DEBUG nova.scheduler.client.report [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:56:13 compute-0 nova_compute[256729]: 2025-11-29 07:56:13.228 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:13 compute-0 nova_compute[256729]: 2025-11-29 07:56:13.266 256736 INFO nova.scheduler.client.report [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Deleted allocations for instance a157d150-bd1c-4f7b-8068-764a8f3af802
Nov 29 07:56:13 compute-0 nova_compute[256729]: 2025-11-29 07:56:13.350 256736 DEBUG oslo_concurrency.lockutils [None req-452815f2-f640-47bc-acf0-d95db2557910 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "a157d150-bd1c-4f7b-8068-764a8f3af802" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:13 compute-0 ceph-mon[75050]: osdmap e237: 3 total, 3 up, 3 in
Nov 29 07:56:13 compute-0 ceph-mon[75050]: pgmap v1551: 305 pgs: 305 active+clean; 397 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 6.2 KiB/s wr, 17 op/s
Nov 29 07:56:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3893399526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 29 07:56:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 376 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 13 KiB/s wr, 60 op/s
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015212805031907274 of space, bias 1.0, pg target 0.45638415095721824 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007584463721755551 of space, bias 1.0, pg target 0.22753391165266654 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014247498087191508 of space, bias 1.0, pg target 0.42742494261574526 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:56:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:56:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 07:56:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 29 07:56:15 compute-0 nova_compute[256729]: 2025-11-29 07:56:15.472 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:56:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1205187483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:56:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1205187483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:16 compute-0 ceph-mon[75050]: pgmap v1552: 305 pgs: 305 active+clean; 376 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 13 KiB/s wr, 60 op/s
Nov 29 07:56:16 compute-0 ceph-mon[75050]: osdmap e238: 3 total, 3 up, 3 in
Nov 29 07:56:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1205187483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1205187483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 376 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 12 KiB/s wr, 131 op/s
Nov 29 07:56:17 compute-0 nova_compute[256729]: 2025-11-29 07:56:17.035 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 29 07:56:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 07:56:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 29 07:56:17 compute-0 sshd-session[279828]: Connection closed by authenticating user root 143.14.121.41 port 35808 [preauth]
Nov 29 07:56:18 compute-0 ceph-mon[75050]: pgmap v1554: 305 pgs: 305 active+clean; 376 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 12 KiB/s wr, 131 op/s
Nov 29 07:56:18 compute-0 ceph-mon[75050]: osdmap e239: 3 total, 3 up, 3 in
Nov 29 07:56:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 29 07:56:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 07:56:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 29 07:56:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 322 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 17 KiB/s wr, 166 op/s
Nov 29 07:56:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:19 compute-0 nova_compute[256729]: 2025-11-29 07:56:19.466 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 29 07:56:20 compute-0 ceph-mon[75050]: osdmap e240: 3 total, 3 up, 3 in
Nov 29 07:56:20 compute-0 sshd-session[279831]: Connection closed by authenticating user root 143.14.121.41 port 35816 [preauth]
Nov 29 07:56:20 compute-0 nova_compute[256729]: 2025-11-29 07:56:20.473 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 07:56:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 29 07:56:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 322 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 8.6 KiB/s wr, 118 op/s
Nov 29 07:56:21 compute-0 ceph-mon[75050]: pgmap v1557: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 322 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 17 KiB/s wr, 166 op/s
Nov 29 07:56:21 compute-0 ceph-mon[75050]: osdmap e241: 3 total, 3 up, 3 in
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.465 256736 DEBUG nova.compute.manager [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Received event network-changed-071be225-ecaa-4260-bc91-73f144657155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.466 256736 DEBUG nova.compute.manager [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Refreshing instance network info cache due to event network-changed-071be225-ecaa-4260-bc91-73f144657155. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.466 256736 DEBUG oslo_concurrency.lockutils [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.467 256736 DEBUG oslo_concurrency.lockutils [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.467 256736 DEBUG nova.network.neutron [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Refreshing network info cache for port 071be225-ecaa-4260-bc91-73f144657155 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.511 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.512 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.512 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.512 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.513 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.514 256736 INFO nova.compute.manager [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Terminating instance
Nov 29 07:56:21 compute-0 nova_compute[256729]: 2025-11-29 07:56:21.515 256736 DEBUG nova.compute.manager [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.039 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 ceph-mon[75050]: pgmap v1559: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 322 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 8.6 KiB/s wr, 118 op/s
Nov 29 07:56:22 compute-0 kernel: tap071be225-ec (unregistering): left promiscuous mode
Nov 29 07:56:22 compute-0 NetworkManager[48962]: <info>  [1764402982.5219] device (tap071be225-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.620 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 ovn_controller[153383]: 2025-11-29T07:56:22Z|00111|binding|INFO|Releasing lport 071be225-ecaa-4260-bc91-73f144657155 from this chassis (sb_readonly=0)
Nov 29 07:56:22 compute-0 ovn_controller[153383]: 2025-11-29T07:56:22Z|00112|binding|INFO|Setting lport 071be225-ecaa-4260-bc91-73f144657155 down in Southbound
Nov 29 07:56:22 compute-0 ovn_controller[153383]: 2025-11-29T07:56:22Z|00113|binding|INFO|Removing iface tap071be225-ec ovn-installed in OVS
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.624 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:22.629 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:b6:df 10.100.0.11'], port_security=['fa:16:3e:e5:b6:df 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8c45989b-e06e-4bd4-9961-e7756223b869', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0aa15e11d9794e608f3aebb38ea3606a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '12be058e-47a2-4b10-9928-e2f6336ca894', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17005c51-b13f-40d9-a999-415174c76777, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=071be225-ecaa-4260-bc91-73f144657155) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:22.631 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 071be225-ecaa-4260-bc91-73f144657155 in datapath e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 unbound from our chassis
Nov 29 07:56:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:22.634 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e678432d-7aa3-4fc9-8ccb-76ec3ffbd276, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:56:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:22.635 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[006ec2c6-91c7-48a5-9c35-8ba0a43413d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:22.636 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 namespace which is not needed anymore
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.653 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 29 07:56:22 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 23.790s CPU time.
Nov 29 07:56:22 compute-0 systemd-machined[217781]: Machine qemu-7-instance-00000007 terminated.
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.755 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.767 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.774 256736 INFO nova.virt.libvirt.driver [-] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Instance destroyed successfully.
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.775 256736 DEBUG nova.objects.instance [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lazy-loading 'resources' on Instance uuid 8c45989b-e06e-4bd4-9961-e7756223b869 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 311 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 10 KiB/s wr, 101 op/s
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.801 256736 DEBUG nova.virt.libvirt.vif [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-153694694',display_name='tempest-TestStampPattern-server-153694694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-153694694',id=7,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVIcG7iT8EuYRWwvh0xXPSujdlj7uKuKXhamDHlJ4QJb0wGzod0+Qsrv8DmE1TIN7tAAQa46X3+yrMq9A2yMt4mHHy/8wbOvohqcW7H1CuWupyv3Z+eB3t88xUDCWSqKQ==',key_name='tempest-TestStampPattern-886597490',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0aa15e11d9794e608f3aebb38ea3606a',ramdisk_id='',reservation_id='r-rmkjsyj0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-1135660929',owner_user_name='tempest-TestStampPattern-1135660929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:57Z,user_data=None,user_id='81f071491e4c48c59662c7feba200299',uuid=8c45989b-e06e-4bd4-9961-e7756223b869,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.801 256736 DEBUG nova.network.os_vif_util [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converting VIF {"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.802 256736 DEBUG nova.network.os_vif_util [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:b6:df,bridge_name='br-int',has_traffic_filtering=True,id=071be225-ecaa-4260-bc91-73f144657155,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap071be225-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.803 256736 DEBUG os_vif [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:b6:df,bridge_name='br-int',has_traffic_filtering=True,id=071be225-ecaa-4260-bc91-73f144657155,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap071be225-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.804 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.805 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap071be225-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.807 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.810 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.814 256736 INFO os_vif [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:b6:df,bridge_name='br-int',has_traffic_filtering=True,id=071be225-ecaa-4260-bc91-73f144657155,network=Network(e678432d-7aa3-4fc9-8ccb-76ec3ffbd276),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap071be225-ec')
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.923 256736 DEBUG nova.network.neutron [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updated VIF entry in instance network info cache for port 071be225-ecaa-4260-bc91-73f144657155. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.924 256736 DEBUG nova.network.neutron [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating instance_info_cache with network_info: [{"id": "071be225-ecaa-4260-bc91-73f144657155", "address": "fa:16:3e:e5:b6:df", "network": {"id": "e678432d-7aa3-4fc9-8ccb-76ec3ffbd276", "bridge": "br-int", "label": "tempest-TestStampPattern-305147175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0aa15e11d9794e608f3aebb38ea3606a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap071be225-ec", "ovs_interfaceid": "071be225-ecaa-4260-bc91-73f144657155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:22 compute-0 nova_compute[256729]: 2025-11-29 07:56:22.949 256736 DEBUG oslo_concurrency.lockutils [req-431f6813-a71e-4878-a183-a2243c607da3 req-93b6a478-9fed-4af0-a010-942bea7d2828 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-8c45989b-e06e-4bd4-9961-e7756223b869" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:23 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [NOTICE]   (276234) : haproxy version is 2.8.14-c23fe91
Nov 29 07:56:23 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [NOTICE]   (276234) : path to executable is /usr/sbin/haproxy
Nov 29 07:56:23 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [WARNING]  (276234) : Exiting Master process...
Nov 29 07:56:23 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [WARNING]  (276234) : Exiting Master process...
Nov 29 07:56:23 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [ALERT]    (276234) : Current worker (276236) exited with code 143 (Terminated)
Nov 29 07:56:23 compute-0 neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276[276230]: [WARNING]  (276234) : All workers exited. Exiting... (0)
Nov 29 07:56:23 compute-0 systemd[1]: libpod-a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362.scope: Deactivated successfully.
Nov 29 07:56:23 compute-0 podman[279866]: 2025-11-29 07:56:23.529234197 +0000 UTC m=+0.714043660 container died a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 07:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362-userdata-shm.mount: Deactivated successfully.
Nov 29 07:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7ecbf876d976597b54a7545fab95108622a93dfc09d4b719cb4cde87b1a9d2f-merged.mount: Deactivated successfully.
Nov 29 07:56:23 compute-0 sshd-session[279833]: Connection closed by authenticating user root 143.14.121.41 port 35828 [preauth]
Nov 29 07:56:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:24 compute-0 podman[279866]: 2025-11-29 07:56:24.089769506 +0000 UTC m=+1.274578969 container cleanup a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:56:24 compute-0 systemd[1]: libpod-conmon-a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362.scope: Deactivated successfully.
Nov 29 07:56:24 compute-0 podman[279912]: 2025-11-29 07:56:24.162894493 +0000 UTC m=+0.268408264 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 07:56:24 compute-0 podman[279913]: 2025-11-29 07:56:24.241390784 +0000 UTC m=+0.340052911 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:56:24 compute-0 podman[279911]: 2025-11-29 07:56:24.242819311 +0000 UTC m=+0.355278737 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 07:56:24 compute-0 podman[279945]: 2025-11-29 07:56:24.295756348 +0000 UTC m=+0.172262750 container remove a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.302 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed31741-e3d2-4bdb-82fc-2fdc4ab8df0c]: (4, ('Sat Nov 29 07:56:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 (a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362)\na6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362\nSat Nov 29 07:56:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 (a6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362)\na6e45bbc1a1f02628e073b0323c2171876a5dbdcb616b569e79af10cc2b78362\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.304 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4cfbde-3387-4a8d-a0d5-f388140c2f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.305 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape678432d-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:24 compute-0 nova_compute[256729]: 2025-11-29 07:56:24.307 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:24 compute-0 kernel: tape678432d-70: left promiscuous mode
Nov 29 07:56:24 compute-0 nova_compute[256729]: 2025-11-29 07:56:24.327 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.331 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bceb6a37-cc64-46ec-b2e3-c14febad9bbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.347 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb66bc9-60d7-4af8-9a28-7366921aa1eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.349 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6c666e-146b-4daf-99bf-07bbb776fff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.368 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d2384042-3a62-4357-b81c-9fa3f1adbc93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515092, 'reachable_time': 20644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279990, 'error': None, 'target': 'ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.372 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e678432d-7aa3-4fc9-8ccb-76ec3ffbd276 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:56:24 compute-0 systemd[1]: run-netns-ovnmeta\x2de678432d\x2d7aa3\x2d4fc9\x2d8ccb\x2d76ec3ffbd276.mount: Deactivated successfully.
Nov 29 07:56:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:24.372 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[a21f8a44-82fb-4c04-995c-f68d517c1768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 29 07:56:24 compute-0 ceph-mon[75050]: pgmap v1560: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 311 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 10 KiB/s wr, 101 op/s
Nov 29 07:56:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 07:56:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 29 07:56:24 compute-0 nova_compute[256729]: 2025-11-29 07:56:24.640 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 295 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 5.4 KiB/s wr, 76 op/s
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.431 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402970.430704, a157d150-bd1c-4f7b-8068-764a8f3af802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.432 256736 INFO nova.compute.manager [-] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] VM Stopped (Lifecycle Event)
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.457 256736 DEBUG nova.compute.manager [None req-ca84bb43-b96f-4bea-b2dc-6878f17dfd63 - - - - - -] [instance: a157d150-bd1c-4f7b-8068-764a8f3af802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.481 256736 INFO nova.virt.libvirt.driver [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Deleting instance files /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869_del
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.482 256736 INFO nova.virt.libvirt.driver [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Deletion of /var/lib/nova/instances/8c45989b-e06e-4bd4-9961-e7756223b869_del complete
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.543 256736 INFO nova.compute.manager [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Took 4.03 seconds to destroy the instance on the hypervisor.
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.544 256736 DEBUG oslo.service.loopingcall [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.544 256736 DEBUG nova.compute.manager [-] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:56:25 compute-0 nova_compute[256729]: 2025-11-29 07:56:25.544 256736 DEBUG nova.network.neutron [-] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:56:25 compute-0 ceph-mon[75050]: osdmap e242: 3 total, 3 up, 3 in
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.074 256736 DEBUG nova.network.neutron [-] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.106 256736 DEBUG oslo_concurrency.lockutils [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.107 256736 DEBUG oslo_concurrency.lockutils [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.108 256736 INFO nova.compute.manager [-] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Took 0.56 seconds to deallocate network for instance.
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.132 256736 INFO nova.compute.manager [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Detaching volume 826052bb-c5ae-41d7-b11b-9d81bb72ee1d
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.155 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.155 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.161 256736 DEBUG nova.compute.manager [req-e409c126-ba65-40ff-8f48-50f973277b44 req-f458a721-76eb-48b3-a159-d219c9b1bf4c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Received event network-vif-deleted-071be225-ecaa-4260-bc91-73f144657155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.241 256736 DEBUG oslo_concurrency.processutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.265 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.270 256736 INFO nova.virt.block_device [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Attempting to driver detach volume 826052bb-c5ae-41d7-b11b-9d81bb72ee1d from mountpoint /dev/vdb
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.281 256736 DEBUG nova.virt.libvirt.driver [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Attempting to detach device vdb from instance 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.282 256736 DEBUG nova.virt.libvirt.guest [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-826052bb-c5ae-41d7-b11b-9d81bb72ee1d">
Nov 29 07:56:26 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   </source>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <serial>826052bb-c5ae-41d7-b11b-9d81bb72ee1d</serial>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]: </disk>
Nov 29 07:56:26 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.340 256736 INFO nova.virt.libvirt.driver [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully detached device vdb from instance 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf from the persistent domain config.
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.341 256736 DEBUG nova.virt.libvirt.driver [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.343 256736 DEBUG nova.virt.libvirt.guest [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-826052bb-c5ae-41d7-b11b-9d81bb72ee1d">
Nov 29 07:56:26 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   </source>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <serial>826052bb-c5ae-41d7-b11b-9d81bb72ee1d</serial>
Nov 29 07:56:26 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:56:26 compute-0 nova_compute[256729]: </disk>
Nov 29 07:56:26 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.472 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764402986.471942, 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.474 256736 DEBUG nova.virt.libvirt.driver [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.476 256736 INFO nova.virt.libvirt.driver [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully detached device vdb from instance 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf from the live domain config.
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.654 256736 DEBUG nova.objects.instance [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'flavor' on Instance uuid 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.693 256736 DEBUG oslo_concurrency.lockutils [None req-dd3eda11-a79e-4184-9bf4-fea6885c3180 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.694 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.694 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.694 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.695 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.696 256736 INFO nova.compute.manager [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Terminating instance
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.698 256736 DEBUG nova.compute.manager [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:56:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771307591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.750 256736 DEBUG oslo_concurrency.processutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.756 256736 DEBUG nova.compute.provider_tree [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.772 256736 DEBUG nova.scheduler.client.report [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:56:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 280 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 6.0 KiB/s wr, 81 op/s
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.795 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.818 256736 INFO nova.scheduler.client.report [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Deleted allocations for instance 8c45989b-e06e-4bd4-9961-e7756223b869
Nov 29 07:56:26 compute-0 ceph-mon[75050]: pgmap v1562: 305 pgs: 305 active+clean; 295 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 5.4 KiB/s wr, 76 op/s
Nov 29 07:56:26 compute-0 nova_compute[256729]: 2025-11-29 07:56:26.890 256736 DEBUG oslo_concurrency.lockutils [None req-1606cb3c-6149-45f5-ab53-7f9ac22557f0 81f071491e4c48c59662c7feba200299 0aa15e11d9794e608f3aebb38ea3606a - - default default] Lock "8c45989b-e06e-4bd4-9961-e7756223b869" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.085 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:27 compute-0 kernel: tapb8c51f74-69 (unregistering): left promiscuous mode
Nov 29 07:56:27 compute-0 NetworkManager[48962]: <info>  [1764402987.1197] device (tapb8c51f74-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:56:27 compute-0 ovn_controller[153383]: 2025-11-29T07:56:27Z|00114|binding|INFO|Releasing lport b8c51f74-6990-452a-b5b8-28fc5b51bef8 from this chassis (sb_readonly=0)
Nov 29 07:56:27 compute-0 ovn_controller[153383]: 2025-11-29T07:56:27Z|00115|binding|INFO|Setting lport b8c51f74-6990-452a-b5b8-28fc5b51bef8 down in Southbound
Nov 29 07:56:27 compute-0 ovn_controller[153383]: 2025-11-29T07:56:27Z|00116|binding|INFO|Removing iface tapb8c51f74-69 ovn-installed in OVS
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.128 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.141 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:be:cd 10.100.0.8'], port_security=['fa:16:3e:50:be:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aede5de4449e445582aa074918be39c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4af7f879-b112-4921-902c-00a76a0cb23b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6a691ab-2be1-4362-9a9a-3c54aabcf5a5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=b8c51f74-6990-452a-b5b8-28fc5b51bef8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.144 163655 INFO neutron.agent.ovn.metadata.agent [-] Port b8c51f74-6990-452a-b5b8-28fc5b51bef8 in datapath 5908d283-c1b3-46ec-8e8e-b81d59c13f9a unbound from our chassis
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.146 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5908d283-c1b3-46ec-8e8e-b81d59c13f9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.148 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[112a4bb8-d8f6-4168-993e-d49bda0d81c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
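
    [annotation] The recurring "privsep: reply[...]" lines are oslo.privsep round-trips:
    the unprivileged agent calls a decorated entrypoint, the privileged daemon runs it,
    and the return value travels back over the channel (the "(4, True)" tuples). A
    minimal sketch of declaring such an entrypoint; all names here are made up, and
    actually invoking it needs a running privsep daemon with root privileges:

        from oslo_privsep import capabilities, priv_context

        # A privileged context; prefix and config-section names are illustrative.
        ctx = priv_context.PrivContext(
            'example', cfg_section='example_privsep',
            pypath=__name__ + '.ctx',
            capabilities=[capabilities.CAP_NET_ADMIN])

        @ctx.entrypoint
        def set_link_down(ifname):
            # Runs inside the privsep daemon; the return value is what
            # shows up in the "privsep: reply" DEBUG lines.
            return True
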
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.149 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a namespace which is not needed anymore
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.150 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 07:56:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 16.936s CPU time.
Nov 29 07:56:27 compute-0 systemd-machined[217781]: Machine qemu-9-instance-00000009 terminated.
Nov 29 07:56:27 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[278663]: [NOTICE]   (278674) : haproxy version is 2.8.14-c23fe91
Nov 29 07:56:27 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[278663]: [NOTICE]   (278674) : path to executable is /usr/sbin/haproxy
Nov 29 07:56:27 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[278663]: [WARNING]  (278674) : Exiting Master process...
Nov 29 07:56:27 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[278663]: [ALERT]    (278674) : Current worker (278676) exited with code 143 (Terminated)
Nov 29 07:56:27 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[278663]: [WARNING]  (278674) : All workers exited. Exiting... (0)
Nov 29 07:56:27 compute-0 systemd[1]: libpod-1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c.scope: Deactivated successfully.
Nov 29 07:56:27 compute-0 podman[280041]: 2025-11-29 07:56:27.328207997 +0000 UTC m=+0.050833492 container died 1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.340 256736 INFO nova.virt.libvirt.driver [-] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Instance destroyed successfully.
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.341 256736 DEBUG nova.objects.instance [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'resources' on Instance uuid 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c-userdata-shm.mount: Deactivated successfully.
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.358 256736 DEBUG nova.virt.libvirt.vif [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:55:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-280999621',display_name='tempest-VolumesSnapshotTestJSON-instance-280999621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-280999621',id=9,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMWhh6xMyheKz/qakJMV0PY8VIZMvGtjrEW0ajE8Jdkf1cphTUAFk9GAOHqhajE/ikW8Cc5/oTjLgctLvAjh2Ld3iPyA7H7nITvAJ5EuwsXy6Z3UfC3+qycUlKu4OGr0Q==',key_name='tempest-keypair-1243301541',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aede5de4449e445582aa074918be39c9',ramdisk_id='',reservation_id='r-f0qvvlwe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1121052015',owner_user_name='tempest-VolumesSnapshotTestJSON-1121052015-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c0b3479158714faaa4e8c3c336457d6d',uuid=5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.359 256736 DEBUG nova.network.os_vif_util [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converting VIF {"id": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "address": "fa:16:3e:50:be:cd", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb8c51f74-69", "ovs_interfaceid": "b8c51f74-6990-452a-b5b8-28fc5b51bef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc7afc8bac0fcca9996eb383dea25d264fcbd45a2c40716405c0408c1c87d27-merged.mount: Deactivated successfully.
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.359 256736 DEBUG nova.network.os_vif_util [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:be:cd,bridge_name='br-int',has_traffic_filtering=True,id=b8c51f74-6990-452a-b5b8-28fc5b51bef8,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c51f74-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.360 256736 DEBUG os_vif [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:be:cd,bridge_name='br-int',has_traffic_filtering=True,id=b8c51f74-6990-452a-b5b8-28fc5b51bef8,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c51f74-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.361 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.362 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8c51f74-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
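
    [annotation] The DelPortCommand in that transaction is ovsdbapp's Open_vSwitch API.
    The same operation can be issued directly against ovsdb-server; a sketch assuming
    the ovsdbapp library and the conventional OVS socket path (the socket path is an
    assumption, not taken from this log):

        from ovsdbapp.backend.ovs_idl import connection
        from ovsdbapp.schema.open_vswitch import impl_idl

        # Connect to the local Open_vSwitch database (socket path assumed).
        ovsidl = connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
        api = impl_idl.OvsdbIdl(connection.Connection(idl=ovsidl, timeout=10))

        # Equivalent of the logged txn:
        # DelPortCommand(port=tapb8c51f74-69, bridge=br-int, if_exists=True)
        api.del_port('tapb8c51f74-69', bridge='br-int',
                     if_exists=True).execute(check_error=True)
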
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.365 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.368 256736 INFO os_vif [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:be:cd,bridge_name='br-int',has_traffic_filtering=True,id=b8c51f74-6990-452a-b5b8-28fc5b51bef8,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb8c51f74-69')
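
    [annotation] The sequence above (convert the VIF dict, call unplug, delete the OVS
    port) is the os-vif plugin flow. A condensed sketch of driving os-vif directly,
    with field values copied from the VIFOpenVSwitch object in the log; it assumes
    os-vif is installed, OVS is running, and the object fields shown are sufficient
    for the 'ovs' plugin:

        import os_vif
        from os_vif.objects import instance_info, network, vif

        os_vif.initialize()  # loads the 'ovs' plugin via stevedore

        net = network.Network(id='5908d283-c1b3-46ec-8e8e-b81d59c13f9a',
                              bridge='br-int')
        ovs_vif = vif.VIFOpenVSwitch(
            id='b8c51f74-6990-452a-b5b8-28fc5b51bef8',
            address='fa:16:3e:50:be:cd',
            network=net,
            vif_name='tapb8c51f74-69',
            bridge_name='br-int',
            port_profile=vif.VIFPortProfileOpenVSwitch(
                interface_id='b8c51f74-6990-452a-b5b8-28fc5b51bef8'))
        inst = instance_info.InstanceInfo(
            uuid='5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf',
            name='tempest-VolumesSnapshotTestJSON-instance-280999621')

        os_vif.unplug(ovs_vif, inst)  # ends in the DelPortCommand seen above
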
Nov 29 07:56:27 compute-0 podman[280041]: 2025-11-29 07:56:27.371234228 +0000 UTC m=+0.093859763 container cleanup 1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:56:27 compute-0 systemd[1]: libpod-conmon-1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c.scope: Deactivated successfully.
Nov 29 07:56:27 compute-0 podman[280091]: 2025-11-29 07:56:27.446122752 +0000 UTC m=+0.052259350 container remove 1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.451 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[27517078-ec63-4f5a-9554-4e201aec6f7b]: (4, ('Sat Nov 29 07:56:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a (1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c)\n1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c\nSat Nov 29 07:56:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a (1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c)\n1fe3f1f9c5ba5483cb2fceb6d65b48c0ea1083a48137cc83817bb1025161e16c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.453 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a576071e-7175-4210-8518-235ae62660f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.454 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5908d283-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.456 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:27 compute-0 kernel: tap5908d283-c0: left promiscuous mode
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.474 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.478 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5a1de4-af47-4b56-b603-e86c0a7c1c73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.492 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[886fde61-2188-4774-9c90-8392d8283217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.493 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[de7e2f56-4cac-47da-a5e6-fa9771bf56b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.509 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfe9061-b83d-47d7-a5af-d22205c6e69f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524106, 'reachable_time': 26621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280114, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.512 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:56:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:27.512 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[166ad66c-d0ea-48ec-b707-b90228897276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
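
    [annotation] The huge reply above is a netlink link dump (pyroute2-style attribute
    lists) taken inside the ovnmeta- namespace right before removal: only the loopback
    device remains, so teardown can proceed. A sketch of the same two operations with
    pyroute2 (requires root; the namespace name is the one in the log):

        from pyroute2 import NetNS, netns

        ns_name = 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a'

        # Dump links inside the namespace, as in the IFLA_* reply above.
        with NetNS(ns_name) as ns:
            for link in ns.get_links():
                print(link.get_attr('IFLA_IFNAME'), link['state'])

        netns.remove(ns_name)  # "Namespace ... deleted" in the log
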
Nov 29 07:56:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d5908d283\x2dc1b3\x2d46ec\x2d8e8e\x2db81d59c13f9a.mount: Deactivated successfully.
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.770 256736 INFO nova.virt.libvirt.driver [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Deleting instance files /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_del
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.771 256736 INFO nova.virt.libvirt.driver [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Deletion of /var/lib/nova/instances/5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf_del complete
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.825 256736 INFO nova.compute.manager [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Took 1.13 seconds to destroy the instance on the hypervisor.
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.825 256736 DEBUG oslo.service.loopingcall [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.826 256736 DEBUG nova.compute.manager [-] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:56:27 compute-0 nova_compute[256729]: 2025-11-29 07:56:27.826 256736 DEBUG nova.network.neutron [-] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:56:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2771307591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:27 compute-0 ceph-mon[75050]: pgmap v1563: 305 pgs: 305 active+clean; 280 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 6.0 KiB/s wr, 81 op/s
Nov 29 07:56:28 compute-0 sshd-session[279970]: Connection closed by authenticating user root 143.14.121.41 port 34392 [preauth]
Nov 29 07:56:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:56:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2610424777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:56:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2610424777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 156 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 9.3 KiB/s wr, 142 op/s
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.795 256736 DEBUG nova.compute.manager [req-f3b18647-3f8d-4ec0-92a1-a3a158b49eed req-4e69aeab-2713-4418-aa26-5e87584bcd81 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received event network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.796 256736 DEBUG oslo_concurrency.lockutils [req-f3b18647-3f8d-4ec0-92a1-a3a158b49eed req-4e69aeab-2713-4418-aa26-5e87584bcd81 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.796 256736 DEBUG oslo_concurrency.lockutils [req-f3b18647-3f8d-4ec0-92a1-a3a158b49eed req-4e69aeab-2713-4418-aa26-5e87584bcd81 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.798 256736 DEBUG oslo_concurrency.lockutils [req-f3b18647-3f8d-4ec0-92a1-a3a158b49eed req-4e69aeab-2713-4418-aa26-5e87584bcd81 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.799 256736 DEBUG nova.compute.manager [req-f3b18647-3f8d-4ec0-92a1-a3a158b49eed req-4e69aeab-2713-4418-aa26-5e87584bcd81 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] No waiting events found dispatching network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.799 256736 WARNING nova.compute.manager [req-f3b18647-3f8d-4ec0-92a1-a3a158b49eed req-4e69aeab-2713-4418-aa26-5e87584bcd81 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received unexpected event network-vif-plugged-b8c51f74-6990-452a-b5b8-28fc5b51bef8 for instance with vm_state active and task_state deleting.
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.803 256736 DEBUG nova.network.neutron [-] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.821 256736 INFO nova.compute.manager [-] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Took 0.99 seconds to deallocate network for instance.
Nov 29 07:56:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2610424777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2610424777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.925 256736 WARNING nova.volume.cinder [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Attachment cf6b1cc0-8ae6-47e6-9dc6-aadad236ca9c does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = cf6b1cc0-8ae6-47e6-9dc6-aadad236ca9c. (HTTP 404) (Request-ID: req-9dcfd46f-b5e8-403b-94e3-f0b5f8f4a1d4)
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.925 256736 INFO nova.compute.manager [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Took 0.10 seconds to detach 1 volumes for instance.
Nov 29 07:56:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.981 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:28 compute-0 nova_compute[256729]: 2025-11-29 07:56:28.982 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 07:56:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 29 07:56:29 compute-0 nova_compute[256729]: 2025-11-29 07:56:29.044 256736 DEBUG oslo_concurrency.processutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748974857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:29 compute-0 nova_compute[256729]: 2025-11-29 07:56:29.498 256736 DEBUG oslo_concurrency.processutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
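
    [annotation] Nova's Ceph-backed disk reporting shells out to ceph df, as the
    Running cmd / returned pair above shows. The same call reproduced with
    oslo.concurrency's processutils; flags are copied from the log, and the JSON keys
    are standard ceph df output:

        import json
        from oslo_concurrency import processutils

        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        stats = json.loads(out)['stats']
        print(stats['total_bytes'], stats['total_avail_bytes'])
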
Nov 29 07:56:29 compute-0 nova_compute[256729]: 2025-11-29 07:56:29.505 256736 DEBUG nova.compute.provider_tree [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:29 compute-0 nova_compute[256729]: 2025-11-29 07:56:29.521 256736 DEBUG nova.scheduler.client.report [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
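
    [annotation] Placement derives usable capacity from that inventory as
    (total - reserved) * allocation_ratio, which is why 8 physical VCPUs can back 32
    vCPUs of allocations on this node. The arithmetic, spelled out:

        # Capacity implied by the inventory dict in the log line above.
        inventory = {
            'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
            'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
            'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
        }
        for rc, inv in inventory.items():
            print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
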
Nov 29 07:56:29 compute-0 nova_compute[256729]: 2025-11-29 07:56:29.543 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:29 compute-0 nova_compute[256729]: 2025-11-29 07:56:29.573 256736 INFO nova.scheduler.client.report [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Deleted allocations for instance 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf
Nov 29 07:56:29 compute-0 nova_compute[256729]: 2025-11-29 07:56:29.645 256736 DEBUG oslo_concurrency.lockutils [None req-435e8564-762c-445f-a56e-1ff1c7933539 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:30 compute-0 ceph-mon[75050]: pgmap v1564: 305 pgs: 305 active+clean; 156 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 9.3 KiB/s wr, 142 op/s
Nov 29 07:56:30 compute-0 ceph-mon[75050]: osdmap e243: 3 total, 3 up, 3 in
Nov 29 07:56:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1748974857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 156 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 6.2 KiB/s wr, 108 op/s
Nov 29 07:56:31 compute-0 nova_compute[256729]: 2025-11-29 07:56:31.016 256736 DEBUG nova.compute.manager [req-e477bb53-1a8a-4ba9-b30e-d522cb7c9a62 req-b1a4a28b-e89d-416e-9b45-2a86a2ecd7dd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Received event network-vif-deleted-b8c51f74-6990-452a-b5b8-28fc5b51bef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:31 compute-0 sshd-session[280116]: Connection closed by authenticating user root 143.14.121.41 port 34406 [preauth]
Nov 29 07:56:31 compute-0 nova_compute[256729]: 2025-11-29 07:56:31.961 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:31 compute-0 nova_compute[256729]: 2025-11-29 07:56:31.962 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:31 compute-0 nova_compute[256729]: 2025-11-29 07:56:31.982 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.067 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.068 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.076 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.076 256736 INFO nova.compute.claims [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:56:32 compute-0 ceph-mon[75050]: pgmap v1566: 305 pgs: 305 active+clean; 156 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 6.2 KiB/s wr, 108 op/s
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.087 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.211 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.364 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010734151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.705 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.714 256736 DEBUG nova.compute.provider_tree [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.730 256736 DEBUG nova.scheduler.client.report [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.757 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.757 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:56:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 97 op/s
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.801 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.802 256736 DEBUG nova.network.neutron [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.825 256736 INFO nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.845 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.955 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.957 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.957 256736 INFO nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Creating image(s)
Nov 29 07:56:32 compute-0 nova_compute[256729]: 2025-11-29 07:56:32.986 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.012 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.038 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.041 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.258 256736 DEBUG nova.policy [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0c5cb3005d814da59b97c47aec6abaeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7bf43fdb064c4ac3bca9dd2593ccf7ce', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
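
    [annotation] That "Policy check ... failed" line is oslo.policy evaluating the
    network:attach_external_network rule against the request credentials; the tempest
    user only holds reader/member roles, so an admin-only rule fails. A toy
    reproduction with oslo.policy (the rule string and setup are illustrative; Nova
    registers its own defaults):

        from oslo_config import cfg
        from oslo_policy import policy

        enforcer = policy.Enforcer(cfg.CONF)
        enforcer.register_default(policy.RuleDefault(
            'network:attach_external_network', 'role:admin'))

        creds = {'roles': ['reader', 'member'],
                 'project_id': '7bf43fdb064c4ac3bca9dd2593ccf7ce'}
        print(enforcer.enforce('network:attach_external_network', {}, creds))
        # -> False, matching the failed check in the log
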
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.499 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
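
    [annotation] The qemu-img info call above is wrapped in oslo_concurrency.prlimit so
    a malformed image cannot consume unbounded resources: --as=1073741824 caps the
    address space at 1 GiB and --cpu=30 caps CPU seconds. A sketch of the same
    invocation via processutils, with paths taken from the log:

        from oslo_concurrency import processutils

        limits = processutils.ProcessLimits(
            address_space=1024 ** 3,  # mirrors --as=1073741824
            cpu_time=30)              # mirrors --cpu=30
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
            '/var/lib/nova/instances/_base/'
            'b24649b5caed77158f656e381ae039c7945f1389',
            '--force-share', '--output=json',
            prlimit=limits)
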
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.502 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.503 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.503 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2010734151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.590 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.595 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:33 compute-0 nova_compute[256729]: 2025-11-29 07:56:33.990 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.244 256736 DEBUG nova.network.neutron [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Successfully created port: 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.338 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.385 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.790s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.447 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] resizing rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
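
    [annotation] After the CLI import, Nova grows the RBD image to the flavor's 1 GiB
    root disk. A sketch of that resize through the librbd Python bindings (Nova's
    rbd_utils does essentially this; pool, client id, and size are from the log, and
    python-rbd/python-rados must be installed):

        import rados
        import rbd

        with rados.Rados(conffile='/etc/ceph/ceph.conf',
                         rados_id='openstack') as cluster:
            with cluster.open_ioctx('vms') as ioctx:
                with rbd.Image(ioctx,
                               'b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk') as image:
                    image.resize(1073741824)  # bytes, cf. "resizing rbd image ..."
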
Nov 29 07:56:34 compute-0 ceph-mon[75050]: pgmap v1567: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 5.2 KiB/s wr, 97 op/s
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.666 256736 DEBUG nova.objects.instance [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lazy-loading 'migration_context' on Instance uuid b8cc435e-f1de-4ae2-990d-3e27f1e26a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.687 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.688 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Ensure instance console log exists: /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.689 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.689 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:34 compute-0 nova_compute[256729]: 2025-11-29 07:56:34.690 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.6 KiB/s wr, 93 op/s
Nov 29 07:56:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:56:34 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3589309784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:56:34 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3589309784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:56:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.633 256736 DEBUG nova.network.neutron [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Successfully updated port: 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:56:35 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3589309784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:56:35 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3589309784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.662 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.662 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquired lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.663 256736 DEBUG nova.network.neutron [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.715 256736 DEBUG nova.compute.manager [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-changed-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.715 256736 DEBUG nova.compute.manager [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Refreshing instance network info cache due to event network-changed-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.715 256736 DEBUG oslo_concurrency.lockutils [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:35 compute-0 nova_compute[256729]: 2025-11-29 07:56:35.791 256736 DEBUG nova.network.neutron [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.382 256736 DEBUG nova.network.neutron [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Updating instance_info_cache with network_info: [{"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.503 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Releasing lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.504 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Instance network_info: |[{"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.504 256736 DEBUG oslo_concurrency.lockutils [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.505 256736 DEBUG nova.network.neutron [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Refreshing network info cache for port 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.510 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Start _get_guest_xml network_info=[{"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.516 256736 WARNING nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.527 256736 DEBUG nova.virt.libvirt.host [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.528 256736 DEBUG nova.virt.libvirt.host [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.534 256736 DEBUG nova.virt.libvirt.host [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.535 256736 DEBUG nova.virt.libvirt.host [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.536 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.536 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.537 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.538 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.538 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.539 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.539 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.540 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.540 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.541 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.541 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.542 256736 DEBUG nova.virt.hardware [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
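
The nova.virt.hardware lines above enumerate candidate guest CPU topologies for a 1-vCPU flavor with no flavor or image constraints (limits 65536:65536:65536), yielding the single topology 1:1:1. A toy re-enumeration that makes the arithmetic concrete; it finds every sockets*cores*threads factorization of the vCPU count under the logged limits, and is not Nova's actual implementation:

    # Toy enumeration: every (sockets, cores, threads) with s*c*t == vcpus,
    # capped by the limits printed in the log (65536 each).
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_c) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_t:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- "Got 1 possible topologies"
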
Nov 29 07:56:36 compute-0 nova_compute[256729]: 2025-11-29 07:56:36.547 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 29 07:56:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 07:56:36 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 29 07:56:36 compute-0 ceph-mon[75050]: pgmap v1568: 305 pgs: 305 active+clean; 134 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.6 KiB/s wr, 93 op/s
Nov 29 07:56:36 compute-0 sshd-session[280140]: Connection closed by authenticating user root 143.14.121.41 port 53106 [preauth]
Nov 29 07:56:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 142 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 933 KiB/s wr, 63 op/s
Nov 29 07:56:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:56:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2088195446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.025 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
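
The two processutils lines bracket a shell-out to the ceph CLI, and the mon's audit channel records the dispatch. Reproduced as a standalone call with the Python standard library; the command is copied verbatim from the log, and reading the 'mons' key assumes the usual top-level layout of the monmap JSON:

    # Sketch: the same "ceph mon dump" shell-out, done with the stdlib.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    monmap = json.loads(out)
    # 'mons' is the usual top-level list in the monmap JSON
    print([m.get('name') for m in monmap.get('mons', [])])
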
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.046 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.049 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.089 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.367 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:56:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576281368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.469 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.470 256736 DEBUG nova.virt.libvirt.vif [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-438353141',display_name='tempest-VolumesExtendAttachedTest-instance-438353141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-438353141',id=10,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGn5K8e6youL+V/TDF+jRWUHECX24yHN4WVE3KX6EwKnN9GA4h2ZT1MJr3xV6xOYUt5v+J53UarYfWFEzpB6qNHXL3bK/rzPTklAH5cSOpiLhI2xzvUa8JU3xBLsceMY7g==',key_name='tempest-keypair-1640688050',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7bf43fdb064c4ac3bca9dd2593ccf7ce',ramdisk_id='',reservation_id='r-fd1r8bcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-602415048',owner_user_name='tempest-VolumesExtendAttachedTest-602415048-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c5cb3005d814da59b97c47aec6abaeb',uuid=b8cc435e-f1de-4ae2-990d-3e27f1e26a21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.471 256736 DEBUG nova.network.os_vif_util [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Converting VIF {"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.472 256736 DEBUG nova.network.os_vif_util [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a2:6a,bridge_name='br-int',has_traffic_filtering=True,id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c,network=Network(00f1b1a1-e01a-4267-8e2c-c523dd99b965),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5dc4c4-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.472 256736 DEBUG nova.objects.instance [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lazy-loading 'pci_devices' on Instance uuid b8cc435e-f1de-4ae2-990d-3e27f1e26a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.489 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <uuid>b8cc435e-f1de-4ae2-990d-3e27f1e26a21</uuid>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <name>instance-0000000a</name>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesExtendAttachedTest-instance-438353141</nova:name>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:56:36</nova:creationTime>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:user uuid="0c5cb3005d814da59b97c47aec6abaeb">tempest-VolumesExtendAttachedTest-602415048-project-member</nova:user>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:project uuid="7bf43fdb064c4ac3bca9dd2593ccf7ce">tempest-VolumesExtendAttachedTest-602415048</nova:project>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <nova:port uuid="0c5dc4c4-1973-4476-a9d5-2a14d9f8302c">
Nov 29 07:56:37 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <system>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <entry name="serial">b8cc435e-f1de-4ae2-990d-3e27f1e26a21</entry>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <entry name="uuid">b8cc435e-f1de-4ae2-990d-3e27f1e26a21</entry>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </system>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <os>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   </os>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <features>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   </features>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk">
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       </source>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk.config">
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       </source>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:56:37 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:e1:a2:6a"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <target dev="tap0c5dc4c4-19"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/console.log" append="off"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <video>
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </video>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:56:37 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:56:37 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:56:37 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:56:37 compute-0 nova_compute[256729]: </domain>
Nov 29 07:56:37 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
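
The domain XML dumped above is what Nova hands to libvirt to define the guest. A short sketch that pulls the RBD disk sources back out of such a document with the stdlib parser; the embedded XML literal is abridged from the dump above:

    # Sketch: pull the RBD disk sources out of a guest XML like the one above.
    import xml.etree.ElementTree as ET

    xml = '''<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk"/>
        </disk>
      </devices>
    </domain>'''  # abridged from the dump above

    dom = ET.fromstring(xml)
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            # prints: disk vms/b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk
            print(disk.get('device'), src.get('name'))
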
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.491 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Preparing to wait for external event network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.491 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.491 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.491 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.492 256736 DEBUG nova.virt.libvirt.vif [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-438353141',display_name='tempest-VolumesExtendAttachedTest-instance-438353141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-438353141',id=10,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGn5K8e6youL+V/TDF+jRWUHECX24yHN4WVE3KX6EwKnN9GA4h2ZT1MJr3xV6xOYUt5v+J53UarYfWFEzpB6qNHXL3bK/rzPTklAH5cSOpiLhI2xzvUa8JU3xBLsceMY7g==',key_name='tempest-keypair-1640688050',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7bf43fdb064c4ac3bca9dd2593ccf7ce',ramdisk_id='',reservation_id='r-fd1r8bcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-602415048',owner_user_name='tempest-VolumesExtendAttachedTest-602415048-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c5cb3005d814da59b97c47aec6abaeb',uuid=b8cc435e-f1de-4ae2-990d-3e27f1e26a21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.492 256736 DEBUG nova.network.os_vif_util [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Converting VIF {"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.493 256736 DEBUG nova.network.os_vif_util [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a2:6a,bridge_name='br-int',has_traffic_filtering=True,id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c,network=Network(00f1b1a1-e01a-4267-8e2c-c523dd99b965),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5dc4c4-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.493 256736 DEBUG os_vif [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a2:6a,bridge_name='br-int',has_traffic_filtering=True,id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c,network=Network(00f1b1a1-e01a-4267-8e2c-c523dd99b965),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5dc4c4-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.494 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.494 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.495 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.497 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.497 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c5dc4c4-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.498 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c5dc4c4-19, col_values=(('external_ids', {'iface-id': '0c5dc4c4-1973-4476-a9d5-2a14d9f8302c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:a2:6a', 'vm-uuid': 'b8cc435e-f1de-4ae2-990d-3e27f1e26a21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.500 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:37 compute-0 NetworkManager[48962]: <info>  [1764402997.5008] manager: (tap0c5dc4c4-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.502 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.508 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.510 256736 INFO os_vif [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a2:6a,bridge_name='br-int',has_traffic_filtering=True,id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c,network=Network(00f1b1a1-e01a-4267-8e2c-c523dd99b965),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5dc4c4-19')
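
The ovsdbapp transaction above (AddPortCommand plus a DbSetCommand on the Interface row) is how os-vif plugs the tap device into br-int; NetworkManager and ovn-controller react in the lines that follow. An approximation of the same two steps with ovs-vsctl invoked from Python; values are copied from the logged transaction, and this illustrates the effect rather than what os-vif actually executes:

    # Sketch: ovs-vsctl approximation of the AddPortCommand + DbSetCommand above.
    import subprocess

    port = 'tap0c5dc4c4-19'
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port], check=True)
    subprocess.run(
        ['ovs-vsctl', 'set', 'Interface', port,
         'external_ids:iface-id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:e1:a2:6a',
         'external_ids:vm-uuid=b8cc435e-f1de-4ae2-990d-3e27f1e26a21'],
        check=True,
    )
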
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.573 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.573 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.574 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] No VIF found with MAC fa:16:3e:e1:a2:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.574 256736 INFO nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Using config drive
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.596 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.776 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402982.7676282, 8c45989b-e06e-4bd4-9961-e7756223b869 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.777 256736 INFO nova.compute.manager [-] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] VM Stopped (Lifecycle Event)
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.804 256736 DEBUG nova.compute.manager [None req-23dae723-249d-4937-b002-3a3c9fa57e6b - - - - - -] [instance: 8c45989b-e06e-4bd4-9961-e7756223b869] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:37 compute-0 ceph-mon[75050]: osdmap e244: 3 total, 3 up, 3 in
Nov 29 07:56:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2088195446' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1576281368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.974 256736 INFO nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Creating config drive at /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/disk.config
Nov 29 07:56:37 compute-0 nova_compute[256729]: 2025-11-29 07:56:37.982 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq52952t1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.010 256736 DEBUG nova.network.neutron [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Updated VIF entry in instance network info cache for port 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.012 256736 DEBUG nova.network.neutron [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Updating instance_info_cache with network_info: [{"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.030 256736 DEBUG oslo_concurrency.lockutils [req-4b004780-d1d3-463d-b3b4-3da81bb29cdf req-b655d646-4b7e-4239-a60a-952dcc333b4d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.124 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq52952t1" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
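
The two processutils entries above are the config-drive build: Nova stages the metadata tree in a temporary directory and wraps it into an ISO 9660 image labelled config-2, the volume label cloud-init and other guest agents probe for. A minimal sketch of the same invocation from Python, assuming mkisofs (or a genisoimage/xorrisofs equivalent) is installed on the host; the paths are placeholders:

import subprocess

def build_config_drive(iso_path, staging_dir):
    """Build a 'config-2' ISO from a staged metadata tree, mirroring the
    mkisofs flags visible in the nova_compute entries above."""
    cmd = [
        "/usr/bin/mkisofs",
        "-o", iso_path,                       # e.g. .../disk.config
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",    # the log embeds the full version string here
        "-quiet", "-J", "-r",
        "-V", "config-2",                     # volume label guest agents look for
        staging_dir,                          # e.g. the /tmp/tmp... tree Nova staged
    ]
    subprocess.run(cmd, check=True)
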
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.155 256736 DEBUG nova.storage.rbd_utils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] rbd image b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.159 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/disk.config b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.325 256736 DEBUG oslo_concurrency.processutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/disk.config b8cc435e-f1de-4ae2-990d-3e27f1e26a21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.327 256736 INFO nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Deleting local config drive /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21/disk.config because it was imported into RBD.
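
With RBD-backed ephemeral storage the ISO does not stay on local disk: Nova imports it into the vms pool as <uuid>_disk.config and then removes the local file, which is exactly the rbd import / delete pair logged above. A sketch of that step via the rbd CLI (pool, client id, and conf path copied from the log; adjust for other deployments):

import os
import subprocess

def import_config_drive(local_path, image_name,
                        pool="vms", client_id="openstack",
                        conf="/etc/ceph/ceph.conf"):
    """Import the local ISO as a format-2 RBD image, then drop the local
    copy -- the same import/delete sequence as in the entries above."""
    subprocess.run(
        ["rbd", "import", "--pool", pool, local_path, image_name,
         "--image-format=2", "--id", client_id, "--conf", conf],
        check=True,
    )
    os.unlink(local_path)  # the "Deleting local config drive" step
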
Nov 29 07:56:38 compute-0 kernel: tap0c5dc4c4-19: entered promiscuous mode
Nov 29 07:56:38 compute-0 NetworkManager[48962]: <info>  [1764402998.3750] manager: (tap0c5dc4c4-19): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 07:56:38 compute-0 ovn_controller[153383]: 2025-11-29T07:56:38Z|00117|binding|INFO|Claiming lport 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c for this chassis.
Nov 29 07:56:38 compute-0 ovn_controller[153383]: 2025-11-29T07:56:38Z|00118|binding|INFO|0c5dc4c4-1973-4476-a9d5-2a14d9f8302c: Claiming fa:16:3e:e1:a2:6a 10.100.0.7
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.377 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.381 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.398 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:a2:6a 10.100.0.7'], port_security=['fa:16:3e:e1:a2:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b8cc435e-f1de-4ae2-990d-3e27f1e26a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7bf43fdb064c4ac3bca9dd2593ccf7ce', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f6bae9e6-1792-464f-b5c8-5ba4b9d03ba3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80ae3c04-5883-4581-bcfa-da58cb4c9887, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.400 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c in datapath 00f1b1a1-e01a-4267-8e2c-c523dd99b965 bound to our chassis
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.401 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00f1b1a1-e01a-4267-8e2c-c523dd99b965
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.414 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cc60e8bf-4d80-4ff3-89c3-c075c3dd139c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.415 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00f1b1a1-e1 in ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:56:38 compute-0 systemd-udevd[280467]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.417 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00f1b1a1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.417 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1ae524-5774-43ba-b5b0-1aea7a8661b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.418 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7cad2b-253f-4a2f-b880-940e20f2dafe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
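
"Provisioning metadata for network ..." means building a per-network foothold: the agent creates an ovnmeta-<network-id> namespace and a veth pair whose inner end (tap00f1b1a1-e1) lands inside that namespace while the outer end (tap00f1b1a1-e0) is later plugged into br-int; the privsep replies above are the pyroute2 calls doing this work. A rough standalone equivalent with pyroute2 (interface and namespace names are placeholders; requires root):

from pyroute2 import IPRoute, netns

NS = "ovnmeta-example"             # placeholder for ovnmeta-<network-id>
OUTER, INNER = "tap-e0", "tap-e1"  # placeholder veth ends

netns.create(NS)
with IPRoute() as ipr:
    # veth pair: OUTER stays in the root namespace, INNER moves into NS
    ipr.link("add", ifname=OUTER, kind="veth", peer=INNER)
    inner_idx = ipr.link_lookup(ifname=INNER)[0]
    ipr.link("set", index=inner_idx, net_ns_fd=NS)
    ipr.link("set", index=ipr.link_lookup(ifname=OUTER)[0], state="up")
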
Nov 29 07:56:38 compute-0 systemd-machined[217781]: New machine qemu-10-instance-0000000a.
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.429 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[81b6da7e-e47e-4bd9-8ea8-19678e56ba5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 NetworkManager[48962]: <info>  [1764402998.4355] device (tap0c5dc4c4-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:56:38 compute-0 NetworkManager[48962]: <info>  [1764402998.4362] device (tap0c5dc4c4-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:56:38 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.459 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.459 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b709f7eb-853b-4fa5-be6a-ee19182760fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_controller[153383]: 2025-11-29T07:56:38Z|00119|binding|INFO|Setting lport 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c ovn-installed in OVS
Nov 29 07:56:38 compute-0 ovn_controller[153383]: 2025-11-29T07:56:38Z|00120|binding|INFO|Setting lport 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c up in Southbound
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.464 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.484 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[bba5fc49-5195-4f9c-9201-05e0ea4086d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.488 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b519fa4c-a397-4dc5-bd3a-b85cd6a6d0b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 NetworkManager[48962]: <info>  [1764402998.4895] manager: (tap00f1b1a1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.520 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[00493ed3-f1d7-4d76-af3d-dfcd92806579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.523 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[a481b704-5363-4b7e-a824-4e2ebae2ca2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 NetworkManager[48962]: <info>  [1764402998.5431] device (tap00f1b1a1-e0): carrier: link connected
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.548 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0ba588-91c1-4f44-85e4-3496f9e00c57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.566 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[807433cf-2967-4239-bf45-3c7c3c7df8f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00f1b1a1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:3b:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531360, 'reachable_time': 40289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280500, 'error': None, 'target': 'ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.584 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0b408f41-caf9-41cc-bf1e-fd939371d46c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:3bac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531360, 'tstamp': 531360}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280501, 'error': None, 'target': 'ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.601 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4a028928-0af0-4304-bd04-c7c451c7d732]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00f1b1a1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:3b:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531360, 'reachable_time': 40289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280502, 'error': None, 'target': 'ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.641 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c1ef3a-b257-400a-8c9e-fea2bdd8c6bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.713 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[35ff2524-dcaf-4570-9052-5be5499fdb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.714 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00f1b1a1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.714 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.715 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00f1b1a1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.716 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:38 compute-0 NetworkManager[48962]: <info>  [1764402998.7174] manager: (tap00f1b1a1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 07:56:38 compute-0 kernel: tap00f1b1a1-e0: entered promiscuous mode
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.720 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00f1b1a1-e0, col_values=(('external_ids', {'iface-id': '3c6763b3-0b24-4382-b819-78523f7aaca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.720 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:38 compute-0 ovn_controller[153383]: 2025-11-29T07:56:38Z|00121|binding|INFO|Releasing lport 3c6763b3-0b24-4382-b819-78523f7aaca6 from this chassis (sb_readonly=0)
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.754 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.756 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00f1b1a1-e01a-4267-8e2c-c523dd99b965.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00f1b1a1-e01a-4267-8e2c-c523dd99b965.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.757 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[edde433b-663e-406d-844f-e99c77e47e25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.759 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-00f1b1a1-e01a-4267-8e2c-c523dd99b965
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/00f1b1a1-e01a-4267-8e2c-c523dd99b965.pid.haproxy
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 00f1b1a1-e01a-4267-8e2c-c523dd99b965
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:56:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:38.760 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'env', 'PROCESS_TAG=haproxy-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00f1b1a1-e01a-4267-8e2c-c523dd99b965.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
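
The dumped haproxy_cfg shows the whole metadata path for this network: a dedicated haproxy runs inside the ovnmeta-00f1b1a1-... namespace, binds the link-local 169.254.169.254:80, stamps every request with X-OVN-Network-ID, and relays it over the UNIX socket /var/lib/neutron/metadata_proxy to the metadata agent, which in turn proxies to Nova. From a guest on this network the whole chain is just an HTTP endpoint; a quick probe (run inside the instance, assuming the requests package is available):

import requests

# Served end to end by the per-network haproxy chain configured above.
r = requests.get("http://169.254.169.254/openstack/latest/meta_data.json",
                 timeout=5)
r.raise_for_status()
print(r.json().get("uuid"))   # the Nova instance UUID for this guest
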
Nov 29 07:56:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.810 256736 DEBUG nova.compute.manager [req-0ef1c459-38e3-4d5e-914e-6bfd6eb24934 req-2c936a86-3226-4102-a1ea-7e2d9c1f2966 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.811 256736 DEBUG oslo_concurrency.lockutils [req-0ef1c459-38e3-4d5e-914e-6bfd6eb24934 req-2c936a86-3226-4102-a1ea-7e2d9c1f2966 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.812 256736 DEBUG oslo_concurrency.lockutils [req-0ef1c459-38e3-4d5e-914e-6bfd6eb24934 req-2c936a86-3226-4102-a1ea-7e2d9c1f2966 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.812 256736 DEBUG oslo_concurrency.lockutils [req-0ef1c459-38e3-4d5e-914e-6bfd6eb24934 req-2c936a86-3226-4102-a1ea-7e2d9c1f2966 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
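
The acquire/release pairs on "b8cc435e-...-events" are oslo.concurrency's named-lock idiom: every code path that touches the per-instance event queue serializes on the same string key, which is why each hold lasts well under a millisecond. The same primitive is usable directly; a minimal sketch with lockutils.lock (in-process scope here; Nova also uses external file locks where cross-process safety is needed):

from oslo_concurrency import lockutils

events = {}

def record_event(instance_uuid, event):
    # Same named-lock idiom as the "<uuid>-events" lock in the entries above.
    with lockutils.lock(f"{instance_uuid}-events"):
        events.setdefault(instance_uuid, []).append(event)
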
Nov 29 07:56:38 compute-0 nova_compute[256729]: 2025-11-29 07:56:38.813 256736 DEBUG nova.compute.manager [req-0ef1c459-38e3-4d5e-914e-6bfd6eb24934 req-2c936a86-3226-4102-a1ea-7e2d9c1f2966 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Processing event network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:56:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 29 07:56:38 compute-0 ceph-mon[75050]: pgmap v1570: 305 pgs: 305 active+clean; 142 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 933 KiB/s wr, 63 op/s
Nov 29 07:56:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 07:56:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 29 07:56:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:39 compute-0 podman[280576]: 2025-11-29 07:56:39.167235346 +0000 UTC m=+0.052556864 container create 7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.183 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.184 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402999.1827385, b8cc435e-f1de-4ae2-990d-3e27f1e26a21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.185 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] VM Started (Lifecycle Event)
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.187 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.190 256736 INFO nova.virt.libvirt.driver [-] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Instance spawned successfully.
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.191 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:56:39 compute-0 systemd[1]: Started libpod-conmon-7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1.scope.
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.211 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.218 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
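
The "DB power_state: 0, VM power_state: 1" pair reads against Nova's power-state constants: the database still records NOSTATE because the spawn has not yet been committed, while libvirt already reports the domain RUNNING. The values as defined upstream in nova/compute/power_state.py (reproduced from the source; verify against your Nova tree):

# nova/compute/power_state.py constants
NOSTATE = 0x00     # DB side while the instance is still building
RUNNING = 0x01     # what libvirt reports once the domain starts
PAUSED = 0x03
SHUTDOWN = 0x04
CRASHED = 0x06
SUSPENDED = 0x07

STATE_MAP = {NOSTATE: "pending", RUNNING: "running", PAUSED: "paused",
             SHUTDOWN: "shutdown", CRASHED: "crashed", SUSPENDED: "suspended"}
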
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.224 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.225 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.226 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.226 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.227 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.228 256736 DEBUG nova.virt.libvirt.driver [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:56:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:39 compute-0 podman[280576]: 2025-11-29 07:56:39.139418294 +0000 UTC m=+0.024739822 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab0b901284a965ebecae38bee6c56d7ddf612e2aac8b2e3da46569b60065cae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:39 compute-0 podman[280576]: 2025-11-29 07:56:39.252514552 +0000 UTC m=+0.137836080 container init 7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 07:56:39 compute-0 podman[280576]: 2025-11-29 07:56:39.260427974 +0000 UTC m=+0.145749482 container start 7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.270 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.271 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402999.182834, b8cc435e-f1de-4ae2-990d-3e27f1e26a21 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.271 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] VM Paused (Lifecycle Event)
Nov 29 07:56:39 compute-0 neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965[280592]: [NOTICE]   (280596) : New worker (280598) forked
Nov 29 07:56:39 compute-0 neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965[280592]: [NOTICE]   (280596) : Loading success.
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.302 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.305 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764402999.1867054, b8cc435e-f1de-4ae2-990d-3e27f1e26a21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.306 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] VM Resumed (Lifecycle Event)
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.330 256736 INFO nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Took 6.37 seconds to spawn the instance on the hypervisor.
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.331 256736 DEBUG nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.332 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.337 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.374 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.408 256736 INFO nova.compute.manager [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Took 7.36 seconds to build instance.
Nov 29 07:56:39 compute-0 nova_compute[256729]: 2025-11-29 07:56:39.432 256736 DEBUG oslo_concurrency.lockutils [None req-2dfb2b76-7d2e-4c7b-a379-19e7bb10ba90 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:40 compute-0 ceph-mon[75050]: pgmap v1571: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 29 07:56:40 compute-0 ceph-mon[75050]: osdmap e245: 3 total, 3 up, 3 in
Nov 29 07:56:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 2.7 MiB/s wr, 102 op/s
Nov 29 07:56:40 compute-0 nova_compute[256729]: 2025-11-29 07:56:40.928 256736 DEBUG nova.compute.manager [req-4070b1cf-d404-40a0-8305-aa441523266e req-de0c32bc-a939-4708-815b-1eba45f44a4f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:40 compute-0 nova_compute[256729]: 2025-11-29 07:56:40.929 256736 DEBUG oslo_concurrency.lockutils [req-4070b1cf-d404-40a0-8305-aa441523266e req-de0c32bc-a939-4708-815b-1eba45f44a4f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:40 compute-0 nova_compute[256729]: 2025-11-29 07:56:40.929 256736 DEBUG oslo_concurrency.lockutils [req-4070b1cf-d404-40a0-8305-aa441523266e req-de0c32bc-a939-4708-815b-1eba45f44a4f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:40 compute-0 nova_compute[256729]: 2025-11-29 07:56:40.929 256736 DEBUG oslo_concurrency.lockutils [req-4070b1cf-d404-40a0-8305-aa441523266e req-de0c32bc-a939-4708-815b-1eba45f44a4f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:40 compute-0 nova_compute[256729]: 2025-11-29 07:56:40.930 256736 DEBUG nova.compute.manager [req-4070b1cf-d404-40a0-8305-aa441523266e req-de0c32bc-a939-4708-815b-1eba45f44a4f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] No waiting events found dispatching network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:56:40 compute-0 nova_compute[256729]: 2025-11-29 07:56:40.930 256736 WARNING nova.compute.manager [req-4070b1cf-d404-40a0-8305-aa441523266e req-de0c32bc-a939-4708-815b-1eba45f44a4f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received unexpected event network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c for instance with vm_state active and task_state None.
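
This WARNING is a benign race, not a failure: before plugging the VIF, Nova registers a waiter for network-vif-plugged and blocks until Neutron delivers it (satisfied at 07:56:38.810 above); when OVN re-announces the port after the build has already completed, no waiter is registered any more, so the event is logged as unexpected and dropped. The prepare-then-wait shape, reduced to standard-library pieces (a sketch, not Nova's actual implementation):

import threading

_waiters = {}  # event name -> threading.Event

def prepare_for_event(name):
    ev = threading.Event()
    _waiters[name] = ev        # register *before* triggering the action
    return ev

def deliver_event(name):
    ev = _waiters.pop(name, None)
    if ev is None:
        print(f"unexpected event {name}")   # the WARNING case in the log
    else:
        ev.set()               # wakes the thread blocked in ev.wait()
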
Nov 29 07:56:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 29 07:56:41 compute-0 NetworkManager[48962]: <info>  [1764403001.7643] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 07:56:41 compute-0 nova_compute[256729]: 2025-11-29 07:56:41.763 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:41 compute-0 NetworkManager[48962]: <info>  [1764403001.7656] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 07:56:41 compute-0 nova_compute[256729]: 2025-11-29 07:56:41.900 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:41 compute-0 ovn_controller[153383]: 2025-11-29T07:56:41Z|00122|binding|INFO|Releasing lport 3c6763b3-0b24-4382-b819-78523f7aaca6 from this chassis (sb_readonly=0)
Nov 29 07:56:41 compute-0 nova_compute[256729]: 2025-11-29 07:56:41.918 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 07:56:42 compute-0 nova_compute[256729]: 2025-11-29 07:56:42.175 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:42 compute-0 ceph-mon[75050]: pgmap v1573: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 2.7 MiB/s wr, 102 op/s
Nov 29 07:56:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 29 07:56:42 compute-0 nova_compute[256729]: 2025-11-29 07:56:42.338 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402987.3369772, 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:56:42 compute-0 nova_compute[256729]: 2025-11-29 07:56:42.339 256736 INFO nova.compute.manager [-] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] VM Stopped (Lifecycle Event)
Nov 29 07:56:42 compute-0 nova_compute[256729]: 2025-11-29 07:56:42.366 256736 DEBUG nova.compute.manager [None req-c8dd06ee-40da-4b68-9f30-60263a7575bd - - - - - -] [instance: 5782fbdd-96d5-43e1-936f-4c6cb0b0c7bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:56:42 compute-0 nova_compute[256729]: 2025-11-29 07:56:42.500 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:42 compute-0 sshd-session[280371]: Connection closed by authenticating user root 143.14.121.41 port 53122 [preauth]
Nov 29 07:56:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 127 op/s
Nov 29 07:56:43 compute-0 nova_compute[256729]: 2025-11-29 07:56:43.051 256736 DEBUG nova.compute.manager [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-changed-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:43 compute-0 nova_compute[256729]: 2025-11-29 07:56:43.052 256736 DEBUG nova.compute.manager [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Refreshing instance network info cache due to event network-changed-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:56:43 compute-0 nova_compute[256729]: 2025-11-29 07:56:43.052 256736 DEBUG oslo_concurrency.lockutils [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:43 compute-0 nova_compute[256729]: 2025-11-29 07:56:43.052 256736 DEBUG oslo_concurrency.lockutils [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:43 compute-0 nova_compute[256729]: 2025-11-29 07:56:43.053 256736 DEBUG nova.network.neutron [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Refreshing network info cache for port 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:56:43 compute-0 ceph-mon[75050]: osdmap e246: 3 total, 3 up, 3 in
Nov 29 07:56:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:44 compute-0 ceph-mon[75050]: pgmap v1575: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 127 op/s
Nov 29 07:56:44 compute-0 nova_compute[256729]: 2025-11-29 07:56:44.386 256736 DEBUG nova.network.neutron [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Updated VIF entry in instance network info cache for port 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:56:44 compute-0 nova_compute[256729]: 2025-11-29 07:56:44.390 256736 DEBUG nova.network.neutron [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Updating instance_info_cache with network_info: [{"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:44 compute-0 nova_compute[256729]: 2025-11-29 07:56:44.425 256736 DEBUG oslo_concurrency.lockutils [req-a769fba9-e1fd-49aa-b8cc-1a457ea2a5e4 req-7375f593-5d8c-4d0e-b64e-72688b6fbfc8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-b8cc435e-f1de-4ae2-990d-3e27f1e26a21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
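
Compared with the cache written at 07:56:38.012, this refresh records what changed during boot: the fixed IP 10.100.0.7 now carries floating IP 192.168.122.199 and the VIF has flipped to "active": true. The payload is plain JSON, so pulling the addresses out of a captured entry is straightforward (sketch; network_info stands for the bracketed payload from the entry above):

import json

network_info = "[]"  # paste the [...] payload from the log entry here
for vif in json.loads(network_info):
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip["floating_ips"]]
            print(vif["id"], ip["address"], "->", floats or "no floating ip")
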
Nov 29 07:56:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Nov 29 07:56:46 compute-0 ceph-mon[75050]: pgmap v1576: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Nov 29 07:56:46 compute-0 nova_compute[256729]: 2025-11-29 07:56:46.772 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 121 op/s
Nov 29 07:56:47 compute-0 sshd-session[280608]: Connection closed by authenticating user root 143.14.121.41 port 45050 [preauth]
Nov 29 07:56:47 compute-0 nova_compute[256729]: 2025-11-29 07:56:47.179 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:47 compute-0 nova_compute[256729]: 2025-11-29 07:56:47.502 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.364 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.366 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.386 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.459 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.460 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
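The Acquiring/acquired pair above is oslo.concurrency's standard trace for a named lock: the "waited 0.002s" figure is the gap between the two lines, and a matching "released :: held" line closes the section (it appears at 07:56:49.746 for this compute_resources lock). A minimal sketch of the same API, assuming oslo.concurrency is installed; build_and_run is a hypothetical stand-in for the locked callable:

    from oslo_concurrency import lockutils

    def build_and_run():
        pass  # hypothetical critical section

    # Emits the same Acquiring/acquired/released DEBUG lines seen above.
    with lockutils.lock("147c2de5-0104-4eb0-bc20-b3bdc3909ed9"):
        build_and_run()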
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.763 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.764 256736 INFO nova.compute.claims [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:56:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 98 op/s
Nov 29 07:56:48 compute-0 nova_compute[256729]: 2025-11-29 07:56:48.887 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:48 compute-0 sudo[280612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:48 compute-0 sudo[280612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:48 compute-0 sudo[280612]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:49 compute-0 sudo[280638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:56:49 compute-0 sudo[280638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:49 compute-0 sudo[280638]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:49 compute-0 sudo[280674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:49 compute-0 sudo[280674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:49 compute-0 sudo[280674]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:49 compute-0 sudo[280699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:56:49 compute-0 sudo[280699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
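The three-step sudo chain above (/bin/true, which python3, then the copied cephadm binary) is the cephadm mgr module probing this host over SSH before running a real command; gather-facts prints host facts as JSON on stdout. A sketch of re-running the same probe by hand, assuming the copied binary still exists at that path (key names such as memory_total_kb are from memory and may differ by release):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    out = subprocess.run(["sudo", "python3", CEPHADM, "gather-facts"],
                         check=True, capture_output=True, text=True)
    facts = json.loads(out.stdout)
    print(facts.get("hostname"), facts.get("memory_total_kb"))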
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.290 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 29 07:56:49 compute-0 ceph-mon[75050]: pgmap v1577: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 121 op/s
Nov 29 07:56:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 07:56:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 29 07:56:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948672151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.693 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.805s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
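The 0.805s subprocess above is nova's RBD imagebackend sizing the cluster, and the audit line at 07:56:49 shows the same request arriving at the mon as client.openstack. The command is reproducible as-is; its JSON splits into cluster-wide "stats" plus per-pool "pools" entries:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True)
    df = json.loads(out.stdout)
    print("total avail:", df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])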
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.699 256736 DEBUG nova.compute.provider_tree [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.716 256736 DEBUG nova.scheduler.client.report [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
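Placement treats each resource class in that inventory as capacity = (total - reserved) * allocation_ratio, which is why an 8-vCPU, 7680 MiB host can accept far more than eight guest vCPUs. Worked out from the numbers on the line above:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2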
Nov 29 07:56:49 compute-0 sudo[280699]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.746 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.747 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:56:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:56:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:56:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:56:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:56:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.810 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.811 256736 DEBUG nova.network.neutron [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.837 256736 INFO nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:56:49 compute-0 nova_compute[256729]: 2025-11-29 07:56:49.982 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.099 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.100 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.101 256736 INFO nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Creating image(s)
Nov 29 07:56:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:56:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 56b45f0c-1eee-493e-9794-4d67d9fffebf does not exist
Nov 29 07:56:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 64e375a9-ca39-4d65-9bfe-656fb1d814d6 does not exist
Nov 29 07:56:50 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c0439441-72db-451c-83b8-7fe55102110a does not exist
Nov 29 07:56:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:56:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:56:50 compute-0 ceph-mon[75050]: pgmap v1578: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 98 op/s
Nov 29 07:56:50 compute-0 ceph-mon[75050]: osdmap e247: 3 total, 3 up, 3 in
Nov 29 07:56:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1948672151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:56:50 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.646 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:56:50 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:56:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:56:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.695 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:50 compute-0 sshd-session[280610]: Invalid user nexus from 143.14.121.41 port 45060
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.740 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.745 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:50 compute-0 sudo[280785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:50 compute-0 sudo[280785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:50 compute-0 sudo[280785]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.776 256736 DEBUG nova.policy [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c0b3479158714faaa4e8c3c336457d6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aede5de4449e445582aa074918be39c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
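The policy failure above is routine rather than an error: allocating the port checks network:attach_external_network, and a caller holding only reader/member roles fails the admin-only rule, so nova proceeds without external-network privileges. A sketch of the same check through oslo.policy; the "is_admin:True" check string is an assumption standing in for nova's real default:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "is_admin:True"))  # assumed default

    creds = {"roles": ["reader", "member"], "is_admin": False,
             "project_id": "aede5de4449e445582aa074918be39c9"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))  # False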
Nov 29 07:56:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 713 B/s wr, 71 op/s
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.822 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
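qemu-img is never run bare here: oslo_concurrency.prlimit re-execs it with a 1 GiB address-space cap and a 30-second CPU cap so a malformed base image cannot wedge the compute host, and --force-share allows inspection while the file is in use. The exact invocation from the log, wrapped for reuse:

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389"
    out = subprocess.run(
        ["python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True)
    info = json.loads(out.stdout)
    print(info["format"], info["virtual-size"])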
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.824 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.824 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.825 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:50 compute-0 sudo[280847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:56:50 compute-0 sudo[280847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:50 compute-0 sudo[280847]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.854 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:50 compute-0 nova_compute[256729]: 2025-11-29 07:56:50.859 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:50 compute-0 sudo[280889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:50 compute-0 sudo[280889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:50 compute-0 sudo[280889]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:50 compute-0 sudo[280918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:56:50 compute-0 sudo[280918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:51 compute-0 nova_compute[256729]: 2025-11-29 07:56:51.193 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:51 compute-0 sshd-session[280610]: Connection closed by invalid user nexus 143.14.121.41 port 45060 [preauth]
Nov 29 07:56:51 compute-0 nova_compute[256729]: 2025-11-29 07:56:51.260 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] resizing rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
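The boot disk is now in Ceph: the cached base file was imported into the vms pool as a format-2 image, and nova is resizing it to the flavor's 1073741824-byte (1 GiB) root disk. Nova performs the resize through librbd, but the same flow is reproducible with the rbd CLI (a sketch, not the driver's code path):

    import subprocess

    BASE = "/var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389"
    IMAGE = "147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMAGE,
                    "--image-format=2", *CEPH], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", IMAGE,
                    "--size", "1G", *CEPH], check=True)  # 1073741824 bytes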
Nov 29 07:56:51 compute-0 podman[281035]: 2025-11-29 07:56:51.300205356 +0000 UTC m=+0.037379868 container create f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:56:51 compute-0 systemd[1]: Started libpod-conmon-f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51.scope.
Nov 29 07:56:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:51 compute-0 podman[281035]: 2025-11-29 07:56:51.283453459 +0000 UTC m=+0.020627991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.074 256736 DEBUG nova.network.neutron [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Successfully created port: bc8c91e4-3e52-4696-8921-d8013cfb7b7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.172 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
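The req-27de958c lines are not part of the build: they are the compute manager's periodic-task loop waking on its own timer and logging each task as it runs. The pattern comes from oslo.service; a minimal sketch, with the task body and 60-second spacing as assumptions:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # assumed interval
        def _check_instance_build_time(self, context):
            pass  # hypothetical stand-in for nova's task of the same name

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # logs "Running periodic task ..."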
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.182 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:56:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:56:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:56:52 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:56:52 compute-0 podman[281035]: 2025-11-29 07:56:52.375718777 +0000 UTC m=+1.112893339 container init f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 07:56:52 compute-0 podman[281035]: 2025-11-29 07:56:52.385544839 +0000 UTC m=+1.122719351 container start f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:56:52 compute-0 zen_hoover[281072]: 167 167
Nov 29 07:56:52 compute-0 systemd[1]: libpod-f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51.scope: Deactivated successfully.
Nov 29 07:56:52 compute-0 podman[281035]: 2025-11-29 07:56:52.408673707 +0000 UTC m=+1.145848249 container attach f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 07:56:52 compute-0 podman[281035]: 2025-11-29 07:56:52.409896129 +0000 UTC m=+1.147070661 container died f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:56:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd5b56e109d064acdfdc69256f8a94243a3f7c50e187df804095f506e7cb0dc0-merged.mount: Deactivated successfully.
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.464 256736 DEBUG nova.objects.instance [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.479 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.479 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Ensure instance console log exists: /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.480 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.481 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.481 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:52 compute-0 nova_compute[256729]: 2025-11-29 07:56:52.504 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:52 compute-0 podman[281035]: 2025-11-29 07:56:52.504484544 +0000 UTC m=+1.241659056 container remove f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:56:52 compute-0 systemd[1]: libpod-conmon-f6b89e0c73ff3645159d17dd2bd010a8d38478b0a2459150ce8b2e3b6a3b3b51.scope: Deactivated successfully.
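The whole zen_hoover sequence (create, init, start, attach, died, remove, then both scopes deactivated) is the systemd/podman trace of a single short-lived `podman run --rm`: cephadm launches the ceph image, runs one command, and the container is gone within two seconds. The "167 167" it printed is the ceph uid/gid inside the image. A hypothetical equivalent; the stat command is an assumption, since the log does not record what was run:

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Produces the same create/init/start/attach/died/remove event chain.
    out = subprocess.run(
        ["podman", "run", "--rm", IMG, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True)
    print(out.stdout.strip())  # expect "167 167", as zen_hoover printed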
Nov 29 07:56:52 compute-0 podman[281118]: 2025-11-29 07:56:52.675202652 +0000 UTC m=+0.046894124 container create 4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 07:56:52 compute-0 systemd[1]: Started libpod-conmon-4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c.scope.
Nov 29 07:56:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e66b10c10e3dc6a4e7666120d0a37557df98b889d8b34c6de8d8e78c710820/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e66b10c10e3dc6a4e7666120d0a37557df98b889d8b34c6de8d8e78c710820/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e66b10c10e3dc6a4e7666120d0a37557df98b889d8b34c6de8d8e78c710820/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e66b10c10e3dc6a4e7666120d0a37557df98b889d8b34c6de8d8e78c710820/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1e66b10c10e3dc6a4e7666120d0a37557df98b889d8b34c6de8d8e78c710820/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:52 compute-0 podman[281118]: 2025-11-29 07:56:52.652486695 +0000 UTC m=+0.024178187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:52 compute-0 podman[281118]: 2025-11-29 07:56:52.759586354 +0000 UTC m=+0.131277856 container init 4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:56:52 compute-0 podman[281118]: 2025-11-29 07:56:52.767033363 +0000 UTC m=+0.138724835 container start 4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:56:52 compute-0 podman[281118]: 2025-11-29 07:56:52.769680014 +0000 UTC m=+0.141371516 container attach 4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 07:56:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 614 B/s wr, 61 op/s
Nov 29 07:56:53 compute-0 ceph-mon[75050]: pgmap v1580: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 713 B/s wr, 71 op/s
Nov 29 07:56:53 compute-0 ovn_controller[153383]: 2025-11-29T07:56:53Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:a2:6a 10.100.0.7
Nov 29 07:56:53 compute-0 ovn_controller[153383]: 2025-11-29T07:56:53Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:a2:6a 10.100.0.7
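No dnsmasq is involved in those two lines: OVN's pinctrl thread answers DHCP natively for the port, offering then acknowledging the fixed IP 10.100.0.7 for MAC fa:16:3e:e1:a2:6a, the same pair recorded in the instance_info_cache earlier. Correlating such lines back to ports is a one-liner; the log_line value below is a stand-in for a journal entry:

    import re

    log_line = ("2025-11-29T07:56:53Z|00021|pinctrl(ovn_pinctrl0)|INFO|"
                "DHCPACK fa:16:3e:e1:a2:6a 10.100.0.7")
    m = re.search(r"DHCP(OFFER|ACK)\s+([0-9a-f:]{17})\s+(\S+)", log_line)
    if m:
        kind, mac, ip = m.groups()
        print(kind, mac, ip)  # ACK fa:16:3e:e1:a2:6a 10.100.0.7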
Nov 29 07:56:53 compute-0 jolly_knuth[281134]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:56:53 compute-0 jolly_knuth[281134]: --> relative data size: 1.0
Nov 29 07:56:53 compute-0 jolly_knuth[281134]: --> All data devices are unavailable
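jolly_knuth is the ceph-volume batch run queued at 07:56:50: it was handed three LVs and no physical disks, and rejected all of them as unavailable, consistent with those LVs already carrying the three up/in OSDs the mon keeps reporting, so the run deploys nothing and exits. ceph-volume can show its accept/reject reasoning without touching anything via --report; a sketch, to be run inside the ceph container in a cephadm deployment:

    import subprocess

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True)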
Nov 29 07:56:53 compute-0 systemd[1]: libpod-4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c.scope: Deactivated successfully.
Nov 29 07:56:53 compute-0 systemd[1]: libpod-4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c.scope: Consumed 1.025s CPU time.
Nov 29 07:56:53 compute-0 podman[281118]: 2025-11-29 07:56:53.854730199 +0000 UTC m=+1.226421671 container died 4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 07:56:53 compute-0 nova_compute[256729]: 2025-11-29 07:56:53.870 256736 DEBUG nova.network.neutron [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Successfully updated port: bc8c91e4-3e52-4696-8921-d8013cfb7b7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:56:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1e66b10c10e3dc6a4e7666120d0a37557df98b889d8b34c6de8d8e78c710820-merged.mount: Deactivated successfully.
Nov 29 07:56:53 compute-0 podman[281118]: 2025-11-29 07:56:53.930328097 +0000 UTC m=+1.302019579 container remove 4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:56:53 compute-0 systemd[1]: libpod-conmon-4f6467f854ca8db63359c6ab6f87d69e3b3d2fb8432eb595b32c4a15beacd68c.scope: Deactivated successfully.
Nov 29 07:56:53 compute-0 sudo[280918]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:54 compute-0 sudo[281176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:54 compute-0 sudo[281176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:54 compute-0 sudo[281176]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:54 compute-0 sudo[281201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:56:54 compute-0 sudo[281201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:54 compute-0 sudo[281201]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:54 compute-0 sudo[281226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:54 compute-0 sudo[281226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:54 compute-0 sudo[281226]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:54 compute-0 sudo[281252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:56:54 compute-0 sudo[281252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.293 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.293 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquired lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.294 256736 DEBUG nova.network.neutron [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.298 256736 DEBUG nova.compute.manager [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-changed-bc8c91e4-3e52-4696-8921-d8013cfb7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.298 256736 DEBUG nova.compute.manager [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Refreshing instance network info cache due to event network-changed-bc8c91e4-3e52-4696-8921-d8013cfb7b7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.298 256736 DEBUG oslo_concurrency.lockutils [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:56:54 compute-0 podman[281250]: 2025-11-29 07:56:54.318011976 +0000 UTC m=+0.084515087 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 07:56:54 compute-0 podman[281297]: 2025-11-29 07:56:54.396281235 +0000 UTC m=+0.048803813 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:56:54 compute-0 ceph-mon[75050]: pgmap v1581: 305 pgs: 305 active+clean; 134 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 614 B/s wr, 61 op/s
Nov 29 07:56:54 compute-0 podman[281296]: 2025-11-29 07:56:54.428994529 +0000 UTC m=+0.086801868 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
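These three health_status=healthy entries are podman's healthcheck timers firing for the edpm-managed containers; per the config_data embedded in each line, the probe is the /openstack/healthcheck script mounted into the container, and the failing streak resets on success. The same probe can be fired by hand, where exit status 0 means healthy:

    import subprocess

    # rc 0 for healthy, non-zero otherwise; container name taken from the log.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy")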
Nov 29 07:56:54 compute-0 podman[281382]: 2025-11-29 07:56:54.60734281 +0000 UTC m=+0.036258519 container create df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 07:56:54 compute-0 systemd[1]: Started libpod-conmon-df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61.scope.
Nov 29 07:56:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:54 compute-0 podman[281382]: 2025-11-29 07:56:54.684160241 +0000 UTC m=+0.113076000 container init df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:56:54 compute-0 podman[281382]: 2025-11-29 07:56:54.591845276 +0000 UTC m=+0.020761005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:54 compute-0 podman[281382]: 2025-11-29 07:56:54.691760724 +0000 UTC m=+0.120700253 container start df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:56:54 compute-0 podman[281382]: 2025-11-29 07:56:54.695076202 +0000 UTC m=+0.123991931 container attach df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:56:54 compute-0 frosty_villani[281399]: 167 167
Nov 29 07:56:54 compute-0 systemd[1]: libpod-df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61.scope: Deactivated successfully.
Nov 29 07:56:54 compute-0 podman[281382]: 2025-11-29 07:56:54.69722653 +0000 UTC m=+0.126142239 container died df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 07:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa5f0d6299857c295344916ac9c7b720f6236f0bfb7d1fc1173d7c229d74ed4-merged.mount: Deactivated successfully.
Nov 29 07:56:54 compute-0 podman[281382]: 2025-11-29 07:56:54.732364247 +0000 UTC m=+0.161279966 container remove df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 29 07:56:54 compute-0 systemd[1]: libpod-conmon-df07d047e914ebc5ca934412bfe751e3324af0dd4c087dc0207f46a02cb89a61.scope: Deactivated successfully.
Nov 29 07:56:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 189 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 258 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Nov 29 07:56:54 compute-0 nova_compute[256729]: 2025-11-29 07:56:54.807 256736 DEBUG nova.network.neutron [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:56:54 compute-0 podman[281424]: 2025-11-29 07:56:54.909025563 +0000 UTC m=+0.053253852 container create ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 07:56:54 compute-0 systemd[1]: Started libpod-conmon-ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5.scope.
Nov 29 07:56:54 compute-0 podman[281424]: 2025-11-29 07:56:54.881347464 +0000 UTC m=+0.025575833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c4e0f7dfbf0fad51f52f5bf66a9600d390e38f830873f3fdafad3e43bad56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c4e0f7dfbf0fad51f52f5bf66a9600d390e38f830873f3fdafad3e43bad56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c4e0f7dfbf0fad51f52f5bf66a9600d390e38f830873f3fdafad3e43bad56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c4e0f7dfbf0fad51f52f5bf66a9600d390e38f830873f3fdafad3e43bad56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:55 compute-0 nova_compute[256729]: 2025-11-29 07:56:55.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:55 compute-0 nova_compute[256729]: 2025-11-29 07:56:55.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:55 compute-0 podman[281424]: 2025-11-29 07:56:55.174440279 +0000 UTC m=+0.318668598 container init ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:56:55 compute-0 podman[281424]: 2025-11-29 07:56:55.181897478 +0000 UTC m=+0.326125797 container start ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 07:56:55 compute-0 podman[281424]: 2025-11-29 07:56:55.186133861 +0000 UTC m=+0.330362170 container attach ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:56:55 compute-0 sshd-session[281075]: Invalid user media from 143.14.121.41 port 45070
Nov 29 07:56:55 compute-0 nova_compute[256729]: 2025-11-29 07:56:55.601 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:55 compute-0 nova_compute[256729]: 2025-11-29 07:56:55.602 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:55 compute-0 nova_compute[256729]: 2025-11-29 07:56:55.602 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
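[editor's note] The three lockutils lines above show the standard oslo.concurrency named-lock pattern the resource tracker wraps around "compute_resources" (acquire, run, release, with waited/held timings logged). A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the guarded function body is illustrative, not nova's code:

```python
from oslo_concurrency import lockutils

def clean_compute_node_cache():
    # lockutils.lock() is a context manager; by default the lock is
    # process-local (external=False), matching the in-process
    # serialization visible in the log lines above.
    with lockutils.lock("compute_resources"):
        # ... critical section: mutate tracker state safely ...
        pass
```

The decorator form `@lockutils.synchronized("compute_resources")` is equivalent and is closer to how nova annotates these methods.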
Nov 29 07:56:55 compute-0 nova_compute[256729]: 2025-11-29 07:56:55.603 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:56:55 compute-0 nova_compute[256729]: 2025-11-29 07:56:55.604 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]: {
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:     "0": [
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:         {
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "devices": [
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "/dev/loop3"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             ],
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_name": "ceph_lv0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_size": "21470642176",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "name": "ceph_lv0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "tags": {
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:56:55 compute-0 sshd-session[281075]: Connection closed by invalid user media 143.14.121.41 port 45070 [preauth]
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cluster_name": "ceph",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.crush_device_class": "",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.encrypted": "0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osd_id": "0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.type": "block",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.vdo": "0"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             },
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "type": "block",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "vg_name": "ceph_vg0"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:         }
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:     ],
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:     "1": [
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:         {
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "devices": [
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "/dev/loop4"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             ],
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_name": "ceph_lv1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_size": "21470642176",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "name": "ceph_lv1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "tags": {
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cluster_name": "ceph",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.crush_device_class": "",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.encrypted": "0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osd_id": "1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.type": "block",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.vdo": "0"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             },
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "type": "block",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "vg_name": "ceph_vg1"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:         }
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:     ],
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:     "2": [
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:         {
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "devices": [
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "/dev/loop5"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             ],
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_name": "ceph_lv2",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_size": "21470642176",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "name": "ceph_lv2",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "tags": {
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.cluster_name": "ceph",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.crush_device_class": "",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.encrypted": "0",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osd_id": "2",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.type": "block",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:                 "ceph.vdo": "0"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             },
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "type": "block",
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:             "vg_name": "ceph_vg2"
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:         }
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]:     ]
Nov 29 07:56:55 compute-0 confident_ishizaka[281440]: }
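[editor's note] The container output above is ceph-volume's JSON listing: one top-level key per OSD id, each carrying LV metadata plus the ceph.* tags. A short sketch of extracting the OSD-to-device mapping from output shaped exactly like this; the field names are taken from the log, the loader function itself is an assumption:

```python
import json

def osd_device_map(raw: str) -> dict:
    """Map each OSD id to its backing device, LV path, and OSD fsid.

    Expects the structure logged above:
    {"0": [{...}], "1": [{...}], "2": [{...}]}, where each entry has
    "devices", "lv_path", and a "tags" dict with ceph.* keys.
    """
    listing = json.loads(raw)
    result = {}
    for osd_id, entries in listing.items():
        for entry in entries:
            result[osd_id] = {
                "devices": entry["devices"],    # e.g. ["/dev/loop3"]
                "lv_path": entry["lv_path"],    # e.g. "/dev/ceph_vg0/ceph_lv0"
                "osd_fsid": entry["tags"]["ceph.osd_fsid"],
            }
    return result
```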
Nov 29 07:56:55 compute-0 systemd[1]: libpod-ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5.scope: Deactivated successfully.
Nov 29 07:56:56 compute-0 podman[281469]: 2025-11-29 07:56:56.034922959 +0000 UTC m=+0.030400853 container died ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 29 07:56:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86310260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-942c4e0f7dfbf0fad51f52f5bf66a9600d390e38f830873f3fdafad3e43bad56-merged.mount: Deactivated successfully.
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.083 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
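[editor's note] `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` is the command nova shells out to here (via oslo_concurrency.processutils, returning 0 in 0.480s) to audit Ceph-backed capacity. A hedged sketch of the same call and a typical read of the result; the JSON keys ("stats", "total_avail_bytes") follow the usual ceph df schema but do not appear in this log, so treat them as assumptions:

```python
import json
import subprocess

def ceph_avail_bytes(conf="/etc/ceph/ceph.conf", user="openstack") -> int:
    """Run the same `ceph df` invocation the resource audit logs above."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
        text=True,
    )
    df = json.loads(out)
    # Cluster-wide free space; per-pool figures live under df["pools"].
    return df["stats"]["total_avail_bytes"]
```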
Nov 29 07:56:56 compute-0 podman[281469]: 2025-11-29 07:56:56.101679711 +0000 UTC m=+0.097157585 container remove ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:56:56 compute-0 systemd[1]: libpod-conmon-ad9a3f1930ff53c61c2b1819b83008d6d616fd50c14300cef4ebe50483ef42a5.scope: Deactivated successfully.
Nov 29 07:56:56 compute-0 sudo[281252]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.170 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.170 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:56:56 compute-0 sudo[281487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:56 compute-0 sudo[281487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:56 compute-0 sudo[281487]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:56 compute-0 sudo[281513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:56:56 compute-0 sudo[281513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:56 compute-0 sudo[281513]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.353 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:56 compute-0 sudo[281538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:56 compute-0 sudo[281538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:56 compute-0 sudo[281538]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.374 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.375 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4252MB free_disk=59.93300247192383GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.375 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.376 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:56 compute-0 sudo[281563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:56:56 compute-0 sudo[281563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:56 compute-0 ceph-mon[75050]: pgmap v1582: 305 pgs: 305 active+clean; 189 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 258 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Nov 29 07:56:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/86310260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.543 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance b8cc435e-f1de-4ae2-990d-3e27f1e26a21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.543 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.544 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.544 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.561 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.607 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.608 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
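[editor's note] The inventory pushed to the ProviderTree above carries total/reserved/allocation_ratio per resource class; placement's effective schedulable capacity is (total - reserved) * allocation_ratio. A worked check against the logged numbers (the formula is placement's usual accounting; verify against your release):

```python
def capacity(total, reserved, allocation_ratio):
    # Effective schedulable capacity as placement computes it.
    return (total - reserved) * allocation_ratio

# Figures from the inventory logged above:
assert capacity(8, 0, 4.0) == 32.0            # VCPU: 32 schedulable vCPUs
assert capacity(7680, 512, 1.0) == 7168.0     # MEMORY_MB
assert abs(capacity(59, 1, 0.9) - 52.2) < 1e-9  # DISK_GB
```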
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.628 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.660 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.718 256736 DEBUG nova.network.neutron [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Updating instance_info_cache with network_info: [{"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.722 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.762 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Releasing lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.763 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Instance network_info: |[{"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.764 256736 DEBUG oslo_concurrency.lockutils [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.765 256736 DEBUG nova.network.neutron [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Refreshing network info cache for port bc8c91e4-3e52-4696-8921-d8013cfb7b7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.770 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Start _get_guest_xml network_info=[{"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.778 256736 WARNING nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.785 256736 DEBUG nova.virt.libvirt.host [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.786 256736 DEBUG nova.virt.libvirt.host [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.796 256736 DEBUG nova.virt.libvirt.host [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.797 256736 DEBUG nova.virt.libvirt.host [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:56:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 211 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 4.6 MiB/s wr, 87 op/s
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.798 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.798 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.799 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.799 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.800 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.800 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:56:56 compute-0 podman[281628]: 2025-11-29 07:56:56.800680851 +0000 UTC m=+0.063189558 container create 8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.800 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.801 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.801 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.801 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.801 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.802 256736 DEBUG nova.virt.hardware [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
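[editor's note] The hardware.py lines above walk nova's guest CPU topology selection: with no flavor or image constraints, limits default to 65536 sockets/cores/threads, and for a 1-vCPU flavor the only factorization is 1 socket x 1 core x 1 thread. An illustrative re-implementation of that enumeration (not nova's actual code, which lives in nova/virt/hardware.py):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product is vcpus."""
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        for cores in range(1, min(vcpus // sockets, max_cores) + 1):
            if (vcpus // sockets) % cores:
                continue
            threads = vcpus // sockets // cores
            if threads <= max_threads:
                topologies.append((sockets, cores, threads))
    return topologies

print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log above
print(possible_topologies(4))  # (1,1,4), (1,2,2), (2,2,1), (4,1,1), ...
```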
Nov 29 07:56:56 compute-0 nova_compute[256729]: 2025-11-29 07:56:56.805 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:56 compute-0 systemd[1]: Started libpod-conmon-8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991.scope.
Nov 29 07:56:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:56 compute-0 podman[281628]: 2025-11-29 07:56:56.78302907 +0000 UTC m=+0.045537807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:56 compute-0 podman[281628]: 2025-11-29 07:56:56.946405602 +0000 UTC m=+0.208914319 container init 8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 07:56:56 compute-0 podman[281628]: 2025-11-29 07:56:56.953143701 +0000 UTC m=+0.215652408 container start 8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 07:56:56 compute-0 podman[281628]: 2025-11-29 07:56:56.956273475 +0000 UTC m=+0.218782182 container attach 8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:56:56 compute-0 laughing_kowalevski[281644]: 167 167
Nov 29 07:56:56 compute-0 systemd[1]: libpod-8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991.scope: Deactivated successfully.
Nov 29 07:56:57 compute-0 podman[281687]: 2025-11-29 07:56:57.000152496 +0000 UTC m=+0.028400299 container died 8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:56:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:56:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479519777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.183 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.187 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.193 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.208 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
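
    The inventory dict above determines the capacity Placement schedules
    against: for each resource class the usable amount is
    (total - reserved) * allocation_ratio. A quick check of the numbers from
    the log:

        # Effective schedulable capacity per resource class, using the
        # standard Placement formula (total - reserved) * allocation_ratio.
        inventory = {
            "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
            "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
            "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
        }
        for rc, inv in inventory.items():
            usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
            print(f"{rc}: {usable}")
        # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 52.2
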
Nov 29 07:56:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:56:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1686578318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.240 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
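
    The "ceph mon dump --format=json" calls above are how Nova discovers the
    monitor endpoints that later appear as <host name=.../> entries in the
    disk XML further down. A hedged sketch of running and parsing that
    command; the "mons"/"public_addr" field names reflect the commonly seen
    mon dump JSON layout and should be treated as an assumption for your Ceph
    release:

        import json
        import subprocess

        # Same command the log shows; parse out the monitor addresses.
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            capture_output=True, text=True, check=True,
        ).stdout
        # "mons" / "name" / "public_addr" keys are an assumption here.
        for mon in json.loads(out).get("mons", []):
            print(mon.get("name"), mon.get("public_addr"))
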
Nov 29 07:56:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc3b4b5e24bfb83edcd2f0c943945d3e7689c141e368b8f3f0be8ed5e4af19c-merged.mount: Deactivated successfully.
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.269 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.274 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:57 compute-0 podman[281687]: 2025-11-29 07:56:57.276448932 +0000 UTC m=+0.304696675 container remove 8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kowalevski, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 07:56:57 compute-0 systemd[1]: libpod-conmon-8b9b65106823908e31b4b3c87ba9a0dadc675573d7ca8d7c26009cfb7955f991.scope: Deactivated successfully.
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.299 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.299 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.506 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:57 compute-0 podman[281751]: 2025-11-29 07:56:57.450906129 +0000 UTC m=+0.027582667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:56:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:56:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735710694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.692 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.694 256736 DEBUG nova.virt.libvirt.vif [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-599798234',display_name='tempest-VolumesSnapshotTestJSON-instance-599798234',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-599798234',id=11,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObDc+NbrQmtYcY6EwSmvzU0R3Gi/UQJqyfQZjkI4/toFRRTIoIgfCy8x3M1DrT2i/Xfl3y4TiKeD8LDdjTp6tKwDxJPyEMTV5d+3JcYVoid++iXEGL2INbaZ4J9doILLQ==',key_name='tempest-keypair-576206054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aede5de4449e445582aa074918be39c9',ramdisk_id='',reservation_id='r-kcxuajyb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1121052015',owner_user_name='tempest-VolumesSnapshotTestJSON-1121052015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c0b3479158714faaa4e8c3c336457d6d',uuid=147c2de5-0104-4eb0-bc20-b3bdc3909ed9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.694 256736 DEBUG nova.network.os_vif_util [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converting VIF {"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.696 256736 DEBUG nova.network.os_vif_util [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:64:59,bridge_name='br-int',has_traffic_filtering=True,id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8c91e4-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.698 256736 DEBUG nova.objects.instance [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.719 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <uuid>147c2de5-0104-4eb0-bc20-b3bdc3909ed9</uuid>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <name>instance-0000000b</name>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-599798234</nova:name>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:56:56</nova:creationTime>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:user uuid="c0b3479158714faaa4e8c3c336457d6d">tempest-VolumesSnapshotTestJSON-1121052015-project-member</nova:user>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:project uuid="aede5de4449e445582aa074918be39c9">tempest-VolumesSnapshotTestJSON-1121052015</nova:project>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <nova:port uuid="bc8c91e4-3e52-4696-8921-d8013cfb7b7c">
Nov 29 07:56:57 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <system>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <entry name="serial">147c2de5-0104-4eb0-bc20-b3bdc3909ed9</entry>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <entry name="uuid">147c2de5-0104-4eb0-bc20-b3bdc3909ed9</entry>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </system>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <os>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   </os>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <features>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   </features>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk">
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       </source>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk.config">
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       </source>
Nov 29 07:56:57 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:56:57 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:15:64:59"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <target dev="tapbc8c91e4-3e"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/console.log" append="off"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <video>
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </video>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:56:57 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:56:57 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:56:57 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:56:57 compute-0 nova_compute[256729]: </domain>
Nov 29 07:56:57 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
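
    Everything between <domain type="kvm"> and </domain> above is the
    complete guest definition the driver hands to libvirt. A minimal sketch
    of doing the same by hand with the libvirt-python bindings, assuming the
    XML has been saved to domain.xml and the caller has access rights on
    qemu:///system:

        import libvirt

        # Define (but do not start) a persistent domain from the dumped XML,
        # roughly what the nova libvirt driver does after _get_guest_xml.
        with open("domain.xml") as f:
            xml = f.read()
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)
            print("defined:", dom.name(), dom.UUIDString())
        finally:
            conn.close()
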
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.719 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Preparing to wait for external event network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.720 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.720 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.721 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.722 256736 DEBUG nova.virt.libvirt.vif [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:56:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-599798234',display_name='tempest-VolumesSnapshotTestJSON-instance-599798234',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-599798234',id=11,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObDc+NbrQmtYcY6EwSmvzU0R3Gi/UQJqyfQZjkI4/toFRRTIoIgfCy8x3M1DrT2i/Xfl3y4TiKeD8LDdjTp6tKwDxJPyEMTV5d+3JcYVoid++iXEGL2INbaZ4J9doILLQ==',key_name='tempest-keypair-576206054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aede5de4449e445582aa074918be39c9',ramdisk_id='',reservation_id='r-kcxuajyb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-1121052015',owner_user_name='tempest-VolumesSnapshotTestJSON-1121052015-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:56:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c0b3479158714faaa4e8c3c336457d6d',uuid=147c2de5-0104-4eb0-bc20-b3bdc3909ed9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.722 256736 DEBUG nova.network.os_vif_util [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converting VIF {"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.723 256736 DEBUG nova.network.os_vif_util [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:64:59,bridge_name='br-int',has_traffic_filtering=True,id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8c91e4-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.723 256736 DEBUG os_vif [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:64:59,bridge_name='br-int',has_traffic_filtering=True,id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8c91e4-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.724 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.725 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.726 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.731 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.732 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc8c91e4-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.733 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc8c91e4-3e, col_values=(('external_ids', {'iface-id': 'bc8c91e4-3e52-4696-8921-d8013cfb7b7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:64:59', 'vm-uuid': '147c2de5-0104-4eb0-bc20-b3bdc3909ed9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.734 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:57 compute-0 NetworkManager[48962]: <info>  [1764403017.7362] manager: (tapbc8c91e4-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.736 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:56:57 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.743 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:57 compute-0 nova_compute[256729]: 2025-11-29 07:56:57.745 256736 INFO os_vif [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:64:59,bridge_name='br-int',has_traffic_filtering=True,id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8c91e4-3e')
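
    The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are
    os-vif writing the port into the OVSDB; the external_ids are what lets
    ovn-controller match the interface to the Neutron port a few lines
    further down. A sketch of the equivalent plug done with plain ovs-vsctl
    (values copied from the log), driven from Python to keep the examples in
    one language:

        import subprocess

        # Equivalent of the OVSDB transaction in the log: add the tap device
        # to br-int and set the external_ids OVN uses to bind the logical port.
        port = "tapbc8c91e4-3e"
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
             "--", "set", "Interface", port,
             "external_ids:iface-id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c",
             "external_ids:iface-status=active",
             "external_ids:attached-mac=fa:16:3e:15:64:59",
             "external_ids:vm-uuid=147c2de5-0104-4eb0-bc20-b3bdc3909ed9"],
            check=True,
        )
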
Nov 29 07:56:57 compute-0 podman[281751]: 2025-11-29 07:56:57.956429114 +0000 UTC m=+0.533105612 container create 31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 07:56:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1479519777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:56:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1686578318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.007 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.007 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.007 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No VIF found with MAC fa:16:3e:15:64:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.008 256736 INFO nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Using config drive
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.038 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:58 compute-0 systemd[1]: Started libpod-conmon-31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e.scope.
Nov 29 07:56:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420af9e12b2d6d5e0811c15588a4b4e7b59feefc9f1958ecd772f5a914d96a81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420af9e12b2d6d5e0811c15588a4b4e7b59feefc9f1958ecd772f5a914d96a81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420af9e12b2d6d5e0811c15588a4b4e7b59feefc9f1958ecd772f5a914d96a81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420af9e12b2d6d5e0811c15588a4b4e7b59feefc9f1958ecd772f5a914d96a81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:56:58 compute-0 podman[281751]: 2025-11-29 07:56:58.126163335 +0000 UTC m=+0.702839843 container init 31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:56:58 compute-0 podman[281751]: 2025-11-29 07:56:58.133010668 +0000 UTC m=+0.709687176 container start 31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 29 07:56:58 compute-0 podman[281751]: 2025-11-29 07:56:58.14732173 +0000 UTC m=+0.723998228 container attach 31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.299 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.300 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.350 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.350 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.350 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.350 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.372 256736 DEBUG nova.network.neutron [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Updated VIF entry in instance network info cache for port bc8c91e4-3e52-4696-8921-d8013cfb7b7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.373 256736 DEBUG nova.network.neutron [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Updating instance_info_cache with network_info: [{"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:56:58 compute-0 nova_compute[256729]: 2025-11-29 07:56:58.395 256736 DEBUG oslo_concurrency.lockutils [req-6cf0fa64-fee5-455f-8816-25b2f1c8ea49 req-7f872b3b-12ff-4368-8768-e13b9feb72fa ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:56:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 4.7 MiB/s wr, 111 op/s
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.010 256736 INFO nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Creating config drive at /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/disk.config
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.018 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20gaqigi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:59 compute-0 ceph-mon[75050]: pgmap v1583: 305 pgs: 305 active+clean; 211 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 4.6 MiB/s wr, 87 op/s
Nov 29 07:56:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1735710694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:56:59 compute-0 vigorous_easley[281790]: {
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "osd_id": 2,
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "type": "bluestore"
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:     },
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "osd_id": 1,
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "type": "bluestore"
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:     },
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "osd_id": 0,
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:         "type": "bluestore"
Nov 29 07:56:59 compute-0 vigorous_easley[281790]:     }
Nov 29 07:56:59 compute-0 vigorous_easley[281790]: }
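
    The JSON block printed by the vigorous_easley container above is the
    device inventory cephadm gathers (ceph-volume raw list style: one entry
    per OSD, keyed by osd_uuid). A small sketch for flattening it into a
    readable device table, assuming the block has been saved to
    raw_list.json:

        import json

        # Flatten the per-OSD JSON from the log into one line per OSD.
        with open("raw_list.json") as f:
            osds = json.load(f)
        for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
            print(f"osd.{info['osd_id']}  {info['device']}  "
                  f"type={info['type']}  fsid={info['ceph_fsid']}")
        # First line: osd.0  /dev/mapper/ceph_vg0-ceph_lv0  type=bluestore  ...
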
Nov 29 07:56:59 compute-0 systemd[1]: libpod-31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e.scope: Deactivated successfully.
Nov 29 07:56:59 compute-0 podman[281751]: 2025-11-29 07:56:59.138121579 +0000 UTC m=+1.714798077 container died 31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 07:56:59 compute-0 systemd[1]: libpod-31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e.scope: Consumed 1.016s CPU time.
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.154 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20gaqigi" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
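
    The mkisofs run above builds the config drive ISO with volume label
    config-2. One hedged way to inspect it before the rbd import below,
    assuming isoinfo is available (it ships with the same genisoimage package
    that provides mkisofs on EL9) and the standard config-drive layout of
    openstack/latest/*.json:

        import subprocess

        # List the Joliet directory tree of the freshly built config drive.
        iso = ("/var/lib/nova/instances/"
               "147c2de5-0104-4eb0-bc20-b3bdc3909ed9/disk.config")
        listing = subprocess.run(
            ["isoinfo", "-J", "-l", "-i", iso],
            capture_output=True, text=True, check=True,
        ).stdout
        print(listing)  # expect openstack/latest/meta_data.json and friends
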
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.181 256736 DEBUG nova.storage.rbd_utils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] rbd image 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.184 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/disk.config 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:56:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:56:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-420af9e12b2d6d5e0811c15588a4b4e7b59feefc9f1958ecd772f5a914d96a81-merged.mount: Deactivated successfully.
Nov 29 07:56:59 compute-0 podman[281751]: 2025-11-29 07:56:59.374075958 +0000 UTC m=+1.950752466 container remove 31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:56:59 compute-0 systemd[1]: libpod-conmon-31c32d01de49df776f0c2e7ef56c9dbe9afcff53444e9d36091c44dc4ff6bd4e.scope: Deactivated successfully.
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.423 256736 DEBUG oslo_concurrency.processutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/disk.config 147c2de5-0104-4eb0-bc20-b3bdc3909ed9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.424 256736 INFO nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Deleting local config drive /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9/disk.config because it was imported into RBD.
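The five lines above are Nova's config-drive flow: mkisofs builds the ISO, rbd_utils confirms the image does not already exist, rbd imports it into the 'vms' pool, and the local copy is deleted. A minimal standalone sketch of the import-and-cleanup step in Python, reusing the paths and flags from the log (an illustration, not Nova's driver code; it assumes oslo.concurrency is installed and the client.openstack keyring is readable):

    import os
    from oslo_concurrency import processutils

    instance = '147c2de5-0104-4eb0-bc20-b3bdc3909ed9'
    local = '/var/lib/nova/instances/%s/disk.config' % instance

    # processutils.execute raises ProcessExecutionError on a non-zero exit,
    # matching the 'returned: 0' check logged above.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', local, instance + '_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # "Deleting local config drive ... because it was imported into RBD."
    os.unlink(local)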
Nov 29 07:56:59 compute-0 sudo[281563]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:56:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:56:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:56:59 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:56:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7ef7c622-a1cb-4783-b8e4-26656ca2ff78 does not exist
Nov 29 07:56:59 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 19ce369f-aa7d-4ffc-802f-72ef629918df does not exist
Nov 29 07:56:59 compute-0 kernel: tapbc8c91e4-3e: entered promiscuous mode
Nov 29 07:56:59 compute-0 NetworkManager[48962]: <info>  [1764403019.4800] manager: (tapbc8c91e4-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 07:56:59 compute-0 ovn_controller[153383]: 2025-11-29T07:56:59Z|00123|binding|INFO|Claiming lport bc8c91e4-3e52-4696-8921-d8013cfb7b7c for this chassis.
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.480 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 ovn_controller[153383]: 2025-11-29T07:56:59Z|00124|binding|INFO|bc8c91e4-3e52-4696-8921-d8013cfb7b7c: Claiming fa:16:3e:15:64:59 10.100.0.3
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.494 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:64:59 10.100.0.3'], port_security=['fa:16:3e:15:64:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '147c2de5-0104-4eb0-bc20-b3bdc3909ed9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aede5de4449e445582aa074918be39c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2cb872f-de4b-4850-8126-1e4dfb0f16a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6a691ab-2be1-4362-9a9a-3c54aabcf5a5, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=bc8c91e4-3e52-4696-8921-d8013cfb7b7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.496 163655 INFO neutron.agent.ovn.metadata.agent [-] Port bc8c91e4-3e52-4696-8921-d8013cfb7b7c in datapath 5908d283-c1b3-46ec-8e8e-b81d59c13f9a bound to our chassis
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.498 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5908d283-c1b3-46ec-8e8e-b81d59c13f9a
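The UPDATE match above comes from ovsdbapp's row-event machinery: the metadata agent registers an event against the southbound Port_Binding table and reacts when a port's chassis column flips to this host. A rough sketch of such an event class, assuming ovsdbapp's RowEvent base (the agent's real handler lives in neutron.agent.ovn.metadata.agent):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundToChassis(row_event.RowEvent):
        # Illustrative class, not Neutron's actual PortBindingUpdatedEvent.
        def __init__(self):
            # Watch 'update' events on Port_Binding with no extra conditions,
            # mirroring events=('update',), table='Port_Binding' in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the changed columns; the log shows
            # old=Port_Binding(chassis=[]), i.e. the port was just claimed.
            print('port %s bound, datapath %s' % (row.logical_port, row.datapath))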
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.508 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 ovn_controller[153383]: 2025-11-29T07:56:59Z|00125|binding|INFO|Setting lport bc8c91e4-3e52-4696-8921-d8013cfb7b7c ovn-installed in OVS
Nov 29 07:56:59 compute-0 ovn_controller[153383]: 2025-11-29T07:56:59Z|00126|binding|INFO|Setting lport bc8c91e4-3e52-4696-8921-d8013cfb7b7c up in Southbound
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.511 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.512 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa89f06-8f4c-417a-bc7d-6706d67ff0f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.513 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5908d283-c1 in ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.515 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5908d283-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.515 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbc1f47-32a2-47b6-a330-b295e2b06f6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
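Provisioning the datapath builds a veth pair: tap5908d283-c1 goes inside the ovnmeta- namespace while its peer tap5908d283-c0 stays in the root namespace to be plugged into OVS (the lookup for tap5908d283-c0 "not found in namespace None" above is the agent checking for leftovers). A minimal pyroute2 sketch of the same step, with names taken from the log; the real work is done by Neutron's privsep-backed ip_lib, so treat this as an approximation:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a'
    if ns not in netns.listnetns():
        netns.create(ns)

    with IPRoute() as ipr:
        # Create the veth pair in the root namespace...
        ipr.link('add', ifname='tap5908d283-c0', kind='veth',
                 peer={'ifname': 'tap5908d283-c1'})
        # ...move the -c1 end into the metadata namespace...
        idx = ipr.link_lookup(ifname='tap5908d283-c1')[0]
        ipr.link('set', index=idx, net_ns_fd=ns)
        # ...and bring the root-side end up; OVS attaches it later via
        # AddPortCommand(bridge=br-int, port=tap5908d283-c0).
        ipr.link('set', index=ipr.link_lookup(ifname='tap5908d283-c0')[0],
                 state='up')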
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.516 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 systemd-machined[217781]: New machine qemu-11-instance-0000000b.
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.518 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[16878c7a-1bf2-406f-a9cf-824ccdf3b708]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 systemd-udevd[281916]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.528 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[71568128-a0ba-48b0-b9e8-147306126aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 NetworkManager[48962]: <info>  [1764403019.5363] device (tapbc8c91e4-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:56:59 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 29 07:56:59 compute-0 NetworkManager[48962]: <info>  [1764403019.5381] device (tapbc8c91e4-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:56:59 compute-0 sudo[281883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.542 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bc29223c-9cbe-4998-98fb-db276b5f7c5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 sudo[281883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:59 compute-0 sudo[281883]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.572 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf64465-d00f-4540-a3b3-d5044b841778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 NetworkManager[48962]: <info>  [1764403019.5791] manager: (tap5908d283-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.576 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2d5bb20c-78a8-4598-bf94-0a5f3c3be7c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 systemd-udevd[281920]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:56:59 compute-0 sudo[281925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:56:59 compute-0 sudo[281925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.606 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[8f630d7e-e88f-444e-9b98-58152669e276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 sudo[281925]: pam_unix(sudo:session): session closed for user root
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.609 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[0b787d27-4a0b-467d-9cbc-0b22b06fe22c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 NetworkManager[48962]: <info>  [1764403019.6349] device (tap5908d283-c0): carrier: link connected
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.648 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[bfff3dd1-da74-4b1d-90f9-345ab664091b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.665 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3871b1-19c1-4c6d-8f64-68b92423fdc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5908d283-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:cf:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533469, 'reachable_time': 41022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281974, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.681 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a269f09a-de12-4e2e-aa6e-f16b4a3a8f5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:cfee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 533469, 'tstamp': 533469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281982, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.696 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[96fdb057-4cd5-4fbe-9140-033dc6fc6543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5908d283-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:cf:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533469, 'reachable_time': 41022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281992, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
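The two large privsep replies above are netlink RTM_NEWLINK dumps (plus one RTM_NEWADDR) for tap5908d283-c1, rendered as Python dicts. A short sketch that reads the same attributes back out of the namespace, assuming pyroute2 is available:

    from pyroute2 import NetNS

    # Print the fields visible in the dumps: name, MAC, oper state, MTU.
    with NetNS('ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_MTU'))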
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.721 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[547266b7-dc84-4357-a6c0-8a33827caec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.776 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.776 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.777 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.780 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[da9b1a0b-46ed-4fe3-b5fa-dc854191fb7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.781 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5908d283-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.781 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.781 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5908d283-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:59 compute-0 NetworkManager[48962]: <info>  [1764403019.7841] manager: (tap5908d283-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.783 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 kernel: tap5908d283-c0: entered promiscuous mode
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.786 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.787 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5908d283-c0, col_values=(('external_ids', {'iface-id': '9b4bf2c3-157d-4772-ab63-bb4e179af153'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.789 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 ovn_controller[153383]: 2025-11-29T07:56:59Z|00127|binding|INFO|Releasing lport 9b4bf2c3-157d-4772-ab63-bb4e179af153 from this chassis (sb_readonly=0)
Nov 29 07:56:59 compute-0 nova_compute[256729]: 2025-11-29 07:56:59.803 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.804 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.805 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[24a7e367-410c-4285-b8fe-6bece3b2a1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.806 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-5908d283-c1b3-46ec-8e8e-b81d59c13f9a
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.pid.haproxy
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 5908d283-c1b3-46ec-8e8e-b81d59c13f9a
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:56:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:56:59.807 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'env', 'PROCESS_TAG=haproxy-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5908d283-c1b3-46ec-8e8e-b81d59c13f9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
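The rendered configuration binds haproxy to 169.254.169.254:80 inside the namespace, tags each request with X-OVN-Network-ID, and forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy. A stripped-down sketch of the launch command the agent logs above, minus the neutron-rootwrap wrapper (requires root; the config path follows the log):

    import subprocess

    net = '5908d283-c1b3-46ec-8e8e-b81d59c13f9a'

    # haproxy daemonizes itself ('daemon' in the global section) and writes
    # the pidfile named there, which the ProcessMonitor checks later.
    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-' + net,
         'env', 'PROCESS_TAG=haproxy-' + net,
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net],
        check=True)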
Nov 29 07:56:59 compute-0 sshd-session[281511]: Invalid user mc from 143.14.121.41 port 43888
Nov 29 07:57:00 compute-0 podman[282026]: 2025-11-29 07:57:00.173678494 +0000 UTC m=+0.035052297 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:57:00 compute-0 sshd-session[281511]: Connection closed by invalid user mc 143.14.121.41 port 43888 [preauth]
Nov 29 07:57:00 compute-0 podman[282026]: 2025-11-29 07:57:00.552823625 +0000 UTC m=+0.414197388 container create fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 07:57:00 compute-0 ceph-mon[75050]: pgmap v1584: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 4.7 MiB/s wr, 111 op/s
Nov 29 07:57:00 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:57:00 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:57:00 compute-0 systemd[1]: Started libpod-conmon-fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7.scope.
Nov 29 07:57:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/315ba7488003b5b0338ca7747cbb0222584bd5a6f2f673d36d67fb5dc439ec90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:00 compute-0 podman[282026]: 2025-11-29 07:57:00.710768341 +0000 UTC m=+0.572142104 container init fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:57:00 compute-0 podman[282026]: 2025-11-29 07:57:00.717757888 +0000 UTC m=+0.579131621 container start fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.718 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403020.717675, 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.719 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] VM Started (Lifecycle Event)
Nov 29 07:57:00 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[282065]: [NOTICE]   (282070) : New worker (282072) forked
Nov 29 07:57:00 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[282065]: [NOTICE]   (282070) : Loading success.
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.758 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.764 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403020.718597, 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.764 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] VM Paused (Lifecycle Event)
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.788 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.794 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 4.1 MiB/s wr, 98 op/s
Nov 29 07:57:00 compute-0 nova_compute[256729]: 2025-11-29 07:57:00.818 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] During sync_power_state the instance has a pending task (spawning). Skip.
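The sync lines compare Nova's DB value against what libvirt reports; the integers are nova.compute.power_state constants, so "DB power_state: 0, VM power_state: 3" reads as NOSTATE vs PAUSED (libvirt starts the guest paused and resumes it once the VIF is plugged). A tiny decoding table, using the constant values Nova defines:

    # nova.compute.power_state constant values.
    POWER_STATE = {
        0: 'NOSTATE',    # DB value while the instance is still building
        1: 'RUNNING',    # seen after the "Resumed" lifecycle event
        3: 'PAUSED',     # seen at the "Paused" lifecycle event above
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    print(POWER_STATE[0], '->', POWER_STATE[3])  # building -> paused
    print(POWER_STATE[0], '->', POWER_STATE[1])  # building -> running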
Nov 29 07:57:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:01.262 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:57:01 compute-0 nova_compute[256729]: 2025-11-29 07:57:01.263 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:01.264 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:57:01 compute-0 nova_compute[256729]: 2025-11-29 07:57:01.695 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:02 compute-0 nova_compute[256729]: 2025-11-29 07:57:02.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:02 compute-0 nova_compute[256729]: 2025-11-29 07:57:02.187 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:02 compute-0 ceph-mon[75050]: pgmap v1585: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 365 KiB/s rd, 4.1 MiB/s wr, 98 op/s
Nov 29 07:57:02 compute-0 nova_compute[256729]: 2025-11-29 07:57:02.735 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 07:57:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:04 compute-0 sshd-session[282081]: Connection closed by authenticating user ftp 143.14.121.41 port 43894 [preauth]
Nov 29 07:57:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 29 07:57:05 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:05.266 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:57:05
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control']
Nov 29 07:57:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
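The balancer pass runs in upmap mode with a 5% max-misplaced budget and finds nothing to move across the ten eligible pools ("prepared 0/10 changes"). One way to poll the same module from Python, assuming the ceph CLI is on PATH and using its JSON output flag:

    import json
    import subprocess

    # 'ceph balancer status' reports the active flag and mode shown above.
    out = subprocess.run(['ceph', 'balancer', 'status', '-f', 'json'],
                         capture_output=True, text=True, check=True)
    status = json.loads(out.stdout)
    print(status.get('mode'), status.get('active'))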
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: client.0 ms_handle_reset on v2:192.168.122.100:6800/878361048
Nov 29 07:57:06 compute-0 ceph-mon[75050]: pgmap v1586: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1004 KiB/s wr, 47 op/s
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:57:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.084 256736 DEBUG nova.compute.manager [req-937dfe5d-dc6a-4cdb-9530-6efd1a8a726d req-3de10e4b-7f95-4e13-a4be-93c03f4daa11 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.085 256736 DEBUG oslo_concurrency.lockutils [req-937dfe5d-dc6a-4cdb-9530-6efd1a8a726d req-3de10e4b-7f95-4e13-a4be-93c03f4daa11 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.086 256736 DEBUG oslo_concurrency.lockutils [req-937dfe5d-dc6a-4cdb-9530-6efd1a8a726d req-3de10e4b-7f95-4e13-a4be-93c03f4daa11 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.086 256736 DEBUG oslo_concurrency.lockutils [req-937dfe5d-dc6a-4cdb-9530-6efd1a8a726d req-3de10e4b-7f95-4e13-a4be-93c03f4daa11 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.087 256736 DEBUG nova.compute.manager [req-937dfe5d-dc6a-4cdb-9530-6efd1a8a726d req-3de10e4b-7f95-4e13-a4be-93c03f4daa11 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Processing event network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.088 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.093 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403027.093201, 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.094 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] VM Resumed (Lifecycle Event)
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.098 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.103 256736 INFO nova.virt.libvirt.driver [-] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Instance spawned successfully.
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.103 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.121 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.128 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.131 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.131 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.132 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.132 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.132 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.133 256736 DEBUG nova.virt.libvirt.driver [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.161 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.199 256736 INFO nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Took 17.10 seconds to spawn the instance on the hypervisor.
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.200 256736 DEBUG nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.201 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.295 256736 INFO nova.compute.manager [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Took 18.86 seconds to build instance.
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.321 256736 DEBUG oslo_concurrency.lockutils [None req-86795d89-398b-488b-8c90-4881a77654ed c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:07 compute-0 ceph-mon[75050]: pgmap v1587: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Nov 29 07:57:07 compute-0 nova_compute[256729]: 2025-11-29 07:57:07.736 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575956728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3575956728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 115 KiB/s wr, 54 op/s
Nov 29 07:57:08 compute-0 ceph-mon[75050]: pgmap v1588: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1004 KiB/s wr, 47 op/s
Nov 29 07:57:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3575956728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3575956728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
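The client.openstack audit entries are a capacity poll: a cluster-wide 'df' followed by 'osd pool get-quota' on the volumes pool. The same mon commands can be issued through the librados Python binding; a sketch assuming python3-rados and a readable client.openstack keyring:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({'prefix': 'df', 'format': 'json'},
                    {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                     'format': 'json'}):
            # mon_command returns (retcode, output_buffer, error_string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd['prefix'], '->', json.loads(out) if ret == 0 else errs)
    finally:
        cluster.shutdown()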
Nov 29 07:57:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:09 compute-0 nova_compute[256729]: 2025-11-29 07:57:09.319 256736 DEBUG nova.compute.manager [req-4dc2846c-8be1-455c-a473-7323fcd09bc7 req-76998e27-e3cb-4f19-84ef-7adca2a15393 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:09 compute-0 nova_compute[256729]: 2025-11-29 07:57:09.319 256736 DEBUG oslo_concurrency.lockutils [req-4dc2846c-8be1-455c-a473-7323fcd09bc7 req-76998e27-e3cb-4f19-84ef-7adca2a15393 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:09 compute-0 nova_compute[256729]: 2025-11-29 07:57:09.319 256736 DEBUG oslo_concurrency.lockutils [req-4dc2846c-8be1-455c-a473-7323fcd09bc7 req-76998e27-e3cb-4f19-84ef-7adca2a15393 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:09 compute-0 nova_compute[256729]: 2025-11-29 07:57:09.319 256736 DEBUG oslo_concurrency.lockutils [req-4dc2846c-8be1-455c-a473-7323fcd09bc7 req-76998e27-e3cb-4f19-84ef-7adca2a15393 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:09 compute-0 nova_compute[256729]: 2025-11-29 07:57:09.319 256736 DEBUG nova.compute.manager [req-4dc2846c-8be1-455c-a473-7323fcd09bc7 req-76998e27-e3cb-4f19-84ef-7adca2a15393 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] No waiting events found dispatching network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:57:09 compute-0 nova_compute[256729]: 2025-11-29 07:57:09.319 256736 WARNING nova.compute.manager [req-4dc2846c-8be1-455c-a473-7323fcd09bc7 req-76998e27-e3cb-4f19-84ef-7adca2a15393 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received unexpected event network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c for instance with vm_state active and task_state None.
Nov 29 07:57:09 compute-0 sshd-session[282083]: Invalid user elasticsearch from 143.14.121.41 port 54110
Nov 29 07:57:09 compute-0 sshd-session[282083]: Connection closed by invalid user elasticsearch 143.14.121.41 port 54110 [preauth]
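Interleaved with the OpenStack traffic, 143.14.121.41 is cycling through common usernames (elasticsearch here, deploy/dan/bot further down) and disconnecting pre-auth, the signature of an SSH brute-force scan. A quick way to tally such attempts from a saved journal dump (a sketch; the filename is hypothetical and the regex only targets the "Invalid user ... from ..." form shown in these lines):

    # Count invalid-user SSH attempts per source IP from a saved journal file.
    import re
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    with open("journal.log") as fh:   # hypothetical dump of this log
        for line in fh:
            m = pattern.search(line)
            if m:
                hits[m.group(2)] += 1
    print(hits.most_common(5))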
Nov 29 07:57:10 compute-0 ceph-mon[75050]: pgmap v1589: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 115 KiB/s wr, 54 op/s
Nov 29 07:57:10 compute-0 nova_compute[256729]: 2025-11-29 07:57:10.292 256736 DEBUG nova.compute.manager [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-changed-bc8c91e4-3e52-4696-8921-d8013cfb7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:10 compute-0 nova_compute[256729]: 2025-11-29 07:57:10.293 256736 DEBUG nova.compute.manager [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Refreshing instance network info cache due to event network-changed-bc8c91e4-3e52-4696-8921-d8013cfb7b7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:57:10 compute-0 nova_compute[256729]: 2025-11-29 07:57:10.293 256736 DEBUG oslo_concurrency.lockutils [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:10 compute-0 nova_compute[256729]: 2025-11-29 07:57:10.293 256736 DEBUG oslo_concurrency.lockutils [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:10 compute-0 nova_compute[256729]: 2025-11-29 07:57:10.293 256736 DEBUG nova.network.neutron [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Refreshing network info cache for port bc8c91e4-3e52-4696-8921-d8013cfb7b7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:57:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 27 KiB/s wr, 34 op/s
Nov 29 07:57:11 compute-0 sshd-session[282085]: Invalid user deploy from 143.14.121.41 port 54114
Nov 29 07:57:12 compute-0 sshd-session[282085]: Connection closed by invalid user deploy 143.14.121.41 port 54114 [preauth]
Nov 29 07:57:12 compute-0 nova_compute[256729]: 2025-11-29 07:57:12.173 256736 DEBUG nova.network.neutron [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Updated VIF entry in instance network info cache for port bc8c91e4-3e52-4696-8921-d8013cfb7b7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:57:12 compute-0 nova_compute[256729]: 2025-11-29 07:57:12.175 256736 DEBUG nova.network.neutron [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Updating instance_info_cache with network_info: [{"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:12 compute-0 nova_compute[256729]: 2025-11-29 07:57:12.201 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:12 compute-0 nova_compute[256729]: 2025-11-29 07:57:12.292 256736 DEBUG oslo_concurrency.lockutils [req-cedfceef-c2f5-4f5d-b4d4-ed2bd70ab62c req-2e10bd3d-ff69-4ee6-92c9-de7e7410da35 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
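The network-changed event follows the same dispatch path but additionally refreshes the instance's network-info cache, serialized under a "refresh_cache-<uuid>" lock so concurrent events cannot interleave cache writes. oslo.concurrency exposes this pattern as a decorator; a minimal sketch, with a stand-in body for the neutron lookup and cache write seen above:

    # Serializing a cache refresh with oslo.concurrency, as nova does above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("refresh_cache-147c2de5-0104-4eb0-bc20-b3bdc3909ed9")
    def refresh_network_cache():
        # Placeholder for the port query + instance_info_cache update.
        print("refreshing instance_info_cache")

    refresh_network_cache()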
Nov 29 07:57:12 compute-0 ceph-mon[75050]: pgmap v1590: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 27 KiB/s wr, 34 op/s
Nov 29 07:57:12 compute-0 nova_compute[256729]: 2025-11-29 07:57:12.739 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 27 KiB/s wr, 55 op/s
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.079 256736 DEBUG oslo_concurrency.lockutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.080 256736 DEBUG oslo_concurrency.lockutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.097 256736 DEBUG nova.objects.instance [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lazy-loading 'flavor' on Instance uuid b8cc435e-f1de-4ae2-990d-3e27f1e26a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.125 256736 INFO nova.virt.libvirt.driver [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Ignoring supplied device name: /dev/vdb
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.145 256736 DEBUG oslo_concurrency.lockutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
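reserve_block_device_name runs under the instance lock and deliberately ignores the caller-supplied /dev/vdb hint: nova picks the next free device name itself (it still lands on vdb here, but only because vdb happens to be free). A toy version of that selection, much simplified relative to nova's real logic:

    # Toy "pick the next free virtio disk name", as done under the
    # reserve_block_device_name lock above (nova's real logic is more involved).
    import string

    def next_dev(used):
        for letter in string.ascii_lowercase:
            name = f"vd{letter}"
            if name not in used:
                return name
        raise RuntimeError("no free device names")

    print(next_dev({"vda"}))  # -> vdb, matching the log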
Nov 29 07:57:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.356 256736 DEBUG oslo_concurrency.lockutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.357 256736 DEBUG oslo_concurrency.lockutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.357 256736 INFO nova.compute.manager [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Attaching volume 478f735d-329a-4472-a15a-ff17cee69cb6 to /dev/vdb
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.489 256736 DEBUG os_brick.utils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.491 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.510 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.511 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc45f2f-ca73-48d9-8337-f6a8ecc8246f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.512 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.526 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.526 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[4372b3df-54e4-41ae-b77f-3617d6b7dacd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.528 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.536 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.536 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d1bab3-5f23-4486-8fdc-2faa0fc26524]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.537 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[a6873e7e-2c49-4dbc-bc82-064014bb3a50]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.538 256736 DEBUG oslo_concurrency.processutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.569 256736 DEBUG oslo_concurrency.processutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.572 256736 DEBUG os_brick.initiator.connectors.lightos [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.572 256736 DEBUG os_brick.initiator.connectors.lightos [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.572 256736 DEBUG os_brick.initiator.connectors.lightos [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.573 256736 DEBUG os_brick.utils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
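The trace between the ==> and <== markers is os-brick assembling the connector-properties dict for the attach: it probes multipathd, reads the iSCSI initiator IQN, checks what / is mounted from (overlay, so no local device), fetches the system UUID, runs nvme version, and tries the LightOS discovery client (connection refused, tolerated). The same dict can be produced with os-brick's public helper; a sketch to be run as root on a similarly equipped host (the plain "sudo" root helper is a simplification, the log uses nova-rootwrap):

    # Reproduce the connector-properties probe from the log with os-brick.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo",          # log uses: sudo nova-rootwrap ...
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    print(props["initiator"], props.get("nqn"))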
Nov 29 07:57:14 compute-0 nova_compute[256729]: 2025-11-29 07:57:14.573 256736 DEBUG nova.virt.block_device [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Updating existing volume attachment record: 3e85d737-5595-4e59-8e3d-fca11a008861 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:57:14 compute-0 ceph-mon[75050]: pgmap v1591: 305 pgs: 305 active+clean; 213 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 27 KiB/s wr, 55 op/s
Nov 29 07:57:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 215 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 31 KiB/s wr, 96 op/s
Nov 29 07:57:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2167732867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011084496280197955 of space, bias 1.0, pg target 0.33253488840593864 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003810355212606629 of space, bias 1.0, pg target 0.11431065637819887 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:57:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
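The pg_autoscaler figures above are internally consistent with 3 OSDs and the default mon_target_pg_per_osd of 100: the raw PG target is usage_ratio × bias × (OSDs × 100), then quantized to a power of two and clamped at the pool's existing/minimum pg_num. For 'vms': 0.0011084 × 1.0 × 300 ≈ 0.3325, exactly the logged target. A check of a few pools, assuming that 300 is the effective PG budget here:

    # Verify the autoscaler's logged pg targets (3 OSDs x 100 target PGs/OSD).
    budget = 3 * 100
    pools = {
        "vms":                (0.0011084496280197955, 1.0),
        "volumes":            (0.0003810355212606629, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),  # bias 4.0
    }
    for name, (ratio, bias) in pools.items():
        target = ratio * bias * budget
        print(f"{name}: pg target {target}")  # matches the ceph-mgr lines above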
Nov 29 07:57:15 compute-0 nova_compute[256729]: 2025-11-29 07:57:15.387 256736 DEBUG nova.objects.instance [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lazy-loading 'flavor' on Instance uuid b8cc435e-f1de-4ae2-990d-3e27f1e26a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:15 compute-0 nova_compute[256729]: 2025-11-29 07:57:15.412 256736 DEBUG nova.virt.libvirt.driver [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Attempting to attach volume 478f735d-329a-4472-a15a-ff17cee69cb6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:57:15 compute-0 nova_compute[256729]: 2025-11-29 07:57:15.415 256736 DEBUG nova.virt.libvirt.guest [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:57:15 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:57:15 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-478f735d-329a-4472-a15a-ff17cee69cb6">
Nov 29 07:57:15 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:15 compute-0 nova_compute[256729]:   </source>
Nov 29 07:57:15 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:57:15 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:57:15 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:57:15 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:57:15 compute-0 nova_compute[256729]:   <serial>478f735d-329a-4472-a15a-ff17cee69cb6</serial>
Nov 29 07:57:15 compute-0 nova_compute[256729]: </disk>
Nov 29 07:57:15 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:57:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1565579619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:16 compute-0 sshd-session[282087]: Invalid user dan from 143.14.121.41 port 35374
Nov 29 07:57:16 compute-0 ceph-mon[75050]: pgmap v1592: 305 pgs: 305 active+clean; 215 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 31 KiB/s wr, 96 op/s
Nov 29 07:57:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2167732867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1565579619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:16 compute-0 nova_compute[256729]: 2025-11-29 07:57:16.684 256736 DEBUG nova.virt.libvirt.driver [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:16 compute-0 nova_compute[256729]: 2025-11-29 07:57:16.684 256736 DEBUG nova.virt.libvirt.driver [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:16 compute-0 nova_compute[256729]: 2025-11-29 07:57:16.684 256736 DEBUG nova.virt.libvirt.driver [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:16 compute-0 nova_compute[256729]: 2025-11-29 07:57:16.685 256736 DEBUG nova.virt.libvirt.driver [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] No VIF found with MAC fa:16:3e:e1:a2:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:57:16 compute-0 sshd-session[282087]: Connection closed by invalid user dan 143.14.121.41 port 35374 [preauth]
Nov 29 07:57:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 225 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 185 KiB/s wr, 100 op/s
Nov 29 07:57:17 compute-0 nova_compute[256729]: 2025-11-29 07:57:17.203 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 29 07:57:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 07:57:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 29 07:57:17 compute-0 nova_compute[256729]: 2025-11-29 07:57:17.671 256736 DEBUG oslo_concurrency.lockutils [None req-453415dc-d9ba-4cbc-acb5-b8ba6a505b01 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:17 compute-0 nova_compute[256729]: 2025-11-29 07:57:17.741 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:18 compute-0 ceph-mon[75050]: pgmap v1593: 305 pgs: 305 active+clean; 225 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 185 KiB/s wr, 100 op/s
Nov 29 07:57:18 compute-0 ceph-mon[75050]: osdmap e248: 3 total, 3 up, 3 in
Nov 29 07:57:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.4 MiB/s wr, 127 op/s
Nov 29 07:57:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 29 07:57:20 compute-0 nova_compute[256729]: 2025-11-29 07:57:20.333 256736 DEBUG nova.compute.manager [req-5e01c841-85c6-437f-a24c-bf1e6f754094 req-85100e79-cb79-46dc-aea3-fb69232f995f ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event volume-extended-478f735d-329a-4472-a15a-ff17cee69cb6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:20 compute-0 nova_compute[256729]: 2025-11-29 07:57:20.358 256736 DEBUG nova.compute.manager [req-5e01c841-85c6-437f-a24c-bf1e6f754094 req-85100e79-cb79-46dc-aea3-fb69232f995f ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Handling volume-extended event for volume 478f735d-329a-4472-a15a-ff17cee69cb6 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Nov 29 07:57:20 compute-0 nova_compute[256729]: 2025-11-29 07:57:20.378 256736 INFO nova.compute.manager [req-5e01c841-85c6-437f-a24c-bf1e6f754094 req-85100e79-cb79-46dc-aea3-fb69232f995f ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Cinder extended volume 478f735d-329a-4472-a15a-ff17cee69cb6; extending it to detect new size
Nov 29 07:57:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.4 MiB/s wr, 127 op/s
Nov 29 07:57:20 compute-0 nova_compute[256729]: 2025-11-29 07:57:20.843 256736 DEBUG nova.virt.libvirt.driver [req-5e01c841-85c6-437f-a24c-bf1e6f754094 req-85100e79-cb79-46dc-aea3-fb69232f995f ecb39e11079b4fe1956168f4ef628305 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
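The volume-extended event ends with nova asking libvirt to grow the attached device to 2147483648 bytes, i.e. 2 GiB. The underlying libvirt primitive is blockResize, which takes KiB unless the BYTES flag is passed; a sketch matching the logged size (domain name again assumed from the scope below):

    # Resize the attached vdb to 2 GiB, matching the logged byte count.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000a")   # hypothetical lookup
    new_size = 2 * 1024 ** 3                       # 2147483648 bytes, as logged
    dom.blockResize("vdb", new_size, libvirt.VIR_DOMAIN_BLOCK_RESIZE_BYTES)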
Nov 29 07:57:21 compute-0 sshd-session[282116]: Invalid user bot from 143.14.121.41 port 35388
Nov 29 07:57:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 07:57:21 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 29 07:57:21 compute-0 ceph-mon[75050]: pgmap v1595: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.4 MiB/s wr, 127 op/s
Nov 29 07:57:21 compute-0 sshd-session[282116]: Connection closed by invalid user bot 143.14.121.41 port 35388 [preauth]
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.206 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.444 256736 DEBUG oslo_concurrency.lockutils [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.445 256736 DEBUG oslo_concurrency.lockutils [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.468 256736 INFO nova.compute.manager [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Detaching volume 478f735d-329a-4472-a15a-ff17cee69cb6
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.590 256736 INFO nova.virt.block_device [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Attempting to driver detach volume 478f735d-329a-4472-a15a-ff17cee69cb6 from mountpoint /dev/vdb
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.600 256736 DEBUG nova.virt.libvirt.driver [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Attempting to detach device vdb from instance b8cc435e-f1de-4ae2-990d-3e27f1e26a21 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.601 256736 DEBUG nova.virt.libvirt.guest [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-478f735d-329a-4472-a15a-ff17cee69cb6">
Nov 29 07:57:22 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   </source>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <serial>478f735d-329a-4472-a15a-ff17cee69cb6</serial>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]: </disk>
Nov 29 07:57:22 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.748 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 292 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 73 op/s
Nov 29 07:57:22 compute-0 ovn_controller[153383]: 2025-11-29T07:57:22Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:64:59 10.100.0.3
Nov 29 07:57:22 compute-0 ovn_controller[153383]: 2025-11-29T07:57:22Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:64:59 10.100.0.3
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.943 256736 INFO nova.virt.libvirt.driver [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Successfully detached device vdb from instance b8cc435e-f1de-4ae2-990d-3e27f1e26a21 from the persistent domain config.
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.944 256736 DEBUG nova.virt.libvirt.driver [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b8cc435e-f1de-4ae2-990d-3e27f1e26a21 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:57:22 compute-0 nova_compute[256729]: 2025-11-29 07:57:22.944 256736 DEBUG nova.virt.libvirt.guest [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-478f735d-329a-4472-a15a-ff17cee69cb6">
Nov 29 07:57:22 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   </source>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <serial>478f735d-329a-4472-a15a-ff17cee69cb6</serial>
Nov 29 07:57:22 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:57:22 compute-0 nova_compute[256729]: </disk>
Nov 29 07:57:22 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:57:22 compute-0 ceph-mon[75050]: pgmap v1596: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.4 MiB/s wr, 127 op/s
Nov 29 07:57:22 compute-0 ceph-mon[75050]: osdmap e249: 3 total, 3 up, 3 in
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.084 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403043.0836575, b8cc435e-f1de-4ae2-990d-3e27f1e26a21 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.086 256736 DEBUG nova.virt.libvirt.driver [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b8cc435e-f1de-4ae2-990d-3e27f1e26a21 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.089 256736 INFO nova.virt.libvirt.driver [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Successfully detached device vdb from instance b8cc435e-f1de-4ae2-990d-3e27f1e26a21 from the live domain config.
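The detach is two-phase: the device is first removed from the persistent domain config (synchronous), then from the live config, where nova retries up to 8 times ("(1/8)" above) and waits for libvirt's DeviceRemovedEvent before declaring success, because live detach needs the guest to release the device. A minimal "detach then confirm" sketch without the event machinery, polling the live XML instead:

    # Live-detach the RBD disk, then poll for it to leave the domain XML.
    import time
    import libvirt

    detach_xml = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-478f735d-329a-4472-a15a-ff17cee69cb6">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000000a")  # hypothetical lookup
    dom.detachDeviceFlags(detach_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    for _ in range(8):                            # mirrors nova's retry budget
        if "volume-478f735d" not in dom.XMLDesc():
            print("detached")
            break
        time.sleep(1)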
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.252 256736 DEBUG nova.objects.instance [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lazy-loading 'flavor' on Instance uuid b8cc435e-f1de-4ae2-990d-3e27f1e26a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.290 256736 DEBUG oslo_concurrency.lockutils [None req-34d42340-fb4e-4016-b5c7-2a652b283a31 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.817 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.818 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.819 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.819 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.819 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.821 256736 INFO nova.compute.manager [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Terminating instance
Nov 29 07:57:23 compute-0 nova_compute[256729]: 2025-11-29 07:57:23.822 256736 DEBUG nova.compute.manager [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:57:24 compute-0 podman[282124]: 2025-11-29 07:57:24.732286203 +0000 UTC m=+0.083598513 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 07:57:24 compute-0 podman[282123]: 2025-11-29 07:57:24.743735339 +0000 UTC m=+0.096313892 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:57:24 compute-0 podman[282122]: 2025-11-29 07:57:24.778058825 +0000 UTC m=+0.133539555 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
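The three podman lines are periodic healthcheck results for the ovn_metadata_agent, multipathd, and ovn_controller containers (all healthy, failing streak 0); the embedded config_data shows each check is a script mounted at /openstack/healthcheck. The current status can be read back on the host, e.g. (a sketch; assumes a podman version whose inspect output exposes State.Health, and the container names from the log):

    # Query container health the same way these journal lines report it.
    import subprocess

    for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        print(name, out.stdout.strip())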
Nov 29 07:57:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 322 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.3 MiB/s wr, 125 op/s
Nov 29 07:57:25 compute-0 ceph-mon[75050]: pgmap v1598: 305 pgs: 305 active+clean; 292 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 73 op/s
Nov 29 07:57:25 compute-0 kernel: tap0c5dc4c4-19 (unregistering): left promiscuous mode
Nov 29 07:57:25 compute-0 NetworkManager[48962]: <info>  [1764403045.6251] device (tap0c5dc4c4-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:57:25 compute-0 ovn_controller[153383]: 2025-11-29T07:57:25Z|00128|binding|INFO|Releasing lport 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c from this chassis (sb_readonly=0)
Nov 29 07:57:25 compute-0 ovn_controller[153383]: 2025-11-29T07:57:25Z|00129|binding|INFO|Setting lport 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c down in Southbound
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.639 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:25 compute-0 ovn_controller[153383]: 2025-11-29T07:57:25Z|00130|binding|INFO|Removing iface tap0c5dc4c4-19 ovn-installed in OVS
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.669 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:25.674 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:a2:6a 10.100.0.7'], port_security=['fa:16:3e:e1:a2:6a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b8cc435e-f1de-4ae2-990d-3e27f1e26a21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7bf43fdb064c4ac3bca9dd2593ccf7ce', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f6bae9e6-1792-464f-b5c8-5ba4b9d03ba3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80ae3c04-5883-4581-bcfa-da58cb4c9887, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:57:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:25.675 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 0c5dc4c4-1973-4476-a9d5-2a14d9f8302c in datapath 00f1b1a1-e01a-4267-8e2c-c523dd99b965 unbound from our chassis
Nov 29 07:57:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:25.676 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00f1b1a1-e01a-4267-8e2c-c523dd99b965, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:57:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:25.677 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[666f1840-e042-4e31-848d-474aee88e8e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:25.679 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965 namespace which is not needed anymore
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.689 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:25 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 29 07:57:25 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 15.572s CPU time.
Nov 29 07:57:25 compute-0 systemd-machined[217781]: Machine qemu-10-instance-0000000a terminated.
Nov 29 07:57:25 compute-0 neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965[280592]: [NOTICE]   (280596) : haproxy version is 2.8.14-c23fe91
Nov 29 07:57:25 compute-0 neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965[280592]: [NOTICE]   (280596) : path to executable is /usr/sbin/haproxy
Nov 29 07:57:25 compute-0 neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965[280592]: [WARNING]  (280596) : Exiting Master process...
Nov 29 07:57:25 compute-0 neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965[280592]: [ALERT]    (280596) : Current worker (280598) exited with code 143 (Terminated)
Nov 29 07:57:25 compute-0 neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965[280592]: [WARNING]  (280596) : All workers exited. Exiting... (0)
Nov 29 07:57:25 compute-0 systemd[1]: libpod-7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1.scope: Deactivated successfully.
Nov 29 07:57:25 compute-0 podman[282206]: 2025-11-29 07:57:25.831139887 +0000 UTC m=+0.058089151 container died 7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
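The lines above are the guest teardown fanning out across subsystems: the tap device leaves promiscuous mode and NetworkManager unmanages it, ovn-controller releases the logical port and marks it down in the Southbound DB, the metadata agent sees the unbind, finds no remaining VIFs on the network, and removes the ovnmeta- namespace, its per-network haproxy (a podman container) is signalled and exits with 143, and systemd reaps the qemu machine scope. Whether the namespace really went away can be checked from Python with pyroute2 (a sketch; pyroute2 is an assumption, not something these hosts necessarily have installed):

    # Confirm the ovnmeta- namespace was torn down, as the metadata agent logged.
    from pyroute2 import netns

    leftovers = [ns for ns in netns.listnetns() if ns.startswith("ovnmeta-")]
    print("remaining ovnmeta namespaces:", leftovers or "none")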
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.865 256736 INFO nova.virt.libvirt.driver [-] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Instance destroyed successfully.
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.866 256736 DEBUG nova.objects.instance [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lazy-loading 'resources' on Instance uuid b8cc435e-f1de-4ae2-990d-3e27f1e26a21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.884 256736 DEBUG nova.compute.manager [req-fa2aa347-879b-471f-a33f-22748a36b537 req-10806ba6-10b0-4b6d-a895-432e2089d2d1 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-vif-unplugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.885 256736 DEBUG oslo_concurrency.lockutils [req-fa2aa347-879b-471f-a33f-22748a36b537 req-10806ba6-10b0-4b6d-a895-432e2089d2d1 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.885 256736 DEBUG oslo_concurrency.lockutils [req-fa2aa347-879b-471f-a33f-22748a36b537 req-10806ba6-10b0-4b6d-a895-432e2089d2d1 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.886 256736 DEBUG oslo_concurrency.lockutils [req-fa2aa347-879b-471f-a33f-22748a36b537 req-10806ba6-10b0-4b6d-a895-432e2089d2d1 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.886 256736 DEBUG nova.compute.manager [req-fa2aa347-879b-471f-a33f-22748a36b537 req-10806ba6-10b0-4b6d-a895-432e2089d2d1 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] No waiting events found dispatching network-vif-unplugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.886 256736 DEBUG nova.compute.manager [req-fa2aa347-879b-471f-a33f-22748a36b537 req-10806ba6-10b0-4b6d-a895-432e2089d2d1 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-vif-unplugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.887 256736 DEBUG nova.virt.libvirt.vif [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-438353141',display_name='tempest-VolumesExtendAttachedTest-instance-438353141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-438353141',id=10,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGn5K8e6youL+V/TDF+jRWUHECX24yHN4WVE3KX6EwKnN9GA4h2ZT1MJr3xV6xOYUt5v+J53UarYfWFEzpB6qNHXL3bK/rzPTklAH5cSOpiLhI2xzvUa8JU3xBLsceMY7g==',key_name='tempest-keypair-1640688050',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:56:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7bf43fdb064c4ac3bca9dd2593ccf7ce',ramdisk_id='',reservation_id='r-fd1r8bcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-602415048',owner_user_name='tempest-VolumesExtendAttachedTest-602415048-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:56:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0c5cb3005d814da59b97c47aec6abaeb',uuid=b8cc435e-f1de-4ae2-990d-3e27f1e26a21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.888 256736 DEBUG nova.network.os_vif_util [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Converting VIF {"id": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "address": "fa:16:3e:e1:a2:6a", "network": {"id": "00f1b1a1-e01a-4267-8e2c-c523dd99b965", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1054592581-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7bf43fdb064c4ac3bca9dd2593ccf7ce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c5dc4c4-19", "ovs_interfaceid": "0c5dc4c4-1973-4476-a9d5-2a14d9f8302c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.889 256736 DEBUG nova.network.os_vif_util [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a2:6a,bridge_name='br-int',has_traffic_filtering=True,id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c,network=Network(00f1b1a1-e01a-4267-8e2c-c523dd99b965),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5dc4c4-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.890 256736 DEBUG os_vif [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a2:6a,bridge_name='br-int',has_traffic_filtering=True,id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c,network=Network(00f1b1a1-e01a-4267-8e2c-c523dd99b965),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5dc4c4-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
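
The unplug above goes through os-vif's public API with the VIFOpenVSwitch object printed in the log. A minimal sketch of that call path, with field values copied from the log lines above (only the standalone wiring around them is assumed):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the VIF plugins, including 'ovs'
    inst = instance_info.InstanceInfo(
        uuid='b8cc435e-f1de-4ae2-990d-3e27f1e26a21',
        name='tempest-VolumesExtendAttachedTest-instance-438353141')
    v = vif.VIFOpenVSwitch(
        id='0c5dc4c4-1973-4476-a9d5-2a14d9f8302c',
        address='fa:16:3e:e1:a2:6a',
        bridge_name='br-int',
        vif_name='tap0c5dc4c4-19',
        plugin='ovs',
        network=network.Network(id='00f1b1a1-e01a-4267-8e2c-c523dd99b965'))
    os_vif.unplug(v, inst)  # dispatches to the ovs plugin, as logged below
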
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.892 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.893 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c5dc4c4-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.897 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.898 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:57:25 compute-0 nova_compute[256729]: 2025-11-29 07:57:25.900 256736 INFO os_vif [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a2:6a,bridge_name='br-int',has_traffic_filtering=True,id=0c5dc4c4-1973-4476-a9d5-2a14d9f8302c,network=Network(00f1b1a1-e01a-4267-8e2c-c523dd99b965),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c5dc4c4-19')
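
Under the hood, the ovs plugin issues the DelPortCommand logged at 07:57:25.893 through ovsdbapp. A standalone sketch of the same transaction; the socket path is the usual local OVSDB default and may differ in this deployment:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Equivalent to DelPortCommand(port=tap0c5dc4c4-19, bridge=br-int, if_exists=True)
    api.del_port('tap0c5dc4c4-19', bridge='br-int',
                 if_exists=True).execute(check_error=True)
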
Nov 29 07:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1-userdata-shm.mount: Deactivated successfully.
Nov 29 07:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ab0b901284a965ebecae38bee6c56d7ddf612e2aac8b2e3da46569b60065cae-merged.mount: Deactivated successfully.
Nov 29 07:57:25 compute-0 podman[282206]: 2025-11-29 07:57:25.968877125 +0000 UTC m=+0.195826369 container cleanup 7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 07:57:25 compute-0 systemd[1]: libpod-conmon-7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1.scope: Deactivated successfully.
Nov 29 07:57:26 compute-0 podman[282266]: 2025-11-29 07:57:26.053282748 +0000 UTC m=+0.059890020 container remove 7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.059 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a38b7efd-9b58-4d8c-b4cb-7e55189bc391]: (4, ('Sat Nov 29 07:57:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965 (7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1)\n7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1\nSat Nov 29 07:57:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965 (7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1)\n7c95dc124221849b61b634147ddff81111aed6db0e1a49db17582844d6b47fe1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
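
The privsep reply above carries the output of the agent's container cleanup: a stop followed by a delete of the per-network haproxy container. The equivalent operations, sketched as direct podman invocations rather than the agent's privsep-wrapped helper:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM, then wait
    subprocess.run(['podman', 'rm', name], check=True)    # remove the stopped container
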
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.061 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c3113c-f5d8-4318-b0b2-e1a8c19a97b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.062 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00f1b1a1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.064 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:26 compute-0 kernel: tap00f1b1a1-e0: left promiscuous mode
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.066 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.069 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c06c6d5b-9b7d-433b-97c6-9393d79d8c55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.087 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.089 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[640a55f6-ddce-4298-a28c-082956059da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.091 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd452f1-b272-4b94-b310-956a796cbb8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.116 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3c7617-6c5c-49c7-8018-599a083fc271]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531354, 'reachable_time': 33642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282281, 'error': None, 'target': 'ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
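
That wall of IFLA_* attributes is a raw rtnetlink link dump (note 'target': the ovnmeta- namespace in the header), returned to the agent over privsep. The same data can be read with pyroute2, the library neutron's ip_lib wraps; a minimal sketch:

    from pyroute2 import NetNS

    # Inspect links inside the metadata namespace, as the privsep helper did.
    with NetNS('ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965') as ns:
        for link in ns.get_links():
            # get_attr() pulls a single IFLA_* attribute out of the dump
            print(link.get_attr('IFLA_IFNAME'),
                  link['state'], link.get_attr('IFLA_MTU'))
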
Nov 29 07:57:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d00f1b1a1\x2de01a\x2d4267\x2d8e2c\x2dc523dd99b965.mount: Deactivated successfully.
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.121 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
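
The namespace removal itself is a one-liner in pyroute2, which is what neutron.privileged.agent.linux.ip_lib's remove_netns delegates to:

    from pyroute2 import netns

    netns.remove('ovnmeta-00f1b1a1-e01a-4267-8e2c-c523dd99b965')
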
Nov 29 07:57:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:26.122 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[c09703a9-8233-4787-89bd-5d6a9d7e5495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.399 256736 INFO nova.virt.libvirt.driver [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Deleting instance files /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21_del
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.400 256736 INFO nova.virt.libvirt.driver [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Deletion of /var/lib/nova/instances/b8cc435e-f1de-4ae2-990d-3e27f1e26a21_del complete
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.460 256736 INFO nova.compute.manager [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Took 2.64 seconds to destroy the instance on the hypervisor.
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.462 256736 DEBUG oslo.service.loopingcall [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.462 256736 DEBUG nova.compute.manager [-] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:57:26 compute-0 nova_compute[256729]: 2025-11-29 07:57:26.463 256736 DEBUG nova.network.neutron [-] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:57:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3094269645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:26 compute-0 ceph-mon[75050]: pgmap v1599: 305 pgs: 305 active+clean; 322 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.3 MiB/s wr, 125 op/s
Nov 29 07:57:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3094269645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
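
The mon_command dispatches above are JSON commands submitted by the OpenStack services over librados, authenticated as client.openstack per the audit line. A minimal python-rados sketch of the same 'mon dump' call:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
        mons = json.loads(out)['mons']  # list of monitor descriptions
    finally:
        cluster.shutdown()
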
Nov 29 07:57:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 335 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.0 MiB/s wr, 125 op/s
Nov 29 07:57:27 compute-0 nova_compute[256729]: 2025-11-29 07:57:27.208 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 29 07:57:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 07:57:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 29 07:57:27 compute-0 nova_compute[256729]: 2025-11-29 07:57:27.991 256736 DEBUG nova.compute.manager [req-620faf55-5322-4bd8-bf63-bb43dc645277 req-be0218e7-d0f9-43ed-8bc2-604f90f6dcfe ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:27 compute-0 nova_compute[256729]: 2025-11-29 07:57:27.991 256736 DEBUG oslo_concurrency.lockutils [req-620faf55-5322-4bd8-bf63-bb43dc645277 req-be0218e7-d0f9-43ed-8bc2-604f90f6dcfe ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:27 compute-0 nova_compute[256729]: 2025-11-29 07:57:27.992 256736 DEBUG oslo_concurrency.lockutils [req-620faf55-5322-4bd8-bf63-bb43dc645277 req-be0218e7-d0f9-43ed-8bc2-604f90f6dcfe ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:27 compute-0 nova_compute[256729]: 2025-11-29 07:57:27.992 256736 DEBUG oslo_concurrency.lockutils [req-620faf55-5322-4bd8-bf63-bb43dc645277 req-be0218e7-d0f9-43ed-8bc2-604f90f6dcfe ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:27 compute-0 nova_compute[256729]: 2025-11-29 07:57:27.992 256736 DEBUG nova.compute.manager [req-620faf55-5322-4bd8-bf63-bb43dc645277 req-be0218e7-d0f9-43ed-8bc2-604f90f6dcfe ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] No waiting events found dispatching network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:57:27 compute-0 nova_compute[256729]: 2025-11-29 07:57:27.993 256736 WARNING nova.compute.manager [req-620faf55-5322-4bd8-bf63-bb43dc645277 req-be0218e7-d0f9-43ed-8bc2-604f90f6dcfe ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received unexpected event network-vif-plugged-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c for instance with vm_state active and task_state deleting.
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.024 256736 DEBUG nova.network.neutron [-] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.043 256736 INFO nova.compute.manager [-] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Took 1.58 seconds to deallocate network for instance.
Nov 29 07:57:28 compute-0 sshd-session[282118]: Invalid user arkserver from 143.14.121.41 port 52554
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.106 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.107 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.125 256736 DEBUG nova.compute.manager [req-89cb1b0c-6c38-4900-a8d1-9f6dde62244b req-42c4f05e-df9f-47f6-b9db-9655533f6469 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Received event network-vif-deleted-0c5dc4c4-1973-4476-a9d5-2a14d9f8302c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.226 256736 DEBUG oslo_concurrency.processutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:28 compute-0 sshd-session[282118]: Connection closed by invalid user arkserver 143.14.121.41 port 52554 [preauth]
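
The sshd-session entries (invalid users 'arkserver' here and 'ubuntu' below, both from 143.14.121.41) look like routine brute-force probes against the node. A hypothetical triage snippet, not part of this deployment, that counts such attempts per source IP from the journal:

    import collections, re, subprocess

    out = subprocess.check_output(
        ['journalctl', '--no-pager', '-o', 'cat'], text=True)
    hits = collections.Counter(
        m.group(1) for m in re.finditer(r'Invalid user \S+ from (\S+)', out))
    print(hits.most_common(5))
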
Nov 29 07:57:28 compute-0 ceph-mon[75050]: pgmap v1600: 305 pgs: 305 active+clean; 335 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.0 MiB/s wr, 125 op/s
Nov 29 07:57:28 compute-0 ceph-mon[75050]: osdmap e250: 3 total, 3 up, 3 in
Nov 29 07:57:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/681527661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.684 256736 DEBUG oslo_concurrency.processutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
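
Nova's RBD image backend shells out to 'ceph df' (the exact command logged above) to size its disk inventory. A sketch of the same probe and the fields it reads; the JSON key names below match recent Ceph releases but are worth verifying against yours:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(raw)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
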
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.694 256736 DEBUG nova.compute.provider_tree [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.712 256736 DEBUG nova.scheduler.client.report [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
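
The schedulable capacity implied by that inventory is (total - reserved) * allocation_ratio per resource class, which is how placement applies these records:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
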
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.736 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.763 256736 INFO nova.scheduler.client.report [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Deleted allocations for instance b8cc435e-f1de-4ae2-990d-3e27f1e26a21
Nov 29 07:57:28 compute-0 nova_compute[256729]: 2025-11-29 07:57:28.845 256736 DEBUG oslo_concurrency.lockutils [None req-f591d114-b480-483c-be40-f9a09966e0c7 0c5cb3005d814da59b97c47aec6abaeb 7bf43fdb064c4ac3bca9dd2593ccf7ce - - default default] Lock "b8cc435e-f1de-4ae2-990d-3e27f1e26a21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 324 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 8.0 MiB/s wr, 172 op/s
Nov 29 07:57:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/681527661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:30 compute-0 ceph-mon[75050]: pgmap v1602: 305 pgs: 305 active+clean; 324 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 8.0 MiB/s wr, 172 op/s
Nov 29 07:57:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 324 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 141 op/s
Nov 29 07:57:30 compute-0 nova_compute[256729]: 2025-11-29 07:57:30.897 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:31 compute-0 sshd-session[282305]: Invalid user ubuntu from 143.14.121.41 port 52562
Nov 29 07:57:32 compute-0 sshd-session[282305]: Connection closed by invalid user ubuntu 143.14.121.41 port 52562 [preauth]
Nov 29 07:57:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:57:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/639608504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:57:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/639608504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:32 compute-0 nova_compute[256729]: 2025-11-29 07:57:32.210 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:32 compute-0 ceph-mon[75050]: pgmap v1603: 305 pgs: 305 active+clean; 324 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 141 op/s
Nov 29 07:57:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/639608504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:57:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/639608504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:57:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 306 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 162 op/s
Nov 29 07:57:33 compute-0 ceph-mon[75050]: pgmap v1604: 305 pgs: 305 active+clean; 306 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 162 op/s
Nov 29 07:57:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 306 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 121 op/s
Nov 29 07:57:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:57:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:57:35 compute-0 nova_compute[256729]: 2025-11-29 07:57:35.899 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:35 compute-0 ovn_controller[153383]: 2025-11-29T07:57:35Z|00131|binding|INFO|Releasing lport 9b4bf2c3-157d-4772-ab63-bb4e179af153 from this chassis (sb_readonly=0)
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.079 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.080 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:36 compute-0 sshd-session[282307]: Connection closed by authenticating user root 143.14.121.41 port 48136 [preauth]
Nov 29 07:57:36 compute-0 ceph-mon[75050]: pgmap v1605: 305 pgs: 305 active+clean; 306 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 121 op/s
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.101 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.104 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.189 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.189 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.197 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.198 256736 INFO nova.compute.claims [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.345 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884409822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.826 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.836 256736 DEBUG nova.compute.provider_tree [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 104 op/s
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.859 256736 DEBUG nova.scheduler.client.report [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.888 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.889 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.940 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.940 256736 DEBUG nova.network.neutron [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.961 256736 INFO nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:57:36 compute-0 nova_compute[256729]: 2025-11-29 07:57:36.984 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.049 256736 INFO nova.virt.block_device [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Booting with volume f3b5216d-549d-4b06-8579-7bdf5ec8d7a8 at /dev/vda
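
"Booting with volume ... at /dev/vda" marks this as a boot-from-volume build: the root disk is a Cinder volume, so no local disks are created (see "Did not create local disks" below). A hedged openstacksdk sketch of the kind of request that produces this; FLAVOR_ID, NETWORK_ID, and the cloud name are placeholders:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # 'mycloud' is a placeholder
    server = conn.compute.create_server(
        name='bfv-example',
        flavor_id=FLAVOR_ID,
        networks=[{'uuid': NETWORK_ID}],
        block_device_mapping=[{           # sent as block_device_mapping_v2
            'uuid': 'f3b5216d-549d-4b06-8579-7bdf5ec8d7a8',
            'source_type': 'volume',
            'destination_type': 'volume',
            'boot_index': 0,
            'delete_on_termination': False,
        }])
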
Nov 29 07:57:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2884409822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.213 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.216 256736 DEBUG os_brick.utils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.217 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.235 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.235 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[2a153a58-e833-4851-b2f0-6809b774f914]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.237 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.249 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.250 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[a47eaa4c-10f7-44fa-9f91-d50ea3feb890]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.252 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.266 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.266 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[219c63c7-a38a-4505-bf69-8fab21f2a28f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.271 256736 DEBUG nova.policy [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11e11652beb841579a10eab85f0c13f9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9d3648d4d8b045ca9d33086f2d66a86b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.268 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[7bea13fb-2cf0-46a7-a07b-991be4b900a9]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.275 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.310 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.314 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.314 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.315 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.315 256736 DEBUG os_brick.utils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
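
The probes above (multipathd show status, the iSCSI initiator name, findmnt, nvme version) are how os-brick assembles the connector-properties dict it just returned. The public entry point, with the same arguments shown in the ==> trace at 07:57:37.216:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # props carries the initiator IQN, NVMe NQN/host ID, multipath flags, etc.
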
Nov 29 07:57:37 compute-0 nova_compute[256729]: 2025-11-29 07:57:37.316 256736 DEBUG nova.virt.block_device [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating existing volume attachment record: 7662aa1f-af7b-49fa-9795-c3292b6032e4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:57:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1501255320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:38 compute-0 ceph-mon[75050]: pgmap v1606: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 104 op/s
Nov 29 07:57:38 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1501255320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.329 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.331 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.332 256736 INFO nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Creating image(s)
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.332 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.332 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Ensure instance console log exists: /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.333 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.333 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.333 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:38 compute-0 nova_compute[256729]: 2025-11-29 07:57:38.413 256736 DEBUG nova.network.neutron [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Successfully created port: 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:57:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 92 op/s
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.397 256736 DEBUG nova.network.neutron [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Successfully updated port: 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.420 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.420 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquired lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.421 256736 DEBUG nova.network.neutron [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.482 256736 DEBUG nova.compute.manager [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.483 256736 DEBUG nova.compute.manager [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing instance network info cache due to event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.483 256736 DEBUG oslo_concurrency.lockutils [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.591 256736 DEBUG nova.network.neutron [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:57:39 compute-0 ovn_controller[153383]: 2025-11-29T07:57:39Z|00132|binding|INFO|Releasing lport 9b4bf2c3-157d-4772-ab63-bb4e179af153 from this chassis (sb_readonly=0)
Nov 29 07:57:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:39 compute-0 nova_compute[256729]: 2025-11-29 07:57:39.824 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.339 256736 DEBUG nova.network.neutron [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
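The network_info blob logged above is plain JSON, so the addressing details it carries (MAC, fixed IP, CIDR, MTU) can be pulled out directly. A small sketch, using an abbreviated copy of the structure from the log (field names match; the literal is trimmed for brevity):

    # Extract per-VIF addressing from a network_info structure like the one
    # logged above. The JSON literal is an abbreviated excerpt of that entry.
    import json

    network_info = json.loads("""[{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a",
      "address": "fa:16:3e:e4:99:59",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.8", "type": "fixed"}]}],
        "meta": {"mtu": 1442}}}]""")

    for vif in network_info:
        mac = vif["address"]
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(mac, ip["address"], subnet["cidr"])  # fa:16:3e:e4:99:59 10.100.0.8 10.100.0.0/28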
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.363 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Releasing lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.364 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Instance network_info: |[{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.364 256736 DEBUG oslo_concurrency.lockutils [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.365 256736 DEBUG nova.network.neutron [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.371 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Start _get_guest_xml network_info=[{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f3b5216d-549d-4b06-8579-7bdf5ec8d7a8', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f3b5216d-549d-4b06-8579-7bdf5ec8d7a8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '81e82526-de13-4350-a618-49168b2e029c', 'attached_at': '', 'detached_at': '', 'volume_id': 'f3b5216d-549d-4b06-8579-7bdf5ec8d7a8', 'serial': 'f3b5216d-549d-4b06-8579-7bdf5ec8d7a8'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '7662aa1f-af7b-49fa-9795-c3292b6032e4', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.378 256736 WARNING nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.387 256736 DEBUG nova.virt.libvirt.host [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.387 256736 DEBUG nova.virt.libvirt.host [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.392 256736 DEBUG nova.virt.libvirt.host [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.392 256736 DEBUG nova.virt.libvirt.host [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.393 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.394 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.394 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.395 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.395 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.396 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.396 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.397 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.397 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.398 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.398 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.399 256736 DEBUG nova.virt.hardware [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
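The hardware.py entries above walk from flavor/image limits (all unset, logged as 0:0:0) to the one possible topology for a single vCPU. A simplified sketch of that enumeration step, assuming only that candidates are sockets x cores x threads factorizations of the vCPU count capped by the logged 65536 limits (this is illustrative, not nova's actual algorithm):

    # Enumerate sockets*cores*threads factorizations of a vCPU count,
    # capped per dimension; simplified relative to nova.virt.hardware.
    def possible_topologies(vcpus, max_dim=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_dim) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_dim) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_dim:
                    topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log above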
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.516 256736 DEBUG nova.storage.rbd_utils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] rbd image 81e82526-de13-4350-a618-49168b2e029c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.523 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:40 compute-0 ceph-mon[75050]: pgmap v1607: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 92 op/s
Nov 29 07:57:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 151 KiB/s wr, 40 op/s
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.861 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403045.8603737, b8cc435e-f1de-4ae2-990d-3e27f1e26a21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.862 256736 INFO nova.compute.manager [-] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] VM Stopped (Lifecycle Event)
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.885 256736 DEBUG nova.compute.manager [None req-4ade719c-de39-4ae3-883c-b63b28e9cfeb - - - - - -] [instance: b8cc435e-f1de-4ae2-990d-3e27f1e26a21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:40 compute-0 nova_compute[256729]: 2025-11-29 07:57:40.902 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160431038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.030 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
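Before touching the RBD pool, nova shells out to "ceph mon dump --format=json" (the 0.507s subprocess bracketed by the two processutils entries above) to learn the monitor addresses. A hedged sketch of the same call with the standard library; the binary, flags, conf path, and client id are taken from the log, the error handling and return shape are illustrative:

    # Re-issue the monitor discovery command logged above and parse it.
    import json
    import subprocess

    def ceph_mon_addresses(conf="/etc/ceph/ceph.conf", client="openstack"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        dump = json.loads(out)
        # Each entry in "mons" carries the address of one monitor.
        return [mon.get("addr") for mon in dump.get("mons", [])]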
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.060 256736 DEBUG nova.virt.libvirt.vif [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1898927466',display_name='tempest-TestVolumeBackupRestore-server-1898927466',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1898927466',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD2oFI/eGmO1KgdFdCw086AXtsGkSZs6bW4dbz9aiUPNwQuO25ubQRmkoDg9ydhZ/LChEtq7wDa01nNKHOQTVokzoOLAWiQnh1zMeKdK8LOnzJC9plK7JZmNkhNTjpNqfg==',key_name='tempest-TestVolumeBackupRestore-309075728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d3648d4d8b045ca9d33086f2d66a86b',ramdisk_id='',reservation_id='r-zak6fese',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1464538626',owner_user_name='tempest-TestVolumeBackupRestore-1464538626-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:37Z,user_data=None,user_id='11e11652beb841579a10eab85f0c13f9',uuid=81e82526-de13-4350-a618-49168b2e029c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.061 256736 DEBUG nova.network.os_vif_util [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Converting VIF {"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.062 256736 DEBUG nova.network.os_vif_util [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:99:59,bridge_name='br-int',has_traffic_filtering=True,id=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a,network=Network(22f97d85-f65d-44f6-8f02-46e31590c8a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3142a2d6-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.064 256736 DEBUG nova.objects.instance [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lazy-loading 'pci_devices' on Instance uuid 81e82526-de13-4350-a618-49168b2e029c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.083 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <uuid>81e82526-de13-4350-a618-49168b2e029c</uuid>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <name>instance-0000000c</name>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBackupRestore-server-1898927466</nova:name>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:57:40</nova:creationTime>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:user uuid="11e11652beb841579a10eab85f0c13f9">tempest-TestVolumeBackupRestore-1464538626-project-member</nova:user>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:project uuid="9d3648d4d8b045ca9d33086f2d66a86b">tempest-TestVolumeBackupRestore-1464538626</nova:project>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <nova:port uuid="3142a2d6-f8a3-4cb2-b1f3-d90d7877515a">
Nov 29 07:57:41 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <system>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <entry name="serial">81e82526-de13-4350-a618-49168b2e029c</entry>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <entry name="uuid">81e82526-de13-4350-a618-49168b2e029c</entry>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </system>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <os>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   </os>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <features>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   </features>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/81e82526-de13-4350-a618-49168b2e029c_disk.config">
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       </source>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-f3b5216d-549d-4b06-8579-7bdf5ec8d7a8">
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       </source>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:57:41 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <serial>f3b5216d-549d-4b06-8579-7bdf5ec8d7a8</serial>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:e4:99:59"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <target dev="tap3142a2d6-f8"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/console.log" append="off"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <video>
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </video>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:57:41 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:57:41 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:57:41 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:57:41 compute-0 nova_compute[256729]: </domain>
Nov 29 07:57:41 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
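The domain XML dumped above is ordinary libvirt XML, so its storage layout can be checked mechanically. A sketch, assuming xml_text holds the document between the <domain> and </domain> lines above (the helper name and return shape are hypothetical):

    # Pull the rbd-backed disks out of a libvirt domain XML like the one
    # logged above: (rbd image name, guest target device) pairs.
    import xml.etree.ElementTree as ET

    def rbd_disks(xml_text):
        root = ET.fromstring(xml_text)
        disks = []
        for disk in root.findall("./devices/disk"):
            src = disk.find("source")
            if src is not None and src.get("protocol") == "rbd":
                target = disk.find("target")
                disks.append((src.get("name"), target.get("dev")))
        return disks

    # For the XML above this yields the config-drive CD-ROM on sda and the
    # Cinder volume volumes/volume-f3b5216d-... on vda.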
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.085 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Preparing to wait for external event network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.085 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.086 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.086 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
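The "Preparing to wait for external event network-vif-plugged-..." entry, together with the "<uuid>-events" lock triple above, shows nova registering for a Neutron callback before it plugs the VIF, so the later wait cannot miss the event. A minimal sketch of that register-then-deliver pattern with stdlib primitives (illustrative only, not nova's InstanceEvents implementation):

    # Register an event object before starting the operation; the Neutron
    # notification path sets it later via deliver().
    import threading

    class InstanceEventsSketch:
        def __init__(self):
            self._lock = threading.Lock()   # mirrors the "<uuid>-events" lock
            self._events = {}

        def prepare(self, instance_uuid, event_name):
            with self._lock:
                return self._events.setdefault(
                    (instance_uuid, event_name), threading.Event())

        def deliver(self, instance_uuid, event_name):
            with self._lock:
                ev = self._events.pop((instance_uuid, event_name), None)
            if ev:
                ev.set()

    # Caller: ev = events.prepare(uuid, "network-vif-plugged-<port>");
    # plug the VIF; then ev.wait(timeout) until Neutron reports the plug.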
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.087 256736 DEBUG nova.virt.libvirt.vif [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1898927466',display_name='tempest-TestVolumeBackupRestore-server-1898927466',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1898927466',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD2oFI/eGmO1KgdFdCw086AXtsGkSZs6bW4dbz9aiUPNwQuO25ubQRmkoDg9ydhZ/LChEtq7wDa01nNKHOQTVokzoOLAWiQnh1zMeKdK8LOnzJC9plK7JZmNkhNTjpNqfg==',key_name='tempest-TestVolumeBackupRestore-309075728',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d3648d4d8b045ca9d33086f2d66a86b',ramdisk_id='',reservation_id='r-zak6fese',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1464538626',owner_user_name='tempest-TestVolumeBackupRestore-1464538626-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:57:37Z,user_data=None,user_id='11e11652beb841579a10eab85f0c13f9',uuid=81e82526-de13-4350-a618-49168b2e029c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.087 256736 DEBUG nova.network.os_vif_util [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Converting VIF {"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.088 256736 DEBUG nova.network.os_vif_util [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:99:59,bridge_name='br-int',has_traffic_filtering=True,id=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a,network=Network(22f97d85-f65d-44f6-8f02-46e31590c8a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3142a2d6-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.088 256736 DEBUG os_vif [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:99:59,bridge_name='br-int',has_traffic_filtering=True,id=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a,network=Network(22f97d85-f65d-44f6-8f02-46e31590c8a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3142a2d6-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.089 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.089 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.089 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.094 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.094 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3142a2d6-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.095 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3142a2d6-f8, col_values=(('external_ids', {'iface-id': '3142a2d6-f8a3-4cb2-b1f3-d90d7877515a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:99:59', 'vm-uuid': '81e82526-de13-4350-a618-49168b2e029c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:41 compute-0 NetworkManager[48962]: <info>  [1764403061.0985] manager: (tap3142a2d6-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.101 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.108 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.110 256736 INFO os_vif [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:99:59,bridge_name='br-int',has_traffic_filtering=True,id=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a,network=Network(22f97d85-f65d-44f6-8f02-46e31590c8a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3142a2d6-f8')
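The AddPortCommand/DbSetCommand transaction that os-vif ran above (add tap3142a2d6-f8 to br-int, then stamp the Interface row's external_ids so ovn-controller can bind the port) has a direct ovs-vsctl equivalent. A hedged sketch wrapping that CLI; the bridge, port, and external_ids values come from the log, and running this by hand would merely repeat what os-vif already did:

    # ovs-vsctl equivalent of the ovsdbapp transaction logged above.
    import subprocess

    def plug_ovs_port(bridge, port, iface_id, mac, vm_uuid):
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
             "--", "set", "Interface", port,
             f"external_ids:iface-id={iface_id}",
             "external_ids:iface-status=active",
             f"external_ids:attached-mac={mac}",
             f"external_ids:vm-uuid={vm_uuid}"],
            check=True)

    # plug_ovs_port("br-int", "tap3142a2d6-f8",
    #               "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a",
    #               "fa:16:3e:e4:99:59",
    #               "81e82526-de13-4350-a618-49168b2e029c")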
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.703 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.703 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.704 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] No VIF found with MAC fa:16:3e:e4:99:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.704 256736 INFO nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Using config drive
Nov 29 07:57:41 compute-0 nova_compute[256729]: 2025-11-29 07:57:41.975 256736 DEBUG nova.storage.rbd_utils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] rbd image 81e82526-de13-4350-a618-49168b2e029c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1160431038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:42 compute-0 sshd-session[282338]: Connection closed by authenticating user root 143.14.121.41 port 48144 [preauth]
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.216 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.256 256736 DEBUG nova.network.neutron [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updated VIF entry in instance network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.257 256736 DEBUG nova.network.neutron [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.281 256736 DEBUG oslo_concurrency.lockutils [req-9d6dcf2b-311d-437b-bd39-dc26503c7155 req-de26aba1-b7fd-4006-ba5f-e772907a6a07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.351 256736 INFO nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Creating config drive at /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/disk.config
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.357 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpom4nnhmr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.406 256736 DEBUG oslo_concurrency.lockutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.406 256736 DEBUG oslo_concurrency.lockutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.425 256736 DEBUG nova.objects.instance [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'flavor' on Instance uuid 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.450 256736 INFO nova.virt.libvirt.driver [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Ignoring supplied device name: /dev/vdb
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.467 256736 DEBUG oslo_concurrency.lockutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.485 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpom4nnhmr" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.523 256736 DEBUG nova.storage.rbd_utils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] rbd image 81e82526-de13-4350-a618-49168b2e029c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.529 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/disk.config 81e82526-de13-4350-a618-49168b2e029c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.676 256736 DEBUG oslo_concurrency.lockutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.677 256736 DEBUG oslo_concurrency.lockutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.677 256736 INFO nova.compute.manager [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Attaching volume e11e21e5-216d-4574-8e25-cba67af94fd0 to /dev/vdb
Nov 29 07:57:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 151 KiB/s wr, 40 op/s
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.873 256736 DEBUG oslo_concurrency.processutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/disk.config 81e82526-de13-4350-a618-49168b2e029c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.874 256736 INFO nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Deleting local config drive /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c/disk.config because it was imported into RBD.
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.917 256736 DEBUG os_brick.utils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.919 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.935 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.936 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5f5e6b-4c28-40a0-9077-10121498698e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.938 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:42 compute-0 kernel: tap3142a2d6-f8: entered promiscuous mode
Nov 29 07:57:42 compute-0 ovn_controller[153383]: 2025-11-29T07:57:42Z|00133|binding|INFO|Claiming lport 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a for this chassis.
Nov 29 07:57:42 compute-0 ovn_controller[153383]: 2025-11-29T07:57:42Z|00134|binding|INFO|3142a2d6-f8a3-4cb2-b1f3-d90d7877515a: Claiming fa:16:3e:e4:99:59 10.100.0.8
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.944 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:42 compute-0 NetworkManager[48962]: <info>  [1764403062.9508] manager: (tap3142a2d6-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.971 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:99:59 10.100.0.8'], port_security=['fa:16:3e:e4:99:59 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '81e82526-de13-4350-a618-49168b2e029c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d3648d4d8b045ca9d33086f2d66a86b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33020b1e-7391-4d15-9b02-5112a272566b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=214018cc-7cfb-4351-8051-151143afc580, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.975 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a in datapath 22f97d85-f65d-44f6-8f02-46e31590c8a6 bound to our chassis
Nov 29 07:57:42 compute-0 ovn_controller[153383]: 2025-11-29T07:57:42Z|00135|binding|INFO|Setting lport 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a ovn-installed in OVS
Nov 29 07:57:42 compute-0 ovn_controller[153383]: 2025-11-29T07:57:42Z|00136|binding|INFO|Setting lport 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a up in Southbound
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.978 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 22f97d85-f65d-44f6-8f02-46e31590c8a6
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.979 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.984 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.991 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.992 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[aed22ebe-6162-4da6-af21-69af38b4ae8f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.992 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[25a8e4ff-1dd0-4ab7-9e5e-81b3a8436ca6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.993 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap22f97d85-f1 in ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:57:42 compute-0 nova_compute[256729]: 2025-11-29 07:57:42.994 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:42 compute-0 systemd-machined[217781]: New machine qemu-12-instance-0000000c.
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.996 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap22f97d85-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.996 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d5deb6-ffa1-4b3d-b308-b7f1d2e8b329]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:42 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:42.998 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebeb713-4baa-418b-920d-9ab214f1f629]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.008 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.009 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfb6d23-fbdd-4b9b-8d71-2bd5a27158fc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.010 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[87b230e2-edfd-4513-b33d-f41c2b9ce9ce]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.011 256736 DEBUG oslo_concurrency.processutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.015 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[8417dde0-0625-4a10-9283-7ce46942d3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 29 07:57:43 compute-0 systemd-udevd[282465]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.037 256736 DEBUG oslo_concurrency.processutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:43 compute-0 ceph-mon[75050]: pgmap v1608: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 151 KiB/s wr, 40 op/s
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.040 256736 DEBUG os_brick.initiator.connectors.lightos [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.041 256736 DEBUG os_brick.initiator.connectors.lightos [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.041 256736 DEBUG os_brick.initiator.connectors.lightos [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.041 256736 DEBUG os_brick.utils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] <== get_connector_properties: return (123ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.042 256736 DEBUG nova.virt.block_device [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Updating existing volume attachment record: 4b2b0f3d-ac13-4b09-8447-d667c698dea8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 07:57:43 compute-0 NetworkManager[48962]: <info>  [1764403063.0452] device (tap3142a2d6-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:57:43 compute-0 NetworkManager[48962]: <info>  [1764403063.0473] device (tap3142a2d6-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.046 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1835ab8a-1344-4085-a20f-783448bb998e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.084 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1122aa-27e5-4e7d-b396-d7b64d5e6399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.089 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d5ff5e-8763-497a-8cbe-c9054aa9f306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 systemd-udevd[282467]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:57:43 compute-0 NetworkManager[48962]: <info>  [1764403063.0940] manager: (tap22f97d85-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.125 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[0f502d17-1309-4f3d-a785-ec5a38a4b603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.128 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[74634a89-7661-4392-99fd-189f816f4285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 NetworkManager[48962]: <info>  [1764403063.1526] device (tap22f97d85-f0): carrier: link connected
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.158 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7e12e2f8-2cc8-408e-94fa-dbe1e19af817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.174 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[af103aea-c41f-42db-bd49-2e64b6674b63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22f97d85-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:d7:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537821, 'reachable_time': 22567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282495, 'error': None, 'target': 'ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.189 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[15a332d7-9fb5-4e95-a085-3256f0d223f6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:d7bb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537821, 'tstamp': 537821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282496, 'error': None, 'target': 'ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.209 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[62e1dcd6-4965-449f-a161-4e5012705c8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap22f97d85-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:d7:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537821, 'reachable_time': 22567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282497, 'error': None, 'target': 'ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.238 256736 DEBUG nova.compute.manager [req-16da885e-4cbc-41b0-8523-0577fabd7f7c req-3ef51fdf-292c-4b7f-8c4b-df610669c56b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.239 256736 DEBUG oslo_concurrency.lockutils [req-16da885e-4cbc-41b0-8523-0577fabd7f7c req-3ef51fdf-292c-4b7f-8c4b-df610669c56b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.239 256736 DEBUG oslo_concurrency.lockutils [req-16da885e-4cbc-41b0-8523-0577fabd7f7c req-3ef51fdf-292c-4b7f-8c4b-df610669c56b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.239 256736 DEBUG oslo_concurrency.lockutils [req-16da885e-4cbc-41b0-8523-0577fabd7f7c req-3ef51fdf-292c-4b7f-8c4b-df610669c56b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.240 256736 DEBUG nova.compute.manager [req-16da885e-4cbc-41b0-8523-0577fabd7f7c req-3ef51fdf-292c-4b7f-8c4b-df610669c56b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Processing event network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.239 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[aef38148-3faa-4cdc-b528-b242ecfcfd85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.302 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2be80d55-d935-40ce-a27a-d7149aeb6483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.303 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f97d85-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.304 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.304 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22f97d85-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:43 compute-0 NetworkManager[48962]: <info>  [1764403063.3069] manager: (tap22f97d85-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 07:57:43 compute-0 kernel: tap22f97d85-f0: entered promiscuous mode
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.306 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.309 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap22f97d85-f0, col_values=(('external_ids', {'iface-id': 'b141b25b-cafc-4e40-9859-f2161517d326'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:43 compute-0 ovn_controller[153383]: 2025-11-29T07:57:43Z|00137|binding|INFO|Releasing lport b141b25b-cafc-4e40-9859-f2161517d326 from this chassis (sb_readonly=0)
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.339 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.339 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/22f97d85-f65d-44f6-8f02-46e31590c8a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/22f97d85-f65d-44f6-8f02-46e31590c8a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.340 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4b7c02-ef28-43e2-b320-4a3c894f6ea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.341 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-22f97d85-f65d-44f6-8f02-46e31590c8a6
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/22f97d85-f65d-44f6-8f02-46e31590c8a6.pid.haproxy
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 22f97d85-f65d-44f6-8f02-46e31590c8a6
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:57:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:43.342 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'env', 'PROCESS_TAG=haproxy-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/22f97d85-f65d-44f6-8f02-46e31590c8a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:57:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/306144367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:43 compute-0 podman[282530]: 2025-11-29 07:57:43.772731017 +0000 UTC m=+0.089792428 container create b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:57:43 compute-0 podman[282530]: 2025-11-29 07:57:43.708070211 +0000 UTC m=+0.025131672 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:57:43 compute-0 systemd[1]: Started libpod-conmon-b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34.scope.
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.839 256736 DEBUG nova.objects.instance [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'flavor' on Instance uuid 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d01d680b4df555935c011151b02a3135afbbd56c601210a614903800daf428/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.867 256736 DEBUG nova.virt.libvirt.driver [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Attempting to attach volume e11e21e5-216d-4574-8e25-cba67af94fd0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.870 256736 DEBUG nova.virt.libvirt.guest [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 07:57:43 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:57:43 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-e11e21e5-216d-4574-8e25-cba67af94fd0">
Nov 29 07:57:43 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:43 compute-0 nova_compute[256729]:   </source>
Nov 29 07:57:43 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 07:57:43 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:57:43 compute-0 nova_compute[256729]:   </auth>
Nov 29 07:57:43 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:57:43 compute-0 nova_compute[256729]:   <serial>e11e21e5-216d-4574-8e25-cba67af94fd0</serial>
Nov 29 07:57:43 compute-0 nova_compute[256729]: </disk>
Nov 29 07:57:43 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 07:57:43 compute-0 podman[282530]: 2025-11-29 07:57:43.899763228 +0000 UTC m=+0.216824689 container init b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:57:43 compute-0 podman[282530]: 2025-11-29 07:57:43.913216527 +0000 UTC m=+0.230277948 container start b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.930 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403063.9297943, 81e82526-de13-4350-a618-49168b2e029c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.930 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] VM Started (Lifecycle Event)
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.932 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.935 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.938 256736 INFO nova.virt.libvirt.driver [-] [instance: 81e82526-de13-4350-a618-49168b2e029c] Instance spawned successfully.
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.939 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:57:43 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [NOTICE]   (282601) : New worker (282614) forked
Nov 29 07:57:43 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [NOTICE]   (282601) : Loading success.
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.971 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:43 compute-0 nova_compute[256729]: 2025-11-29 07:57:43.975 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.008 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.009 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403063.9300025, 81e82526-de13-4350-a618-49168b2e029c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.010 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] VM Paused (Lifecycle Event)
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.129 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.130 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.130 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.131 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.131 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.131 256736 DEBUG nova.virt.libvirt.driver [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.172 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.177 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403063.934809, 81e82526-de13-4350-a618-49168b2e029c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.177 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] VM Resumed (Lifecycle Event)
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.182 256736 DEBUG nova.virt.libvirt.driver [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.182 256736 DEBUG nova.virt.libvirt.driver [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.183 256736 DEBUG nova.virt.libvirt.driver [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.183 256736 DEBUG nova.virt.libvirt.driver [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] No VIF found with MAC fa:16:3e:15:64:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.216 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.224 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.230 256736 INFO nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Took 5.90 seconds to spawn the instance on the hypervisor.
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.231 256736 DEBUG nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.244 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.301 256736 INFO nova.compute.manager [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Took 8.15 seconds to build instance.
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.318 256736 DEBUG oslo_concurrency.lockutils [None req-8b2df6ad-b0de-4193-892a-211a7453ba14 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:44 compute-0 nova_compute[256729]: 2025-11-29 07:57:44.458 256736 DEBUG oslo_concurrency.lockutils [None req-3a24e383-9962-4ddb-b389-36235b8c12b2 c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:44 compute-0 ceph-mon[75050]: pgmap v1609: 305 pgs: 305 active+clean; 306 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 151 KiB/s wr, 40 op/s
Nov 29 07:57:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/306144367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 306 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 25 KiB/s wr, 16 op/s
Nov 29 07:57:45 compute-0 nova_compute[256729]: 2025-11-29 07:57:45.348 256736 DEBUG nova.compute.manager [req-c36926b7-c049-404a-8e83-bc9ba0c4c9da req-e9db912b-dd1f-45e2-b23d-a08b47badd17 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:45 compute-0 nova_compute[256729]: 2025-11-29 07:57:45.349 256736 DEBUG oslo_concurrency.lockutils [req-c36926b7-c049-404a-8e83-bc9ba0c4c9da req-e9db912b-dd1f-45e2-b23d-a08b47badd17 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:45 compute-0 nova_compute[256729]: 2025-11-29 07:57:45.349 256736 DEBUG oslo_concurrency.lockutils [req-c36926b7-c049-404a-8e83-bc9ba0c4c9da req-e9db912b-dd1f-45e2-b23d-a08b47badd17 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:45 compute-0 nova_compute[256729]: 2025-11-29 07:57:45.350 256736 DEBUG oslo_concurrency.lockutils [req-c36926b7-c049-404a-8e83-bc9ba0c4c9da req-e9db912b-dd1f-45e2-b23d-a08b47badd17 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:45 compute-0 nova_compute[256729]: 2025-11-29 07:57:45.350 256736 DEBUG nova.compute.manager [req-c36926b7-c049-404a-8e83-bc9ba0c4c9da req-e9db912b-dd1f-45e2-b23d-a08b47badd17 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] No waiting events found dispatching network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:57:45 compute-0 nova_compute[256729]: 2025-11-29 07:57:45.350 256736 WARNING nova.compute.manager [req-c36926b7-c049-404a-8e83-bc9ba0c4c9da req-e9db912b-dd1f-45e2-b23d-a08b47badd17 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received unexpected event network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a for instance with vm_state active and task_state None.
Nov 29 07:57:45 compute-0 sshd-session[282401]: Connection closed by authenticating user root 143.14.121.41 port 56640 [preauth]
Nov 29 07:57:46 compute-0 nova_compute[256729]: 2025-11-29 07:57:46.097 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 29 07:57:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 07:57:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 29 07:57:46 compute-0 ceph-mon[75050]: pgmap v1610: 305 pgs: 305 active+clean; 306 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 25 KiB/s wr, 16 op/s
Nov 29 07:57:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 306 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 16 KiB/s wr, 19 op/s
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.012 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.013 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.029 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.119 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.120 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.129 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.129 256736 INFO nova.compute.claims [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.219 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.297 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:47 compute-0 ceph-mon[75050]: osdmap e251: 3 total, 3 up, 3 in
Nov 29 07:57:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897007713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.747 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.753 256736 DEBUG nova.compute.provider_tree [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.769 256736 DEBUG nova.scheduler.client.report [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.790 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.791 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.835 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.836 256736 DEBUG nova.network.neutron [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.860 256736 INFO nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.882 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.979 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.981 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:57:47 compute-0 nova_compute[256729]: 2025-11-29 07:57:47.981 256736 INFO nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Creating image(s)
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.004 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.035 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.062 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.067 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.122 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.123 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.124 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.124 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.144 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.148 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.320 256736 DEBUG nova.network.neutron [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.321 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.611 256736 DEBUG nova.compute.manager [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.612 256736 DEBUG nova.compute.manager [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing instance network info cache due to event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.612 256736 DEBUG oslo_concurrency.lockutils [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.613 256736 DEBUG oslo_concurrency.lockutils [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:48 compute-0 nova_compute[256729]: 2025-11-29 07:57:48.613 256736 DEBUG nova.network.neutron [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:57:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 306 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 107 op/s
Nov 29 07:57:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 29 07:57:48 compute-0 ceph-mon[75050]: pgmap v1612: 305 pgs: 305 active+clean; 306 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 16 KiB/s wr, 19 op/s
Nov 29 07:57:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/897007713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 07:57:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.143 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.995s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.237 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] resizing rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.633 256736 DEBUG nova.compute.manager [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.634 256736 DEBUG nova.compute.manager [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing instance network info cache due to event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.634 256736 DEBUG oslo_concurrency.lockutils [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.686 256736 DEBUG nova.objects.instance [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lazy-loading 'migration_context' on Instance uuid f8159132-7d73-48fd-baa4-4d6eed2d8b66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.703 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.704 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Ensure instance console log exists: /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.705 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.706 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.706 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.709 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.715 256736 WARNING nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.721 256736 DEBUG nova.virt.libvirt.host [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.722 256736 DEBUG nova.virt.libvirt.host [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.727 256736 DEBUG nova.virt.libvirt.host [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.728 256736 DEBUG nova.virt.libvirt.host [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.728 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.729 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.730 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.731 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.731 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.732 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.732 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.733 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.734 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.734 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.735 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.735 256736 DEBUG nova.virt.hardware [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:57:49 compute-0 nova_compute[256729]: 2025-11-29 07:57:49.740 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:49 compute-0 sshd-session[282623]: Connection closed by authenticating user root 143.14.121.41 port 56644 [preauth]
Nov 29 07:57:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:49 compute-0 ceph-mon[75050]: pgmap v1613: 305 pgs: 305 active+clean; 306 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 107 op/s
Nov 29 07:57:49 compute-0 ceph-mon[75050]: osdmap e252: 3 total, 3 up, 3 in
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.001 256736 DEBUG nova.network.neutron [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updated VIF entry in instance network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.004 256736 DEBUG nova.network.neutron [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.031 256736 DEBUG oslo_concurrency.lockutils [req-f75bd3c5-5c1b-4fa5-916a-a4b506332097 req-a9013d44-52e9-43e0-9e87-936d13384cfc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.032 256736 DEBUG oslo_concurrency.lockutils [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.033 256736 DEBUG nova.network.neutron [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:57:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242779599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.275 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.298 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.301 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:57:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468808484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.742 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.745 256736 DEBUG nova.objects.instance [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lazy-loading 'pci_devices' on Instance uuid f8159132-7d73-48fd-baa4-4d6eed2d8b66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.764 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <uuid>f8159132-7d73-48fd-baa4-4d6eed2d8b66</uuid>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <name>instance-0000000d</name>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <nova:name>tempest-VolumesNegativeTest-instance-52083156</nova:name>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:57:49</nova:creationTime>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <nova:user uuid="85fac9ee02ca4f26a1ee6aae755a0145">tempest-VolumesNegativeTest-863168843-project-member</nova:user>
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <nova:project uuid="566aa0cfc2e84edf93c00d5df18f3c2f">tempest-VolumesNegativeTest-863168843</nova:project>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <nova:ports/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <system>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <entry name="serial">f8159132-7d73-48fd-baa4-4d6eed2d8b66</entry>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <entry name="uuid">f8159132-7d73-48fd-baa4-4d6eed2d8b66</entry>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </system>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <os>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   </os>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <features>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   </features>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk">
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       </source>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk.config">
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       </source>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:57:50 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/console.log" append="off"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <video>
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </video>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:57:50 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:57:50 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:57:50 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:57:50 compute-0 nova_compute[256729]: </domain>
Nov 29 07:57:50 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.831 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.832 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.833 256736 INFO nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Using config drive
Nov 29 07:57:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 306 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 134 op/s
Nov 29 07:57:50 compute-0 nova_compute[256729]: 2025-11-29 07:57:50.864 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 29 07:57:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 07:57:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1242779599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/468808484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:57:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.053 256736 INFO nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Creating config drive at /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/disk.config
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.058 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hdlrn6v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.099 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.215 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hdlrn6v" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.257 256736 DEBUG nova.storage.rbd_utils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] rbd image f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.261 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/disk.config f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.328 256736 DEBUG nova.network.neutron [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updated VIF entry in instance network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.330 256736 DEBUG nova.network.neutron [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.353 256736 DEBUG oslo_concurrency.lockutils [req-52a53550-bc47-4680-922f-73a45df59229 req-eb1ed376-2ab3-416b-96ad-e0aeab4b05f8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
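The network_info blob cached in the records above is plain JSON. A short sketch of walking an entry of that shape to pair each fixed IP with its floating IPs; the dict literal is trimmed from the logged cache update, not fetched from any API:

```python
# Sketch: extract fixed/floating addresses from a Nova network_info
# entry. Only the fields needed here are reproduced from the log.
vif = {
    "id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a",
    "address": "fa:16:3e:e4:99:59",
    "network": {
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{
                "address": "10.100.0.8",
                "type": "fixed",
                "floating_ips": [{"address": "192.168.122.231", "type": "floating"}],
            }],
        }],
        "meta": {"mtu": 1442},
    },
}

for subnet in vif["network"]["subnets"]:
    for ip in subnet["ips"]:
        fips = [f["address"] for f in ip.get("floating_ips", [])]
        print(ip["address"], "->", fips)   # 10.100.0.8 -> ['192.168.122.231']
```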
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.457 256736 DEBUG oslo_concurrency.processutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/disk.config f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.459 256736 INFO nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Deleting local config drive /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/disk.config because it was imported into RBD.
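The records above show the complete config-drive round trip: mkisofs builds the ISO, rbd import copies it into the vms pool, and the local file is removed. A minimal subprocess sketch of the same three steps, with paths and the cephx ID taken from the log and some mkisofs options (-publisher, -quiet) trimmed:

```python
# Sketch of the config-drive flow logged above: build an ISO9660 image,
# import it into RBD as a format-2 image, then drop the local copy.
import os
import subprocess

iso = "/var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66/disk.config"
src = "/tmp/tmp5hdlrn6v"   # metadata staging dir prepared by Nova

# 1. Build the ISO (volume label config-2 is what cloud-init looks for).
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", src],
    check=True,
)

# 2. Import into the Ceph 'vms' pool under the instance's image name.
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso,
     "f8159132-7d73-48fd-baa4-4d6eed2d8b66_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)

# 3. The local file is redundant once it lives in RBD.
os.remove(iso)
```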
Nov 29 07:57:51 compute-0 systemd-machined[217781]: New machine qemu-13-instance-0000000d.
Nov 29 07:57:51 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.732 256736 DEBUG nova.compute.manager [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.733 256736 DEBUG nova.compute.manager [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing instance network info cache due to event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.734 256736 DEBUG oslo_concurrency.lockutils [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.734 256736 DEBUG oslo_concurrency.lockutils [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:51 compute-0 nova_compute[256729]: 2025-11-29 07:57:51.735 256736 DEBUG nova.network.neutron [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:57:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 29 07:57:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 07:57:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 29 07:57:51 compute-0 ceph-mon[75050]: pgmap v1615: 305 pgs: 305 active+clean; 306 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 134 op/s
Nov 29 07:57:51 compute-0 ceph-mon[75050]: osdmap e253: 3 total, 3 up, 3 in
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.138 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403072.1375327, f8159132-7d73-48fd-baa4-4d6eed2d8b66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.138 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] VM Resumed (Lifecycle Event)
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.140 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.140 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.144 256736 INFO nova.virt.libvirt.driver [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Instance spawned successfully.
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.144 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.180 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.180 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.181 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.181 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.181 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.182 256736 DEBUG nova.virt.libvirt.driver [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.185 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.193 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.220 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.226 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.226 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403072.1376364, f8159132-7d73-48fd-baa4-4d6eed2d8b66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.227 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] VM Started (Lifecycle Event)
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.246 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.250 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.254 256736 INFO nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Took 4.27 seconds to spawn the instance on the hypervisor.
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.254 256736 DEBUG nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.277 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] During sync_power_state the instance has a pending task (spawning). Skip.
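The sync_power_state records above show the guard applied when lifecycle events race an in-flight task: with task_state still set (spawning), Nova skips the sync rather than overwriting state mid-operation. A simplified model of that decision, not the actual ComputeManager code:

```python
# Simplified model of the sync_power_state guard seen above: a pending
# task wins over the lifecycle event, so the DB power_state
# (0 = NOSTATE) is left alone until the task finishes.
NOSTATE, RUNNING = 0, 1

def sync_power_state(task_state, db_power_state, vm_power_state):
    if task_state is not None:
        return f"pending task ({task_state}); skip"
    if db_power_state != vm_power_state:
        return f"update DB power_state {db_power_state} -> {vm_power_state}"
    return "in sync"

print(sync_power_state("spawning", NOSTATE, RUNNING))  # pending task (spawning); skip
```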
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.316 256736 INFO nova.compute.manager [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Took 5.23 seconds to build instance.
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.346 256736 DEBUG oslo_concurrency.lockutils [None req-556b63f4-5a39-40f1-b994-3ca0afb8f2fb 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
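The acquire/release pairs on the instance UUID above are oslo.concurrency locks serializing build and terminate per instance. A sketch of the same pattern; external=False (an in-process semaphore rather than a file lock) is an assumption about this deployment:

```python
# Sketch: serializing operations on one instance UUID with
# oslo.concurrency, mirroring the acquire/release pairs logged above.
from oslo_concurrency import lockutils

uuid = "f8159132-7d73-48fd-baa4-4d6eed2d8b66"

@lockutils.synchronized(uuid, external=False)
def do_build_and_run_instance():
    pass   # build work; the log shows this lock held for 5.334s

do_build_and_run_instance()
```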
Nov 29 07:57:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 323 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 198 op/s
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.958 256736 DEBUG nova.network.neutron [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updated VIF entry in instance network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.958 256736 DEBUG nova.network.neutron [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:52 compute-0 nova_compute[256729]: 2025-11-29 07:57:52.974 256736 DEBUG oslo_concurrency.lockutils [req-3ce62034-451b-455f-b7cd-7ea7af98aa65 req-aa83b515-cc6e-4d94-9718-9be240c79d5c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:57:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 29 07:57:52 compute-0 ceph-mon[75050]: osdmap e254: 3 total, 3 up, 3 in
Nov 29 07:57:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 07:57:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.538 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.540 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.540 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.541 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.541 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.543 256736 INFO nova.compute.manager [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Terminating instance
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.545 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "refresh_cache-f8159132-7d73-48fd-baa4-4d6eed2d8b66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.545 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquired lock "refresh_cache-f8159132-7d73-48fd-baa4-4d6eed2d8b66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:53 compute-0 nova_compute[256729]: 2025-11-29 07:57:53.546 256736 DEBUG nova.network.neutron [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:57:53 compute-0 sshd-session[282833]: Connection closed by authenticating user root 143.14.121.41 port 56652 [preauth]
Nov 29 07:57:53 compute-0 ceph-mon[75050]: pgmap v1618: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 323 MiB data, 427 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 198 op/s
Nov 29 07:57:53 compute-0 ceph-mon[75050]: osdmap e255: 3 total, 3 up, 3 in
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.232 256736 DEBUG nova.network.neutron [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.448 256736 DEBUG oslo_concurrency.lockutils [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.449 256736 DEBUG oslo_concurrency.lockutils [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.463 256736 INFO nova.compute.manager [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Detaching volume e11e21e5-216d-4574-8e25-cba67af94fd0
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.529 256736 DEBUG nova.network.neutron [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.549 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Releasing lock "refresh_cache-f8159132-7d73-48fd-baa4-4d6eed2d8b66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.550 256736 DEBUG nova.compute.manager [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:57:54 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 29 07:57:54 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 3.050s CPU time.
Nov 29 07:57:54 compute-0 systemd-machined[217781]: Machine qemu-13-instance-0000000d terminated.
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.648 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.671 256736 INFO nova.virt.block_device [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Attempting to driver detach volume e11e21e5-216d-4574-8e25-cba67af94fd0 from mountpoint /dev/vdb
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.685 256736 DEBUG nova.virt.libvirt.driver [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Attempting to detach device vdb from instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.686 256736 DEBUG nova.virt.libvirt.guest [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-e11e21e5-216d-4574-8e25-cba67af94fd0">
Nov 29 07:57:54 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   </source>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <serial>e11e21e5-216d-4574-8e25-cba67af94fd0</serial>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]: </disk>
Nov 29 07:57:54 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.700 256736 INFO nova.virt.libvirt.driver [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully detached device vdb from instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 from the persistent domain config.
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.701 256736 DEBUG nova.virt.libvirt.driver [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.702 256736 DEBUG nova.virt.libvirt.guest [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-e11e21e5-216d-4574-8e25-cba67af94fd0">
Nov 29 07:57:54 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   </source>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <serial>e11e21e5-216d-4574-8e25-cba67af94fd0</serial>
Nov 29 07:57:54 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 07:57:54 compute-0 nova_compute[256729]: </disk>
Nov 29 07:57:54 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.775 256736 INFO nova.virt.libvirt.driver [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Instance destroyed successfully.
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.776 256736 DEBUG nova.objects.instance [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lazy-loading 'resources' on Instance uuid f8159132-7d73-48fd-baa4-4d6eed2d8b66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.821 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403074.8208256, 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.822 256736 DEBUG nova.virt.libvirt.driver [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 07:57:54 compute-0 nova_compute[256729]: 2025-11-29 07:57:54.825 256736 INFO nova.virt.libvirt.driver [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully detached device vdb from instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 from the live domain config.
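The detach above runs in two phases: first the persistent definition, then the live domain, followed by a wait for libvirt's device-removed event. A libvirt-python sketch of the two detachDeviceFlags calls using the logged <disk> XML; the domain name is an assumption, and the event wait is only noted in a comment:

```python
# Sketch of the two-phase detach logged above. Nova additionally waits
# for libvirt's DEVICE_REMOVED event for alias virtio-disk1 before
# declaring the live detach complete; that wait is omitted here.
import libvirt

DISK_XML = """<disk type="network" device="disk">
  <source protocol="rbd" name="volumes/volume-e11e21e5-216d-4574-8e25-cba67af94fd0">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000000b")  # assumed name for 147c2de5-...
dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent config
dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # live domain
conn.close()
```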
Nov 29 07:57:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:57:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 352 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.058 256736 DEBUG nova.objects.instance [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'flavor' on Instance uuid 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.121 256736 DEBUG oslo_concurrency.lockutils [None req-b0b63307-404d-4d99-a9ae-a1dedc2ddc6e c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.123 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.123 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.124 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.124 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.127 256736 INFO nova.compute.manager [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Terminating instance
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.129 256736 DEBUG nova.compute.manager [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.178 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.180 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.181 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.181 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.182 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
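The resource audit shells out to ceph df for cluster capacity. A sketch that runs the same probe and reads the cluster totals; the stats field names are an assumption that may vary across Ceph releases:

```python
# Sketch: the capacity probe the resource tracker runs above, parsed
# for cluster totals. total_bytes/total_avail_bytes under "stats" is
# what current Ceph releases emit.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
stats = json.loads(out)["stats"]
gib = 1024 ** 3
print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free "
      f"of {stats['total_bytes'] / gib:.1f} GiB")   # ~60 GiB total in this cluster
```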
Nov 29 07:57:55 compute-0 kernel: tapbc8c91e4-3e (unregistering): left promiscuous mode
Nov 29 07:57:55 compute-0 NetworkManager[48962]: <info>  [1764403075.2108] device (tapbc8c91e4-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.226 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:55 compute-0 ovn_controller[153383]: 2025-11-29T07:57:55Z|00138|binding|INFO|Releasing lport bc8c91e4-3e52-4696-8921-d8013cfb7b7c from this chassis (sb_readonly=0)
Nov 29 07:57:55 compute-0 ovn_controller[153383]: 2025-11-29T07:57:55Z|00139|binding|INFO|Setting lport bc8c91e4-3e52-4696-8921-d8013cfb7b7c down in Southbound
Nov 29 07:57:55 compute-0 ovn_controller[153383]: 2025-11-29T07:57:55Z|00140|binding|INFO|Removing iface tapbc8c91e4-3e ovn-installed in OVS
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.237 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:64:59 10.100.0.3'], port_security=['fa:16:3e:15:64:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '147c2de5-0104-4eb0-bc20-b3bdc3909ed9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aede5de4449e445582aa074918be39c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2cb872f-de4b-4850-8126-1e4dfb0f16a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6a691ab-2be1-4362-9a9a-3c54aabcf5a5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=bc8c91e4-3e52-4696-8921-d8013cfb7b7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.239 163655 INFO neutron.agent.ovn.metadata.agent [-] Port bc8c91e4-3e52-4696-8921-d8013cfb7b7c in datapath 5908d283-c1b3-46ec-8e8e-b81d59c13f9a unbound from our chassis
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.240 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5908d283-c1b3-46ec-8e8e-b81d59c13f9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.245 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e9683470-bb2b-41ae-8383-b223b16bac2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.246 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a namespace which is not needed anymore
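The namespace being cleaned up is named ovnmeta-<network UUID>. A sketch that checks for it with ip(8); after the teardown above the name should no longer be listed:

```python
# Sketch: verify the metadata namespace teardown logged above. The
# agent names namespaces "ovnmeta-" + the neutron network UUID.
import subprocess

network_id = "5908d283-c1b3-46ec-8e8e-b81d59c13f9a"
ns = subprocess.run(["ip", "netns", "list"],
                    check=True, capture_output=True, text=True).stdout
print(f"ovnmeta-{network_id}" in ns)   # False once the cleanup finishes
```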
Nov 29 07:57:55 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 29 07:57:55 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 15.399s CPU time.
Nov 29 07:57:55 compute-0 systemd-machined[217781]: Machine qemu-11-instance-0000000b terminated.
Nov 29 07:57:55 compute-0 podman[283022]: 2025-11-29 07:57:55.355881189 +0000 UTC m=+0.117370034 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 07:57:55 compute-0 podman[283023]: 2025-11-29 07:57:55.356268119 +0000 UTC m=+0.101261253 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.371 256736 INFO nova.virt.libvirt.driver [-] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Instance destroyed successfully.
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.375 256736 DEBUG nova.objects.instance [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lazy-loading 'resources' on Instance uuid 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:55 compute-0 podman[283020]: 2025-11-29 07:57:55.39075825 +0000 UTC m=+0.155030959 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
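The health_status=healthy fields in these podman records come from the containers' configured healthcheck commands, fired periodically by a timer. The same probe can be run on demand (a sketch; exit status 0 means healthy, 1 unhealthy):

```python
# Sketch: run the container healthchecks seen above by hand with
# "podman healthcheck run"; the return code carries the verdict.
import subprocess

for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else "unhealthy")
```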
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.396 256736 DEBUG nova.virt.libvirt.vif [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:56:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-599798234',display_name='tempest-VolumesSnapshotTestJSON-instance-599798234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-599798234',id=11,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBObDc+NbrQmtYcY6EwSmvzU0R3Gi/UQJqyfQZjkI4/toFRRTIoIgfCy8x3M1DrT2i/Xfl3y4TiKeD8LDdjTp6tKwDxJPyEMTV5d+3JcYVoid++iXEGL2INbaZ4J9doILLQ==',key_name='tempest-keypair-576206054',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:57:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aede5de4449e445582aa074918be39c9',ramdisk_id='',reservation_id='r-kcxuajyb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-1121052015',owner_user_name='tempest-VolumesSnapshotTestJSON-1121052015-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:57:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c0b3479158714faaa4e8c3c336457d6d',uuid=147c2de5-0104-4eb0-bc20-b3bdc3909ed9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.396 256736 DEBUG nova.network.os_vif_util [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converting VIF {"id": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "address": "fa:16:3e:15:64:59", "network": {"id": "5908d283-c1b3-46ec-8e8e-b81d59c13f9a", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-774029930-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aede5de4449e445582aa074918be39c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8c91e4-3e", "ovs_interfaceid": "bc8c91e4-3e52-4696-8921-d8013cfb7b7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.397 256736 DEBUG nova.network.os_vif_util [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:64:59,bridge_name='br-int',has_traffic_filtering=True,id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8c91e4-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.398 256736 DEBUG os_vif [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:64:59,bridge_name='br-int',has_traffic_filtering=True,id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8c91e4-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.399 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.400 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc8c91e4-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.401 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.402 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.404 256736 INFO os_vif [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:64:59,bridge_name='br-int',has_traffic_filtering=True,id=bc8c91e4-3e52-4696-8921-d8013cfb7b7c,network=Network(5908d283-c1b3-46ec-8e8e-b81d59c13f9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8c91e4-3e')
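The DelPortCommand transaction logged a few lines up is an ovsdbapp call issued on os-vif's behalf. A minimal standalone sketch of the same operation; the socket path and timeout are assumptions, not taken from this log, and os-vif wires the connection up internally:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server endpoint
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True keeps the delete idempotent, matching the logged command
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapbc8c91e4-3e', bridge='br-int', if_exists=True))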
Nov 29 07:57:55 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[282065]: [NOTICE]   (282070) : haproxy version is 2.8.14-c23fe91
Nov 29 07:57:55 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[282065]: [NOTICE]   (282070) : path to executable is /usr/sbin/haproxy
Nov 29 07:57:55 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[282065]: [WARNING]  (282070) : Exiting Master process...
Nov 29 07:57:55 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[282065]: [ALERT]    (282070) : Current worker (282072) exited with code 143 (Terminated)
Nov 29 07:57:55 compute-0 neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a[282065]: [WARNING]  (282070) : All workers exited. Exiting... (0)
Nov 29 07:57:55 compute-0 systemd[1]: libpod-fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7.scope: Deactivated successfully.
Nov 29 07:57:55 compute-0 podman[283122]: 2025-11-29 07:57:55.417871054 +0000 UTC m=+0.063084476 container died fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.451 256736 DEBUG nova.compute.manager [req-fcabca6c-c0bd-444a-9608-d975ef8db704 req-c8fa2297-93cb-4b48-8433-265c13612bb0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-vif-unplugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.451 256736 DEBUG oslo_concurrency.lockutils [req-fcabca6c-c0bd-444a-9608-d975ef8db704 req-c8fa2297-93cb-4b48-8433-265c13612bb0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.452 256736 DEBUG oslo_concurrency.lockutils [req-fcabca6c-c0bd-444a-9608-d975ef8db704 req-c8fa2297-93cb-4b48-8433-265c13612bb0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.452 256736 DEBUG oslo_concurrency.lockutils [req-fcabca6c-c0bd-444a-9608-d975ef8db704 req-c8fa2297-93cb-4b48-8433-265c13612bb0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
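The Acquiring/acquired/released triplets above come from oslo.concurrency's decorator-based locking. A hedged sketch of the pattern nova uses (the function body is illustrative; the lock name is the one in the log):

    from oslo_concurrency import lockutils

    # nova builds its synchronized decorator with a 'nova-' prefix
    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events')
    def _pop_event():
        ...  # critical section; waited/held durations are logged on entry and exit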
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.452 256736 DEBUG nova.compute.manager [req-fcabca6c-c0bd-444a-9608-d975ef8db704 req-c8fa2297-93cb-4b48-8433-265c13612bb0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] No waiting events found dispatching network-vif-unplugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.452 256736 DEBUG nova.compute.manager [req-fcabca6c-c0bd-444a-9608-d975ef8db704 req-c8fa2297-93cb-4b48-8433-265c13612bb0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-vif-unplugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7-userdata-shm.mount: Deactivated successfully.
Nov 29 07:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-315ba7488003b5b0338ca7747cbb0222584bd5a6f2f673d36d67fb5dc439ec90-merged.mount: Deactivated successfully.
Nov 29 07:57:55 compute-0 podman[283122]: 2025-11-29 07:57:55.528407374 +0000 UTC m=+0.173620816 container cleanup fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 07:57:55 compute-0 systemd[1]: libpod-conmon-fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7.scope: Deactivated successfully.
Nov 29 07:57:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251607169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.666 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:57:55 compute-0 podman[283183]: 2025-11-29 07:57:55.697736915 +0000 UTC m=+0.142309870 container remove fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.709 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[44c916b8-233a-407f-bac7-32e73794a2a5]: (4, ('Sat Nov 29 07:57:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a (fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7)\nfd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7\nSat Nov 29 07:57:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a (fd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7)\nfd82ccbb76e0a59c2684b3a4d0172d553923b7dd98a3aabb126fbd665f6e69d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
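The privsep reply above carries the output of neutron's teardown script stopping and deleting the per-network haproxy container. The same effect, sketched directly against podman (container name taken from the log; the stop explains the worker's exit code 143, i.e. SIGTERM):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM; worker exits 143
    subprocess.run(['podman', 'rm', name], check=True)    # produces the 'container remove' event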
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.712 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ee49d80b-1650-4a46-81f4-4b7d2c6e1acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.714 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5908d283-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.717 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:55 compute-0 kernel: tap5908d283-c0: left promiscuous mode
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.743 256736 INFO nova.virt.libvirt.driver [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Deleting instance files /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66_del
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.745 256736 INFO nova.virt.libvirt.driver [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Deletion of /var/lib/nova/instances/f8159132-7d73-48fd-baa4-4d6eed2d8b66_del complete
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.752 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1d33d5cc-708c-487a-8e1c-a839ca24c881]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.753 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.763 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5c9b3f-f57f-494b-b295-c5201026bde5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.764 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[92e4913c-b127-4292-a7a3-db75b507a8c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.779 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a6775ee3-123a-4dcc-b49f-6048081d8132]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 533463, 'reachable_time': 42958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283201, 'error': None, 'target': 'ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
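That RTM_NEWLINK payload is a netlink link dump taken inside the ovnmeta- namespace just before it is removed (only 'lo' remains). A hedged pyroute2 sketch of the same query; neutron actually drives this through oslo.privsep rather than calling pyroute2 directly:

    from pyroute2 import NetNS

    # flags=0: open the existing namespace only, do not create it
    with NetNS('ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a', flags=0) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])  # e.g. lo up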
Nov 29 07:57:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d5908d283\x2dc1b3\x2d46ec\x2d8e8e\x2db81d59c13f9a.mount: Deactivated successfully.
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.783 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5908d283-c1b3-46ec-8e8e-b81d59c13f9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:57:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:55.783 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef0a85c-2cbd-4845-9228-20f74bf9bda5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.787 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.788 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.791 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.791 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.794 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.794 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.809 256736 INFO nova.compute.manager [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Took 1.26 seconds to destroy the instance on the hypervisor.
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.809 256736 DEBUG oslo.service.loopingcall [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.809 256736 DEBUG nova.compute.manager [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.810 256736 DEBUG nova.network.neutron [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.910 256736 DEBUG nova.network.neutron [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.925 256736 DEBUG nova.network.neutron [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.944 256736 INFO nova.compute.manager [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Took 0.13 seconds to deallocate network for instance.
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.980 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.981 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4266MB free_disk=59.92164993286133GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.982 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.982 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:55 compute-0 nova_compute[256729]: 2025-11-29 07:57:55.993 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.060 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.061 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 81e82526-de13-4350-a618-49168b2e029c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.061 256736 WARNING nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance f8159132-7d73-48fd-baa4-4d6eed2d8b66 is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.062 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.062 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.174 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:57:56 compute-0 ceph-mon[75050]: pgmap v1620: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 352 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Nov 29 07:57:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/251607169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 318 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.6 MiB/s wr, 316 op/s
Nov 29 07:57:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:57:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634529880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.935 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
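Nova shells out to the exact ceph command above to size the RBD pool. A small sketch parsing its JSON; the key names follow the reef-era "ceph df" schema, an assumption here rather than something shown in this log:

    import json
    import subprocess

    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])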
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.945 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.968 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
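From the inventory data above, the capacity placement schedules against is (total - reserved) * allocation_ratio per resource class:

    # Values copied from the inventory logged above
    inventory = {
        'VCPU': (8, 0, 4.0),
        'MEMORY_MB': (7680, 512, 1.0),
        'DISK_GB': (59, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2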
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.996 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.997 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:56 compute-0 nova_compute[256729]: 2025-11-29 07:57:56.998 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.005 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.040 256736 INFO nova.scheduler.client.report [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Deleted allocations for instance f8159132-7d73-48fd-baa4-4d6eed2d8b66
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.125 256736 DEBUG oslo_concurrency.lockutils [None req-72bb7f45-b984-45ed-a38a-446e258ddb97 85fac9ee02ca4f26a1ee6aae755a0145 566aa0cfc2e84edf93c00d5df18f3c2f - - default default] Lock "f8159132-7d73-48fd-baa4-4d6eed2d8b66" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.223 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:57:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3634529880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.546 256736 DEBUG nova.compute.manager [req-c93f7577-1ba3-48ce-a28d-01922f1c6c73 req-ccc41601-14d7-47bb-ac85-6dcdde24b8fb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.547 256736 DEBUG oslo_concurrency.lockutils [req-c93f7577-1ba3-48ce-a28d-01922f1c6c73 req-ccc41601-14d7-47bb-ac85-6dcdde24b8fb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.548 256736 DEBUG oslo_concurrency.lockutils [req-c93f7577-1ba3-48ce-a28d-01922f1c6c73 req-ccc41601-14d7-47bb-ac85-6dcdde24b8fb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.548 256736 DEBUG oslo_concurrency.lockutils [req-c93f7577-1ba3-48ce-a28d-01922f1c6c73 req-ccc41601-14d7-47bb-ac85-6dcdde24b8fb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.549 256736 DEBUG nova.compute.manager [req-c93f7577-1ba3-48ce-a28d-01922f1c6c73 req-ccc41601-14d7-47bb-ac85-6dcdde24b8fb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] No waiting events found dispatching network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:57:57 compute-0 nova_compute[256729]: 2025-11-29 07:57:57.549 256736 WARNING nova.compute.manager [req-c93f7577-1ba3-48ce-a28d-01922f1c6c73 req-ccc41601-14d7-47bb-ac85-6dcdde24b8fb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received unexpected event network-vif-plugged-bc8c91e4-3e52-4696-8921-d8013cfb7b7c for instance with vm_state active and task_state deleting.
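The network-vif-plugged/unplugged lines above are delivered by neutron through nova's os-server-external-events API; the warning just means the plugged event landed after the instance had already moved to task_state deleting. A hedged sketch of that call, with endpoint and token as placeholders, not values from this log:

    import requests

    events = [{'name': 'network-vif-plugged',
               'server_uuid': '147c2de5-0104-4eb0-bc20-b3bdc3909ed9',
               'tag': 'bc8c91e4-3e52-4696-8921-d8013cfb7b7c'}]
    requests.post('http://nova-api:8774/v2.1/os-server-external-events',  # assumed endpoint
                  json={'events': events},
                  headers={'X-Auth-Token': '<token>'})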
Nov 29 07:57:57 compute-0 sshd-session[282993]: Connection closed by authenticating user root 143.14.121.41 port 43458 [preauth]
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.000 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.001 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:58 compute-0 ovn_controller[153383]: 2025-11-29T07:57:58Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:99:59 10.100.0.8
Nov 29 07:57:58 compute-0 ovn_controller[153383]: 2025-11-29T07:57:58Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:99:59 10.100.0.8
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.173 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.434 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.435 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.435 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 07:57:58 compute-0 nova_compute[256729]: 2025-11-29 07:57:58.435 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 81e82526-de13-4350-a618-49168b2e029c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:57:58 compute-0 ceph-mon[75050]: pgmap v1621: 305 pgs: 305 active+clean; 318 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.6 MiB/s wr, 316 op/s
Nov 29 07:57:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 261 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 360 op/s
Nov 29 07:57:59 compute-0 nova_compute[256729]: 2025-11-29 07:57:59.233 256736 INFO nova.virt.libvirt.driver [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Deleting instance files /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9_del
Nov 29 07:57:59 compute-0 nova_compute[256729]: 2025-11-29 07:57:59.233 256736 INFO nova.virt.libvirt.driver [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Deletion of /var/lib/nova/instances/147c2de5-0104-4eb0-bc20-b3bdc3909ed9_del complete
Nov 29 07:57:59 compute-0 nova_compute[256729]: 2025-11-29 07:57:59.511 256736 INFO nova.compute.manager [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Took 4.38 seconds to destroy the instance on the hypervisor.
Nov 29 07:57:59 compute-0 nova_compute[256729]: 2025-11-29 07:57:59.512 256736 DEBUG oslo.service.loopingcall [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:57:59 compute-0 nova_compute[256729]: 2025-11-29 07:57:59.513 256736 DEBUG nova.compute.manager [-] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:57:59 compute-0 nova_compute[256729]: 2025-11-29 07:57:59.513 256736 DEBUG nova.network.neutron [-] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:57:59 compute-0 sudo[283227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:59 compute-0 sudo[283227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:59 compute-0 sudo[283227]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:59.776 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:57:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:59.777 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:57:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:57:59.777 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:57:59 compute-0 sudo[283252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:57:59 compute-0 sudo[283252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:59 compute-0 sudo[283252]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:59 compute-0 sudo[283277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:57:59 compute-0 sudo[283277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:57:59 compute-0 sudo[283277]: pam_unix(sudo:session): session closed for user root
Nov 29 07:57:59 compute-0 sudo[283302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:57:59 compute-0 sudo[283302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 07:58:00 compute-0 ceph-mon[75050]: pgmap v1622: 305 pgs: 305 active+clean; 261 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 360 op/s
Nov 29 07:58:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 29 07:58:00 compute-0 nova_compute[256729]: 2025-11-29 07:58:00.402 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:00 compute-0 sudo[283302]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:58:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:58:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:58:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:58:00 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3e920c04-a3b2-4a7b-acef-56073df2fce4 does not exist
Nov 29 07:58:00 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a26d2aed-30bf-4929-9220-5cc3f47aff36 does not exist
Nov 29 07:58:00 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 87f68b29-a371-4a39-b4a9-0617279270c3 does not exist
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:58:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:58:00 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:58:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:58:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:58:00 compute-0 sudo[283358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:00 compute-0 sudo[283358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:00 compute-0 sudo[283358]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:00 compute-0 sudo[283383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:00 compute-0 sudo[283383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:00 compute-0 sudo[283383]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:00 compute-0 sudo[283408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:00 compute-0 sudo[283408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:00 compute-0 sudo[283408]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:00 compute-0 sudo[283433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:58:00 compute-0 sudo[283433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 261 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.7 MiB/s wr, 316 op/s
Nov 29 07:58:01 compute-0 nova_compute[256729]: 2025-11-29 07:58:01.305 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:01 compute-0 sshd-session[283225]: Connection closed by authenticating user root 143.14.121.41 port 43462 [preauth]
Nov 29 07:58:01 compute-0 podman[283499]: 2025-11-29 07:58:01.248862432 +0000 UTC m=+0.039020893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:01 compute-0 nova_compute[256729]: 2025-11-29 07:58:01.400 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:58:01 compute-0 nova_compute[256729]: 2025-11-29 07:58:01.400 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 07:58:01 compute-0 nova_compute[256729]: 2025-11-29 07:58:01.402 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:01 compute-0 nova_compute[256729]: 2025-11-29 07:58:01.402 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:58:01 compute-0 anacron[30860]: Job `cron.monthly' started
Nov 29 07:58:01 compute-0 anacron[30860]: Job `cron.monthly' terminated
Nov 29 07:58:01 compute-0 anacron[30860]: Normal exit (3 jobs run)
Nov 29 07:58:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 29 07:58:01 compute-0 ceph-mon[75050]: osdmap e256: 3 total, 3 up, 3 in
Nov 29 07:58:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:58:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:58:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:58:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:58:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:58:01 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:58:01 compute-0 podman[283499]: 2025-11-29 07:58:01.926197564 +0000 UTC m=+0.716355985 container create 6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.042 256736 DEBUG nova.network.neutron [-] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.064 256736 INFO nova.compute.manager [-] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Took 2.55 seconds to deallocate network for instance.
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.137 256736 DEBUG nova.compute.manager [req-68113f65-5b6d-4ecf-8f50-5454061c88b0 req-fda67f5f-7680-48c2-a04c-480b71a11e4c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Received event network-vif-deleted-bc8c91e4-3e52-4696-8921-d8013cfb7b7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:02 compute-0 systemd[1]: Started libpod-conmon-6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9.scope.
Nov 29 07:58:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.226 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.265 256736 WARNING nova.volume.cinder [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Attachment 4b2b0f3d-ac13-4b09-8447-d667c698dea8 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 4b2b0f3d-ac13-4b09-8447-d667c698dea8. (HTTP 404) (Request-ID: req-59e72ce9-9d9d-4020-94ea-ee7ceb9a6bf0)
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.266 256736 INFO nova.compute.manager [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.325 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.325 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.397 256736 DEBUG oslo_concurrency.processutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:02 compute-0 podman[283499]: 2025-11-29 07:58:02.524913927 +0000 UTC m=+1.315072388 container init 6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 07:58:02 compute-0 podman[283499]: 2025-11-29 07:58:02.533473855 +0000 UTC m=+1.323632276 container start 6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:58:02 compute-0 heuristic_germain[283518]: 167 167
Nov 29 07:58:02 compute-0 systemd[1]: libpod-6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9.scope: Deactivated successfully.
Nov 29 07:58:02 compute-0 conmon[283518]: conmon 6289e88064706b51ba10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9.scope/container/memory.events
Nov 29 07:58:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 07:58:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 29 07:58:02 compute-0 podman[283499]: 2025-11-29 07:58:02.647729245 +0000 UTC m=+1.437887656 container attach 6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:02 compute-0 podman[283499]: 2025-11-29 07:58:02.64904596 +0000 UTC m=+1.439204401 container died 6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 07:58:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 262 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 242 op/s
Nov 29 07:58:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890385754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:02 compute-0 ceph-mon[75050]: pgmap v1624: 305 pgs: 305 active+clean; 261 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.7 MiB/s wr, 316 op/s
Nov 29 07:58:02 compute-0 ceph-mon[75050]: osdmap e257: 3 total, 3 up, 3 in
Nov 29 07:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b028af658d18e2d02fc94ce99b58b38984756adb8f55bc2c03c6f367953b2b-merged.mount: Deactivated successfully.
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.919 256736 DEBUG oslo_concurrency.processutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
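The two processutils DEBUG lines bracket a shelled-out "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" that returned 0 in 0.522s; the mon's handle_command/dispatch audit lines for client.openstack above are the server side of the same call. A rough stand-alone equivalent, assuming the same conf and a readable client.openstack keyring (JSON keys can differ across Ceph releases):

    # Re-run the capacity probe traced above; the command and its
    # flags are copied verbatim from the log line.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])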
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.927 256736 DEBUG nova.compute.provider_tree [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.947 256736 DEBUG nova.scheduler.client.report [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:58:02 compute-0 nova_compute[256729]: 2025-11-29 07:58:02.968 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
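The Acquiring/acquired/released triplet around the resource-tracker update comes from oslo.concurrency's lockutils serializing on the named lock "compute_resources" (held 0.643s here, most of it the ceph df call). A minimal sketch of the same pattern; the lock name is from the log, update_usage() is a hypothetical stand-in:

    # Sketch of the lockutils pattern traced above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # Body runs only while holding the named lock, emitting the
        # same acquired/released DEBUG lines when debug logging is on.
        ...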
Nov 29 07:58:03 compute-0 nova_compute[256729]: 2025-11-29 07:58:03.002 256736 INFO nova.scheduler.client.report [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Deleted allocations for instance 147c2de5-0104-4eb0-bc20-b3bdc3909ed9
Nov 29 07:58:03 compute-0 podman[283499]: 2025-11-29 07:58:03.023517187 +0000 UTC m=+1.813675578 container remove 6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:03 compute-0 systemd[1]: libpod-conmon-6289e88064706b51ba10308ad3cb6711f349fe8cd1b40def0dd65ebc6748d0f9.scope: Deactivated successfully.
Nov 29 07:58:03 compute-0 nova_compute[256729]: 2025-11-29 07:58:03.085 256736 DEBUG oslo_concurrency.lockutils [None req-870a58ce-5207-4641-b2ce-fdd18dc3ba4c c0b3479158714faaa4e8c3c336457d6d aede5de4449e445582aa074918be39c9 - - default default] Lock "147c2de5-0104-4eb0-bc20-b3bdc3909ed9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:03 compute-0 podman[283564]: 2025-11-29 07:58:03.236624936 +0000 UTC m=+0.045868965 container create a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:03 compute-0 systemd[1]: Started libpod-conmon-a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab.scope.
Nov 29 07:58:03 compute-0 podman[283564]: 2025-11-29 07:58:03.215322467 +0000 UTC m=+0.024566476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30d17a98ae6345e0c982870d30d5eeff4a8f195b4fa3969e87940e7c5d13123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30d17a98ae6345e0c982870d30d5eeff4a8f195b4fa3969e87940e7c5d13123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30d17a98ae6345e0c982870d30d5eeff4a8f195b4fa3969e87940e7c5d13123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30d17a98ae6345e0c982870d30d5eeff4a8f195b4fa3969e87940e7c5d13123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a30d17a98ae6345e0c982870d30d5eeff4a8f195b4fa3969e87940e7c5d13123/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:03 compute-0 podman[283564]: 2025-11-29 07:58:03.339880732 +0000 UTC m=+0.149124751 container init a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:58:03 compute-0 podman[283564]: 2025-11-29 07:58:03.347835355 +0000 UTC m=+0.157079354 container start a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:58:03 compute-0 podman[283564]: 2025-11-29 07:58:03.351283317 +0000 UTC m=+0.160527336 container attach a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:58:03 compute-0 ceph-mon[75050]: pgmap v1626: 305 pgs: 305 active+clean; 262 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 242 op/s
Nov 29 07:58:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3890385754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:04 compute-0 nice_mclean[283580]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:58:04 compute-0 nice_mclean[283580]: --> relative data size: 1.0
Nov 29 07:58:04 compute-0 nice_mclean[283580]: --> All data devices are unavailable
Nov 29 07:58:04 compute-0 systemd[1]: libpod-a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab.scope: Deactivated successfully.
Nov 29 07:58:04 compute-0 systemd[1]: libpod-a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab.scope: Consumed 1.161s CPU time.
Nov 29 07:58:04 compute-0 podman[283564]: 2025-11-29 07:58:04.554775853 +0000 UTC m=+1.364019842 container died a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:58:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a30d17a98ae6345e0c982870d30d5eeff4a8f195b4fa3969e87940e7c5d13123-merged.mount: Deactivated successfully.
Nov 29 07:58:04 compute-0 podman[283564]: 2025-11-29 07:58:04.615862244 +0000 UTC m=+1.425106233 container remove a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mclean, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:58:04 compute-0 systemd[1]: libpod-conmon-a6f2fec86f721e122fd4d47589da2aa05226cc4ef843acdd6ece2acd165cccab.scope: Deactivated successfully.
Nov 29 07:58:04 compute-0 sudo[283433]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:04 compute-0 sudo[283620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:04 compute-0 sudo[283620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:04 compute-0 sudo[283620]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:04 compute-0 sudo[283645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:04 compute-0 sudo[283645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:04 compute-0 sudo[283645]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 176 op/s
Nov 29 07:58:04 compute-0 sudo[283670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:04 compute-0 sudo[283670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:04 compute-0 sudo[283670]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:04 compute-0 sudo[283695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:58:04 compute-0 sudo[283695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
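The sudo line above shows the orchestrator re-invoking its stored copy of cephadm to run ceph-volume inside a Ceph container (the short-lived nice_mclean/eager_hermann podman lifecycles in this window are those helper containers). Assuming a cephadm binary on PATH instead of the copied script, the same inventory call would be roughly:

    # Hand-run equivalent of the logged cephadm call; the image,
    # timeout, and fsid are copied from the log, only the entry
    # point differs.
    import subprocess

    subprocess.run(
        ["cephadm",
         "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "--timeout", "895",
         "ceph-volume", "--fsid", "14ff1f30-5059-58f1-9a23-69871bb275a1",
         "--", "lvm", "list", "--format", "json"],
        check=True,
    )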
Nov 29 07:58:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 29 07:58:05 compute-0 sshd-session[283515]: Connection closed by authenticating user root 143.14.121.41 port 43476 [preauth]
Nov 29 07:58:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.404 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:05 compute-0 podman[283762]: 2025-11-29 07:58:05.321559773 +0000 UTC m=+0.027754312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.417 256736 DEBUG nova.compute.manager [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.417 256736 DEBUG nova.compute.manager [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing instance network info cache due to event network-changed-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.418 256736 DEBUG oslo_concurrency.lockutils [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.418 256736 DEBUG oslo_concurrency.lockutils [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.418 256736 DEBUG nova.network.neutron [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Refreshing network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:58:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.474 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:05 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:05.474 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:58:05 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:05.476 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.601 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.602 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.603 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.604 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.604 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.606 256736 INFO nova.compute.manager [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Terminating instance
Nov 29 07:58:05 compute-0 nova_compute[256729]: 2025-11-29 07:58:05.608 256736 DEBUG nova.compute.manager [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:58:05
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups']
Nov 29 07:58:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 07:58:05 compute-0 podman[283762]: 2025-11-29 07:58:05.846180587 +0000 UTC m=+0.552375066 container create b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 07:58:06 compute-0 systemd[1]: Started libpod-conmon-b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b.scope.
Nov 29 07:58:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:06 compute-0 ceph-mon[75050]: pgmap v1627: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 176 op/s
Nov 29 07:58:06 compute-0 ceph-mon[75050]: osdmap e258: 3 total, 3 up, 3 in
Nov 29 07:58:06 compute-0 podman[283762]: 2025-11-29 07:58:06.348444626 +0000 UTC m=+1.054639095 container init b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:58:06 compute-0 podman[283762]: 2025-11-29 07:58:06.357151598 +0000 UTC m=+1.063346047 container start b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 07:58:06 compute-0 kernel: tap3142a2d6-f8 (unregistering): left promiscuous mode
Nov 29 07:58:06 compute-0 eager_hermann[283778]: 167 167
Nov 29 07:58:06 compute-0 systemd[1]: libpod-b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b.scope: Deactivated successfully.
Nov 29 07:58:06 compute-0 NetworkManager[48962]: <info>  [1764403086.3653] device (tap3142a2d6-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 07:58:06 compute-0 ovn_controller[153383]: 2025-11-29T07:58:06Z|00141|binding|INFO|Releasing lport 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a from this chassis (sb_readonly=0)
Nov 29 07:58:06 compute-0 ovn_controller[153383]: 2025-11-29T07:58:06Z|00142|binding|INFO|Setting lport 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a down in Southbound
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.376 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 podman[283762]: 2025-11-29 07:58:06.379651509 +0000 UTC m=+1.085845988 container attach b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 07:58:06 compute-0 podman[283762]: 2025-11-29 07:58:06.380899282 +0000 UTC m=+1.087093751 container died b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 07:58:06 compute-0 ovn_controller[153383]: 2025-11-29T07:58:06Z|00143|binding|INFO|Removing iface tap3142a2d6-f8 ovn-installed in OVS
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.382 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.391 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:99:59 10.100.0.8'], port_security=['fa:16:3e:e4:99:59 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '81e82526-de13-4350-a618-49168b2e029c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d3648d4d8b045ca9d33086f2d66a86b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33020b1e-7391-4d15-9b02-5112a272566b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=214018cc-7cfb-4351-8051-151143afc580, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.392 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a in datapath 22f97d85-f65d-44f6-8f02-46e31590c8a6 unbound from our chassis
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.394 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 22f97d85-f65d-44f6-8f02-46e31590c8a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.395 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[eb560520-139e-45e7-beb2-2ee014dc93cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.397 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6 namespace which is not needed anymore
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.409 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5df542c14992366dffec20349d9b248ffe520f1466f43a6d581973d58066b5-merged.mount: Deactivated successfully.
Nov 29 07:58:06 compute-0 podman[283762]: 2025-11-29 07:58:06.429163361 +0000 UTC m=+1.135357810 container remove b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hermann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:58:06 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 29 07:58:06 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 13.820s CPU time.
Nov 29 07:58:06 compute-0 systemd-machined[217781]: Machine qemu-12-instance-0000000c terminated.
Nov 29 07:58:06 compute-0 systemd[1]: libpod-conmon-b0335201f5f6a2d623990738205716c51f36479b3fe4f47ce300e42fed45e50b.scope: Deactivated successfully.
Nov 29 07:58:06 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [NOTICE]   (282601) : haproxy version is 2.8.14-c23fe91
Nov 29 07:58:06 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [NOTICE]   (282601) : path to executable is /usr/sbin/haproxy
Nov 29 07:58:06 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [WARNING]  (282601) : Exiting Master process...
Nov 29 07:58:06 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [WARNING]  (282601) : Exiting Master process...
Nov 29 07:58:06 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [ALERT]    (282601) : Current worker (282614) exited with code 143 (Terminated)
Nov 29 07:58:06 compute-0 neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6[282585]: [WARNING]  (282601) : All workers exited. Exiting... (0)
Nov 29 07:58:06 compute-0 systemd[1]: libpod-b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34.scope: Deactivated successfully.
Nov 29 07:58:06 compute-0 podman[283822]: 2025-11-29 07:58:06.547268683 +0000 UTC m=+0.055348038 container died b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 07:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34-userdata-shm.mount: Deactivated successfully.
Nov 29 07:58:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-25d01d680b4df555935c011151b02a3135afbbd56c601210a614903800daf428-merged.mount: Deactivated successfully.
Nov 29 07:58:06 compute-0 podman[283822]: 2025-11-29 07:58:06.587335843 +0000 UTC m=+0.095415208 container cleanup b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 07:58:06 compute-0 systemd[1]: libpod-conmon-b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34.scope: Deactivated successfully.
Nov 29 07:58:06 compute-0 podman[283848]: 2025-11-29 07:58:06.645210228 +0000 UTC m=+0.069272120 container create 8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hodgkin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.649 256736 INFO nova.virt.libvirt.driver [-] [instance: 81e82526-de13-4350-a618-49168b2e029c] Instance destroyed successfully.
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.650 256736 DEBUG nova.objects.instance [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lazy-loading 'resources' on Instance uuid 81e82526-de13-4350-a618-49168b2e029c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.663 256736 DEBUG nova.virt.libvirt.vif [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:57:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1898927466',display_name='tempest-TestVolumeBackupRestore-server-1898927466',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1898927466',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD2oFI/eGmO1KgdFdCw086AXtsGkSZs6bW4dbz9aiUPNwQuO25ubQRmkoDg9ydhZ/LChEtq7wDa01nNKHOQTVokzoOLAWiQnh1zMeKdK8LOnzJC9plK7JZmNkhNTjpNqfg==',key_name='tempest-TestVolumeBackupRestore-309075728',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:57:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9d3648d4d8b045ca9d33086f2d66a86b',ramdisk_id='',reservation_id='r-zak6fese',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1464538626',owner_user_name='tempest-TestVolumeBackupRestore-1464538626-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:57:44Z,user_data=None,user_id='11e11652beb841579a10eab85f0c13f9',uuid=81e82526-de13-4350-a618-49168b2e029c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.664 256736 DEBUG nova.network.os_vif_util [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Converting VIF {"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.664 256736 DEBUG nova.network.os_vif_util [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:99:59,bridge_name='br-int',has_traffic_filtering=True,id=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a,network=Network(22f97d85-f65d-44f6-8f02-46e31590c8a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3142a2d6-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.665 256736 DEBUG os_vif [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:99:59,bridge_name='br-int',has_traffic_filtering=True,id=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a,network=Network(22f97d85-f65d-44f6-8f02-46e31590c8a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3142a2d6-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.666 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.667 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3142a2d6-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.668 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.669 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.671 256736 INFO os_vif [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:99:59,bridge_name='br-int',has_traffic_filtering=True,id=3142a2d6-f8a3-4cb2-b1f3-d90d7877515a,network=Network(22f97d85-f65d-44f6-8f02-46e31590c8a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3142a2d6-f8')
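os_vif unplugs the tap by submitting a single OVSDB transaction, the DelPortCommand(port=tap3142a2d6-f8, bridge=br-int, if_exists=True) shown above. The CLI equivalent, with port and bridge names copied from the log (--if-exists gives the same idempotent behavior):

    # ovs-vsctl equivalent of the DelPortCommand transaction above.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap3142a2d6-f8"],
        check=True,
    )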
Nov 29 07:58:06 compute-0 podman[283865]: 2025-11-29 07:58:06.683543531 +0000 UTC m=+0.060946548 container remove b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.689 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[deccd69a-6586-4fc8-bdd3-9dea1aa1f29b]: (4, ('Sat Nov 29 07:58:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6 (b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34)\nb86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34\nSat Nov 29 07:58:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6 (b86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34)\nb86316b58ae7109943f17c4b548e415e62bfb03a39fd5526c8ae571dd3956c34\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 systemd[1]: Started libpod-conmon-8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082.scope.
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.691 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1a782d88-a717-49cf-a2ba-e57396554ca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.692 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f97d85-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:06 compute-0 kernel: tap22f97d85-f0: left promiscuous mode
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.695 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.710 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.712 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[07a71204-4cca-49fe-9bb4-a996ba8a2883]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 podman[283848]: 2025-11-29 07:58:06.622862831 +0000 UTC m=+0.046924763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068e72db81ee5e541bb89ac97a95906557f9037c4ffcabb2491881c1365a0953/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068e72db81ee5e541bb89ac97a95906557f9037c4ffcabb2491881c1365a0953/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068e72db81ee5e541bb89ac97a95906557f9037c4ffcabb2491881c1365a0953/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068e72db81ee5e541bb89ac97a95906557f9037c4ffcabb2491881c1365a0953/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.734 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[53d950c2-cc57-4274-8603-b9938f12e043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.735 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[95c4fbf2-e550-4269-95ea-e6173abdff7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 podman[283848]: 2025-11-29 07:58:06.739139185 +0000 UTC m=+0.163201087 container init 8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 07:58:06 compute-0 podman[283848]: 2025-11-29 07:58:06.747408696 +0000 UTC m=+0.171470598 container start 8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:58:06 compute-0 podman[283848]: 2025-11-29 07:58:06.750460617 +0000 UTC m=+0.174522519 container attach 8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hodgkin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.753 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e78f27a2-bd0f-4458-b024-1ae18f1d503c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537814, 'reachable_time': 32876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283917, 'error': None, 'target': 'ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.755 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 07:58:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:06.755 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef4d49f-11fd-4031-9c74-feb35611d2be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:58:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d22f97d85\x2df65d\x2d44f6\x2d8f02\x2d46e31590c8a6.mount: Deactivated successfully.
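[annotation] Before removing the ovnmeta namespace, the agent's privsep daemon returns a full RTM_NEWLINK dump for the loopback device inside it (the long reply above, note 'target': 'ovnmeta-...'), then neutron's privileged ip_lib deletes the namespace itself. Roughly the same pair of privileged operations with pyroute2, the library that neutron's privileged ip_lib code drives; this is a sketch with error handling omitted, and the namespace name is taken from the log:

# Dump links inside the metadata namespace, then delete the namespace.
from pyroute2 import NetNS, netns

ns_name = 'ovnmeta-22f97d85-f65d-44f6-8f02-46e31590c8a6'

with NetNS(ns_name) as ns:
    for link in ns.get_links():     # yields RTM_NEWLINK messages like the reply above
        print(link.get_attr('IFLA_IFNAME'), link['state'])

netns.remove(ns_name)               # counterpart of remove_netns() in ip_lib.py:607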
Nov 29 07:58:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 220 KiB/s rd, 173 KiB/s wr, 71 op/s
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.910 256736 DEBUG nova.network.neutron [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updated VIF entry in instance network info cache for port 3142a2d6-f8a3-4cb2-b1f3-d90d7877515a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.910 256736 DEBUG nova.network.neutron [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [{"id": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "address": "fa:16:3e:e4:99:59", "network": {"id": "22f97d85-f65d-44f6-8f02-46e31590c8a6", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-2112645378-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d3648d4d8b045ca9d33086f2d66a86b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3142a2d6-f8", "ovs_interfaceid": "3142a2d6-f8a3-4cb2-b1f3-d90d7877515a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.928 256736 DEBUG oslo_concurrency.lockutils [req-58b999ae-c154-4643-abb2-479fa7340d02 req-7c8406f4-113d-44f1-8e20-9acc788e9dd3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-81e82526-de13-4350-a618-49168b2e029c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
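[annotation] The refresh_cache-<uuid> lock acquired and released around the network info update is a plain oslo.concurrency named lock: every code path that mutates this instance's network cache serializes on the same name. The pattern, reduced to its core (lock name copied from the log, body elided):

# The lockutils pattern behind the "Acquiring/Releasing lock" lines above.
from oslo_concurrency import lockutils

instance_uuid = '81e82526-de13-4350-a618-49168b2e029c'

with lockutils.lock('refresh_cache-%s' % instance_uuid):
    # critical section: read-modify-write of the instance network info cache
    pass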
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.955 256736 INFO nova.virt.libvirt.driver [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Deleting instance files /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c_del
Nov 29 07:58:06 compute-0 nova_compute[256729]: 2025-11-29 07:58:06.955 256736 INFO nova.virt.libvirt.driver [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Deletion of /var/lib/nova/instances/81e82526-de13-4350-a618-49168b2e029c_del complete
Nov 29 07:58:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:58:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:58:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:58:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:58:06 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:58:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:58:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:58:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:58:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:58:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.008 256736 INFO nova.compute.manager [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Took 1.40 seconds to destroy the instance on the hypervisor.
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.008 256736 DEBUG oslo.service.loopingcall [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.009 256736 DEBUG nova.compute.manager [-] [instance: 81e82526-de13-4350-a618-49168b2e029c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.009 256736 DEBUG nova.network.neutron [-] [instance: 81e82526-de13-4350-a618-49168b2e029c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.228 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.532 256736 DEBUG nova.compute.manager [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-vif-unplugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.533 256736 DEBUG oslo_concurrency.lockutils [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]: {
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.533 256736 DEBUG oslo_concurrency.lockutils [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:     "0": [
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:         {
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "devices": [
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "/dev/loop3"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             ],
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_name": "ceph_lv0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_size": "21470642176",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "name": "ceph_lv0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "tags": {
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cluster_name": "ceph",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.crush_device_class": "",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.encrypted": "0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osd_id": "0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.type": "block",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.vdo": "0"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             },
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "type": "block",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "vg_name": "ceph_vg0"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:         }
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:     ],
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:     "1": [
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:         {
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "devices": [
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "/dev/loop4"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             ],
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_name": "ceph_lv1",
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.534 256736 DEBUG oslo_concurrency.lockutils [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_size": "21470642176",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "name": "ceph_lv1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "tags": {
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cluster_name": "ceph",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.crush_device_class": "",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.encrypted": "0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osd_id": "1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.type": "block",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.vdo": "0"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             },
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "type": "block",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "vg_name": "ceph_vg1"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:         }
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:     ],
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:     "2": [
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:         {
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "devices": [
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "/dev/loop5"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             ],
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_name": "ceph_lv2",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_size": "21470642176",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "name": "ceph_lv2",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "tags": {
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.cluster_name": "ceph",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.crush_device_class": "",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.encrypted": "0",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osd_id": "2",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.type": "block",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:                 "ceph.vdo": "0"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             },
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "type": "block",
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:             "vg_name": "ceph_vg2"
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:         }
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]:     ]
Nov 29 07:58:07 compute-0 stoic_hodgkin[283908]: }
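[annotation] The JSON printed by the stoic_hodgkin container has the shape of "ceph-volume lvm list --format json" output: a map of OSD id to the logical volumes backing it, with the same metadata duplicated in lv_tags as comma-separated key=value pairs and in the "tags" dict. A small sketch of consuming it; 'output' is a hypothetical variable holding the JSON block above:

import json

# output = <the JSON block printed by the container above>
inventory = json.loads(output)

for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        # lv_tags packs the same metadata as the "tags" dict, as k=v pairs
        tags = dict(kv.split('=', 1) for kv in lv['lv_tags'].split(','))
        print(osd_id, lv['lv_path'], tags['ceph.osd_fsid'], lv['devices'])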
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.534 256736 DEBUG nova.compute.manager [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] No waiting events found dispatching network-vif-unplugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.535 256736 DEBUG nova.compute.manager [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-vif-unplugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.535 256736 DEBUG nova.compute.manager [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.536 256736 DEBUG oslo_concurrency.lockutils [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "81e82526-de13-4350-a618-49168b2e029c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.536 256736 DEBUG oslo_concurrency.lockutils [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.537 256736 DEBUG oslo_concurrency.lockutils [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.537 256736 DEBUG nova.compute.manager [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] No waiting events found dispatching network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.537 256736 WARNING nova.compute.manager [req-86bf1460-a830-4f86-8aeb-44ed7b3d6d06 req-3f9c7ce0-296b-4cb1-a34b-3eb49ea6c03b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received unexpected event network-vif-plugged-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a for instance with vm_state active and task_state deleting.
Nov 29 07:58:07 compute-0 systemd[1]: libpod-8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082.scope: Deactivated successfully.
Nov 29 07:58:07 compute-0 podman[283924]: 2025-11-29 07:58:07.635208636 +0000 UTC m=+0.036492345 container died 8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.643 256736 DEBUG nova.network.neutron [-] [instance: 81e82526-de13-4350-a618-49168b2e029c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:58:07 compute-0 nova_compute[256729]: 2025-11-29 07:58:07.664 256736 INFO nova.compute.manager [-] [instance: 81e82526-de13-4350-a618-49168b2e029c] Took 0.65 seconds to deallocate network for instance.
Nov 29 07:58:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 07:58:08 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 29 07:58:08 compute-0 sshd-session[283819]: Connection closed by authenticating user root 143.14.121.41 port 59630 [preauth]
Nov 29 07:58:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4278900494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4278900494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
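[annotation] The audit lines show client.openstack dispatching "df" and "osd pool get-quota" as structured mon commands. The ceph CLI seen later in this log is one way to send them; librados is the direct path. A hedged sketch of the latter, assuming the conf and keyring for client.openstack are readable (paths are assumptions):

# Sending the same mon commands seen in the audit log via librados.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      name='client.openstack')   # entity from the audit lines
cluster.connect()

ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "df", "format": "json"}), b'')
df = json.loads(outbuf)

ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "osd pool get-quota",
                "pool": "volumes", "format": "json"}), b'')
cluster.shutdown()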
Nov 29 07:58:08 compute-0 nova_compute[256729]: 2025-11-29 07:58:08.757 256736 INFO nova.compute.manager [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Took 1.09 seconds to detach 1 volumes for instance.
Nov 29 07:58:08 compute-0 ceph-mon[75050]: pgmap v1629: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 220 KiB/s rd, 173 KiB/s wr, 71 op/s
Nov 29 07:58:08 compute-0 nova_compute[256729]: 2025-11-29 07:58:08.814 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:08 compute-0 nova_compute[256729]: 2025-11-29 07:58:08.815 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 133 KiB/s wr, 84 op/s
Nov 29 07:58:08 compute-0 nova_compute[256729]: 2025-11-29 07:58:08.875 256736 DEBUG oslo_concurrency.processutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-068e72db81ee5e541bb89ac97a95906557f9037c4ffcabb2491881c1365a0953-merged.mount: Deactivated successfully.
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.614 256736 DEBUG nova.compute.manager [req-7eced017-acba-4963-be85-b311f52e976d req-88c3f4cb-1962-43d8-949c-0ff3ca6ef52f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 81e82526-de13-4350-a618-49168b2e029c] Received event network-vif-deleted-3142a2d6-f8a3-4cb2-b1f3-d90d7877515a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.773 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403074.7716763, f8159132-7d73-48fd-baa4-4d6eed2d8b66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.773 256736 INFO nova.compute.manager [-] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] VM Stopped (Lifecycle Event)
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.796 256736 DEBUG nova.compute.manager [None req-d2efd500-33ef-455b-a7aa-ff95e2e4a854 - - - - - -] [instance: f8159132-7d73-48fd-baa4-4d6eed2d8b66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:58:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 29 07:58:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527563046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.903 256736 DEBUG oslo_concurrency.processutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
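[annotation] nova_compute shells out for pool usage rather than holding a librados handle; the "Running cmd (subprocess)" and "returned: 0 in 1.027s" lines bracket that subprocess. The oslo.concurrency call it goes through looks like this, with the arguments copied from the log:

# The processutils call behind 'Running cmd (subprocess): ceph df ...' above.
from oslo_concurrency import processutils

stdout, stderr = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
# execute() raises ProcessExecutionError on a non-zero exit status,
# which is why the log only ever reports 'returned: 0'.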
Nov 29 07:58:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 07:58:09 compute-0 podman[283924]: 2025-11-29 07:58:09.90882014 +0000 UTC m=+2.310103849 container remove 8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hodgkin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.909 256736 DEBUG nova.compute.provider_tree [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:58:09 compute-0 systemd[1]: libpod-conmon-8f57c55315e0d9b53e65f9f7c1cfcdad0a88e0f5fdb26fac50a2920ee4df5082.scope: Deactivated successfully.
Nov 29 07:58:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.925 256736 DEBUG nova.scheduler.client.report [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
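[annotation] The inventory dict reported to placement above fixes each resource class's schedulable capacity as (total - reserved) * allocation_ratio: 32 VCPUs, 7168 MB of RAM and 52.2 GB of disk for this host. Worked through in code:

# Effective capacity placement derives from the inventory logged above:
# usable = (total - reserved) * allocation_ratio
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, usable)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2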
Nov 29 07:58:09 compute-0 ceph-mon[75050]: osdmap e259: 3 total, 3 up, 3 in
Nov 29 07:58:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4278900494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4278900494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:09 compute-0 ceph-mon[75050]: pgmap v1631: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 133 KiB/s wr, 84 op/s
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.959 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:09 compute-0 sudo[283695]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:09 compute-0 nova_compute[256729]: 2025-11-29 07:58:09.992 256736 INFO nova.scheduler.client.report [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Deleted allocations for instance 81e82526-de13-4350-a618-49168b2e029c
Nov 29 07:58:10 compute-0 sudo[283963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:10 compute-0 sudo[283963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:10 compute-0 sudo[283963]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:10 compute-0 nova_compute[256729]: 2025-11-29 07:58:10.074 256736 DEBUG oslo_concurrency.lockutils [None req-f4c12678-afbe-4205-b1b0-b2675947693c 11e11652beb841579a10eab85f0c13f9 9d3648d4d8b045ca9d33086f2d66a86b - - default default] Lock "81e82526-de13-4350-a618-49168b2e029c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:10 compute-0 sudo[283988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:58:10 compute-0 sudo[283988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:10 compute-0 sudo[283988]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:10 compute-0 sudo[284013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:10 compute-0 sudo[284013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:10 compute-0 sudo[284013]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:10 compute-0 sudo[284038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:58:10 compute-0 sudo[284038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:10 compute-0 nova_compute[256729]: 2025-11-29 07:58:10.367 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403075.3653967, 147c2de5-0104-4eb0-bc20-b3bdc3909ed9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:58:10 compute-0 nova_compute[256729]: 2025-11-29 07:58:10.367 256736 INFO nova.compute.manager [-] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] VM Stopped (Lifecycle Event)
Nov 29 07:58:10 compute-0 nova_compute[256729]: 2025-11-29 07:58:10.388 256736 DEBUG nova.compute.manager [None req-06680971-1dc1-42fa-a329-73ca54d2b225 - - - - - -] [instance: 147c2de5-0104-4eb0-bc20-b3bdc3909ed9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:58:10 compute-0 podman[284103]: 2025-11-29 07:58:10.610779609 +0000 UTC m=+0.033426384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:10 compute-0 podman[284103]: 2025-11-29 07:58:10.714044126 +0000 UTC m=+0.136690911 container create 6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 07:58:10 compute-0 systemd[1]: Started libpod-conmon-6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2.scope.
Nov 29 07:58:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 28 KiB/s wr, 32 op/s
Nov 29 07:58:10 compute-0 podman[284103]: 2025-11-29 07:58:10.887825555 +0000 UTC m=+0.310472310 container init 6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 07:58:10 compute-0 podman[284103]: 2025-11-29 07:58:10.896536857 +0000 UTC m=+0.319183602 container start 6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 07:58:10 compute-0 flamboyant_jang[284118]: 167 167
Nov 29 07:58:10 compute-0 systemd[1]: libpod-6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2.scope: Deactivated successfully.
Nov 29 07:58:10 compute-0 podman[284103]: 2025-11-29 07:58:10.977373425 +0000 UTC m=+0.400020260 container attach 6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 07:58:10 compute-0 podman[284103]: 2025-11-29 07:58:10.979294927 +0000 UTC m=+0.401941712 container died 6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:58:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/527563046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:11 compute-0 ceph-mon[75050]: osdmap e260: 3 total, 3 up, 3 in
Nov 29 07:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad5b28a2a3fe0a7f9ca172a68f5bbf611c9b31b19a47fe948fe8876f0a67ae9-merged.mount: Deactivated successfully.
Nov 29 07:58:11 compute-0 podman[284103]: 2025-11-29 07:58:11.184485724 +0000 UTC m=+0.607132509 container remove 6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 07:58:11 compute-0 systemd[1]: libpod-conmon-6642d6aa3ef1fc7a9ba91742633b3a1260d1a5340c019953fa82b73eda8f00e2.scope: Deactivated successfully.
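[annotation] flamboyant_jang and elegant_chandrasekhar follow the one-shot pattern of the cephadm ceph-volume call at 07:58:10: image pull by digest, container create/init/start/attach, die, remove, all within a couple of seconds. That is the lifecycle "podman run --rm" produces. Driven from Python it can be sketched as below; the flags and entrypoint are illustrative assumptions, and cephadm's real wrapper passes many additional bind mounts:

# One-shot ceph-volume query in the style of the container lifecycles above.
import json
import subprocess

image = ('quay.io/ceph/ceph@sha256:'
         '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

proc = subprocess.run(
    ['podman', 'run', '--rm', '--privileged', '--net=host',
     '--entrypoint', 'ceph-volume', image,
     'raw', 'list', '--format', 'json'],
    capture_output=True, text=True, check=True)
osds = json.loads(proc.stdout)   # keyed by osd_uuid, as in the output below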
Nov 29 07:58:11 compute-0 podman[284143]: 2025-11-29 07:58:11.360584375 +0000 UTC m=+0.024059053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:58:11 compute-0 podman[284143]: 2025-11-29 07:58:11.467563061 +0000 UTC m=+0.131037739 container create a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 07:58:11 compute-0 systemd[1]: Started libpod-conmon-a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669.scope.
Nov 29 07:58:11 compute-0 nova_compute[256729]: 2025-11-29 07:58:11.670 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd8148a838673d14070979fe2653656be95c1dbb5c6910af9cc53a313498e4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd8148a838673d14070979fe2653656be95c1dbb5c6910af9cc53a313498e4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd8148a838673d14070979fe2653656be95c1dbb5c6910af9cc53a313498e4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd8148a838673d14070979fe2653656be95c1dbb5c6910af9cc53a313498e4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:58:11 compute-0 podman[284143]: 2025-11-29 07:58:11.740052075 +0000 UTC m=+0.403526743 container init a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chandrasekhar, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:58:11 compute-0 podman[284143]: 2025-11-29 07:58:11.750457593 +0000 UTC m=+0.413932241 container start a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 07:58:11 compute-0 podman[284143]: 2025-11-29 07:58:11.793352778 +0000 UTC m=+0.456827406 container attach a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:58:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 29 07:58:12 compute-0 ceph-mon[75050]: pgmap v1633: 305 pgs: 305 active+clean; 269 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 28 KiB/s wr, 32 op/s
Nov 29 07:58:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 07:58:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 29 07:58:12 compute-0 nova_compute[256729]: 2025-11-29 07:58:12.230 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:12 compute-0 sshd-session[283939]: Connection closed by authenticating user root 143.14.121.41 port 59648 [preauth]
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]: {
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "osd_id": 2,
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "type": "bluestore"
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:     },
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "osd_id": 1,
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "type": "bluestore"
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:     },
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "osd_id": 0,
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:         "type": "bluestore"
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]:     }
Nov 29 07:58:12 compute-0 elegant_chandrasekhar[284159]: }
Nov 29 07:58:12 compute-0 systemd[1]: libpod-a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669.scope: Deactivated successfully.
Nov 29 07:58:12 compute-0 systemd[1]: libpod-a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669.scope: Consumed 1.113s CPU time.
Nov 29 07:58:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 269 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 28 KiB/s wr, 81 op/s
Nov 29 07:58:12 compute-0 podman[284143]: 2025-11-29 07:58:12.865821507 +0000 UTC m=+1.529296245 container died a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chandrasekhar, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 07:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fd8148a838673d14070979fe2653656be95c1dbb5c6910af9cc53a313498e4d-merged.mount: Deactivated successfully.
Nov 29 07:58:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 29 07:58:13 compute-0 ceph-mon[75050]: osdmap e261: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-0 podman[284143]: 2025-11-29 07:58:13.193678209 +0000 UTC m=+1.857152877 container remove a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 07:58:13 compute-0 systemd[1]: libpod-conmon-a321999c443f1e84850f56a550cdd90662d25a4bdb6b4d8c5a10510c7f958669.scope: Deactivated successfully.
Nov 29 07:58:13 compute-0 sudo[284038]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:58:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 29 07:58:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:58:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:58:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:58:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev d3a76758-ee18-4061-ac70-1b458386187c does not exist
Nov 29 07:58:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev fbc706c3-2ec2-4915-aace-5b17f91a28ce does not exist
Nov 29 07:58:13 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:13.479 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:58:13 compute-0 sudo[284209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:58:13 compute-0 sudo[284209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:13 compute-0 sudo[284209]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:13 compute-0 sudo[284234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:58:13 compute-0 sudo[284234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:58:13 compute-0 sudo[284234]: pam_unix(sudo:session): session closed for user root
Nov 29 07:58:14 compute-0 ceph-mon[75050]: pgmap v1635: 305 pgs: 305 active+clean; 269 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 28 KiB/s wr, 81 op/s
Nov 29 07:58:14 compute-0 ceph-mon[75050]: osdmap e262: 3 total, 3 up, 3 in
Nov 29 07:58:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:58:14 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:58:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1240110583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1240110583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 263 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 5.0 KiB/s wr, 103 op/s
Nov 29 07:58:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1582940178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1582940178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3028574345' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3028574345' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014516486781272384 of space, bias 1.0, pg target 0.43549460343817153 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003460606319593671 of space, bias 1.0, pg target 0.10381818958781013 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660490737123136 of space, bias 1.0, pg target 0.19981472211369408 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:58:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 07:58:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1240110583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1240110583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1582940178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1582940178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3028574345' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3028574345' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:15 compute-0 sshd-session[284187]: Connection closed by authenticating user root 143.14.121.41 port 59188 [preauth]
Nov 29 07:58:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 29 07:58:16 compute-0 nova_compute[256729]: 2025-11-29 07:58:16.673 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 167 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 13 KiB/s wr, 240 op/s
Nov 29 07:58:17 compute-0 ceph-mon[75050]: pgmap v1637: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 263 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 5.0 KiB/s wr, 103 op/s
Nov 29 07:58:17 compute-0 nova_compute[256729]: 2025-11-29 07:58:17.234 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 07:58:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 29 07:58:18 compute-0 ceph-mon[75050]: pgmap v1638: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 167 MiB data, 362 MiB used, 60 GiB / 60 GiB avail; 176 KiB/s rd, 13 KiB/s wr, 240 op/s
Nov 29 07:58:18 compute-0 ceph-mon[75050]: osdmap e263: 3 total, 3 up, 3 in
Nov 29 07:58:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 12 KiB/s wr, 234 op/s
Nov 29 07:58:18 compute-0 sshd-session[284259]: Connection closed by authenticating user root 143.14.121.41 port 59204 [preauth]
Nov 29 07:58:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2608901816' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2608901816' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 29 07:58:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 07:58:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 29 07:58:20 compute-0 ceph-mon[75050]: pgmap v1640: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 12 KiB/s wr, 234 op/s
Nov 29 07:58:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2608901816' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2608901816' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:20 compute-0 ceph-mon[75050]: osdmap e264: 3 total, 3 up, 3 in
Nov 29 07:58:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 11 KiB/s wr, 210 op/s
Nov 29 07:58:21 compute-0 nova_compute[256729]: 2025-11-29 07:58:21.648 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403086.647541, 81e82526-de13-4350-a618-49168b2e029c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:58:21 compute-0 nova_compute[256729]: 2025-11-29 07:58:21.649 256736 INFO nova.compute.manager [-] [instance: 81e82526-de13-4350-a618-49168b2e029c] VM Stopped (Lifecycle Event)
Nov 29 07:58:21 compute-0 nova_compute[256729]: 2025-11-29 07:58:21.677 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:21 compute-0 nova_compute[256729]: 2025-11-29 07:58:21.789 256736 DEBUG nova.compute.manager [None req-b6e080eb-3eca-412b-b2ef-b32dd329a9ad - - - - - -] [instance: 81e82526-de13-4350-a618-49168b2e029c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:58:22 compute-0 nova_compute[256729]: 2025-11-29 07:58:22.235 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:22 compute-0 nova_compute[256729]: 2025-11-29 07:58:22.458 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:22 compute-0 nova_compute[256729]: 2025-11-29 07:58:22.642 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:22 compute-0 ceph-mon[75050]: pgmap v1642: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 11 KiB/s wr, 210 op/s
Nov 29 07:58:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 9.6 KiB/s wr, 190 op/s
Nov 29 07:58:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2254337466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2254337466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 29 07:58:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 07:58:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 29 07:58:23 compute-0 ceph-mon[75050]: pgmap v1643: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 9.6 KiB/s wr, 190 op/s
Nov 29 07:58:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2254337466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2254337466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:24 compute-0 sshd-session[284261]: Connection closed by authenticating user root 143.14.121.41 port 59216 [preauth]
Nov 29 07:58:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.7 KiB/s wr, 75 op/s
Nov 29 07:58:24 compute-0 ceph-mon[75050]: osdmap e265: 3 total, 3 up, 3 in
Nov 29 07:58:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 29 07:58:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 07:58:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 29 07:58:25 compute-0 podman[284267]: 2025-11-29 07:58:25.729446862 +0000 UTC m=+0.073649446 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 07:58:25 compute-0 podman[284266]: 2025-11-29 07:58:25.735447443 +0000 UTC m=+0.082922715 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 07:58:25 compute-0 podman[284265]: 2025-11-29 07:58:25.757907912 +0000 UTC m=+0.105096766 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 29 07:58:26 compute-0 ceph-mon[75050]: pgmap v1645: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.7 KiB/s wr, 75 op/s
Nov 29 07:58:26 compute-0 ceph-mon[75050]: osdmap e266: 3 total, 3 up, 3 in
Nov 29 07:58:26 compute-0 nova_compute[256729]: 2025-11-29 07:58:26.679 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.4 KiB/s wr, 77 op/s
Nov 29 07:58:27 compute-0 nova_compute[256729]: 2025-11-29 07:58:27.236 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.6 KiB/s wr, 91 op/s
Nov 29 07:58:29 compute-0 ceph-mon[75050]: pgmap v1647: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.4 KiB/s wr, 77 op/s
Nov 29 07:58:30 compute-0 sshd-session[284264]: Connection closed by authenticating user root 143.14.121.41 port 46158 [preauth]
Nov 29 07:58:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.6 KiB/s wr, 59 op/s
Nov 29 07:58:31 compute-0 nova_compute[256729]: 2025-11-29 07:58:31.682 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:32 compute-0 nova_compute[256729]: 2025-11-29 07:58:32.238 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 29 07:58:34 compute-0 ceph-mon[75050]: pgmap v1648: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.6 KiB/s wr, 91 op/s
Nov 29 07:58:34 compute-0 ceph-mon[75050]: pgmap v1649: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.6 KiB/s wr, 59 op/s
Nov 29 07:58:34 compute-0 sshd-session[284326]: Connection closed by authenticating user root 143.14.121.41 port 46164 [preauth]
Nov 29 07:58:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Nov 29 07:58:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:58:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:58:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:36 compute-0 nova_compute[256729]: 2025-11-29 07:58:36.687 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Nov 29 07:58:37 compute-0 nova_compute[256729]: 2025-11-29 07:58:37.239 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:37 compute-0 ceph-mon[75050]: pgmap v1650: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 29 07:58:37 compute-0 ceph-mon[75050]: pgmap v1651: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Nov 29 07:58:38 compute-0 sshd-session[284328]: Connection closed by authenticating user root 143.14.121.41 port 38590 [preauth]
Nov 29 07:58:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Nov 29 07:58:38 compute-0 ceph-mon[75050]: pgmap v1652: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Nov 29 07:58:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2726609623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2726609623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 29 07:58:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 07:58:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 29 07:58:39 compute-0 ceph-mon[75050]: pgmap v1653: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Nov 29 07:58:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2726609623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2726609623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 409 B/s wr, 10 op/s
Nov 29 07:58:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 29 07:58:41 compute-0 ceph-mon[75050]: osdmap e267: 3 total, 3 up, 3 in
Nov 29 07:58:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 07:58:41 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 29 07:58:41 compute-0 nova_compute[256729]: 2025-11-29 07:58:41.727 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:58:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/894763481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:42 compute-0 nova_compute[256729]: 2025-11-29 07:58:42.242 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 29 07:58:42 compute-0 ceph-mon[75050]: pgmap v1655: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 409 B/s wr, 10 op/s
Nov 29 07:58:42 compute-0 ceph-mon[75050]: osdmap e268: 3 total, 3 up, 3 in
Nov 29 07:58:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/894763481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 07:58:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 29 07:58:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 29 07:58:43 compute-0 sshd-session[284330]: Connection closed by authenticating user root 143.14.121.41 port 38600 [preauth]
Nov 29 07:58:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 29 07:58:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 07:58:43 compute-0 ceph-mon[75050]: osdmap e269: 3 total, 3 up, 3 in
Nov 29 07:58:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 29 07:58:44 compute-0 nova_compute[256729]: 2025-11-29 07:58:44.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:44 compute-0 nova_compute[256729]: 2025-11-29 07:58:44.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 07:58:44 compute-0 ceph-mon[75050]: pgmap v1658: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Nov 29 07:58:44 compute-0 ceph-mon[75050]: osdmap e270: 3 total, 3 up, 3 in
Nov 29 07:58:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1716765585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1716765585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 5.5 KiB/s wr, 80 op/s
Nov 29 07:58:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1745876148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1745876148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 29 07:58:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 07:58:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1716765585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1716765585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1745876148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1745876148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 29 07:58:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:46 compute-0 nova_compute[256729]: 2025-11-29 07:58:46.730 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 9.5 KiB/s wr, 141 op/s
Nov 29 07:58:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:58:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4083879782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:58:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4083879782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:58:47 compute-0 ceph-mon[75050]: pgmap v1660: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 5.5 KiB/s wr, 80 op/s
Nov 29 07:58:47 compute-0 ceph-mon[75050]: osdmap e271: 3 total, 3 up, 3 in
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.044773) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403127044934, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2339, "num_deletes": 268, "total_data_size": 3613093, "memory_usage": 3665392, "flush_reason": "Manual Compaction"}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403127219875, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3537889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26780, "largest_seqno": 29118, "table_properties": {"data_size": 3526910, "index_size": 7153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23113, "raw_average_key_size": 21, "raw_value_size": 3504911, "raw_average_value_size": 3227, "num_data_blocks": 312, "num_entries": 1086, "num_filter_entries": 1086, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402927, "oldest_key_time": 1764402927, "file_creation_time": 1764403127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 175216 microseconds, and 14864 cpu microseconds.
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:58:47 compute-0 nova_compute[256729]: 2025-11-29 07:58:47.243 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.220025) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3537889 bytes OK
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.220051) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.277492) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.277534) EVENT_LOG_v1 {"time_micros": 1764403127277525, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.277557) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3602920, prev total WAL file size 3602920, number of live WAL files 2.
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.278607) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3454KB)], [59(8081KB)]
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403127278675, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11813352, "oldest_snapshot_seqno": -1}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5908 keys, 10035547 bytes, temperature: kUnknown
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403127497267, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 10035547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9992227, "index_size": 27466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 148350, "raw_average_key_size": 25, "raw_value_size": 9881985, "raw_average_value_size": 1672, "num_data_blocks": 1117, "num_entries": 5908, "num_filter_entries": 5908, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.497699) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10035547 bytes
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.499393) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.0 rd, 45.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 6447, records dropped: 539 output_compression: NoCompression
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.499428) EVENT_LOG_v1 {"time_micros": 1764403127499414, "job": 32, "event": "compaction_finished", "compaction_time_micros": 218848, "compaction_time_cpu_micros": 44783, "output_level": 6, "num_output_files": 1, "total_output_size": 10035547, "num_input_records": 6447, "num_output_records": 5908, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403127501092, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403127503126, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.278485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.503265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.503269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.503271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.503272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:58:47 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-07:58:47.503276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 07:58:47 compute-0 sshd-session[284332]: Connection closed by authenticating user root 143.14.121.41 port 59836 [preauth]
Nov 29 07:58:48 compute-0 ceph-mon[75050]: pgmap v1662: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 9.5 KiB/s wr, 141 op/s
Nov 29 07:58:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4083879782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:58:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4083879782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
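
The two dispatches above are mon commands arriving as JSON from client.openstack, most likely a periodic capacity poll against the volumes pool. A sketch of issuing the same pair through the python-rados binding; the conffile path and client name are the ones visible in this log, the rest is an assumption.

import json
import rados

# Any client keyring with mon 'allow r' can run these read-only commands.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    # Identical payloads to the audit-log entries above.
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
        if ret != 0:
            raise RuntimeError(outs)
        print(json.loads(outbuf))
finally:
    cluster.shutdown()
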
Nov 29 07:58:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 7.8 KiB/s wr, 190 op/s
Nov 29 07:58:49 compute-0 nova_compute[256729]: 2025-11-29 07:58:49.159 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:58:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945455812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 29 07:58:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 07:58:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 29 07:58:50 compute-0 ceph-mon[75050]: pgmap v1663: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 7.8 KiB/s wr, 190 op/s
Nov 29 07:58:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1945455812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:58:50 compute-0 nova_compute[256729]: 2025-11-29 07:58:50.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:50 compute-0 nova_compute[256729]: 2025-11-29 07:58:50.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 07:58:50 compute-0 nova_compute[256729]: 2025-11-29 07:58:50.225 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
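
_run_pending_deletes, _sync_scheduler_instance_info and the rest of the nova-compute chatter here are oslo.service periodic tasks, all fired from one run_periodic_tasks loop. A minimal, self-contained sketch of that pattern using only the public oslo_service API; the class and task body are illustrative, not nova's actual code.

from oslo_config import cfg
from oslo_service import periodic_task

class DemoManager(periodic_task.PeriodicTasks):
    """Illustrative stand-in; nova's ComputeManager uses the same machinery."""

    def __init__(self):
        super().__init__(cfg.CONF)

    # run_immediately makes the demo fire on the first loop pass.
    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _run_pending_deletes(self, context):
        print('cleaning up deleted instances')  # hypothetical task body

mgr = DemoManager()
# A service normally calls this from a timer loop; each registered task
# runs once its spacing has elapsed.
mgr.run_periodic_tasks(context=None)
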
Nov 29 07:58:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 29 07:58:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 07:58:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 29 07:58:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.2 KiB/s wr, 154 op/s
Nov 29 07:58:51 compute-0 sshd-session[284334]: Connection closed by authenticating user root 143.14.121.41 port 59842 [preauth]
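
Threaded through the service logs, sshd keeps closing preauth connections from 143.14.121.41 (twice so far, and the same source reappears five more times below), the signature of a root password-guessing run. A short sketch that tallies such closures per user/source pair from a journal dump; the threshold is an arbitrary assumption.

import re
import sys
from collections import Counter

# Matches the sshd lines in this log, e.g.
#   "Connection closed by authenticating user root 143.14.121.41 port 59836 [preauth]"
PREAUTH_RE = re.compile(
    r'Connection closed by authenticating user (\S+) ([\d.]+) port \d+ \[preauth\]')

def suspicious_sources(path, threshold=5):
    hits = Counter()
    with open(path) as fh:
        for line in fh:
            m = PREAUTH_RE.search(line)
            if m:
                hits[(m.group(1), m.group(2))] += 1
    return {k: n for k, n in hits.items() if n >= threshold}

if __name__ == '__main__':
    for (user, ip), n in sorted(suspicious_sources(sys.argv[1]).items()):
        print(f'{ip} tried user {user!r} {n} times')
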
Nov 29 07:58:51 compute-0 nova_compute[256729]: 2025-11-29 07:58:51.734 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:52 compute-0 ceph-mon[75050]: osdmap e272: 3 total, 3 up, 3 in
Nov 29 07:58:52 compute-0 ceph-mon[75050]: osdmap e273: 3 total, 3 up, 3 in
Nov 29 07:58:52 compute-0 nova_compute[256729]: 2025-11-29 07:58:52.219 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:52 compute-0 nova_compute[256729]: 2025-11-29 07:58:52.244 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.7 KiB/s wr, 133 op/s
Nov 29 07:58:53 compute-0 ceph-mon[75050]: pgmap v1666: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 5.2 KiB/s wr, 154 op/s
Nov 29 07:58:54 compute-0 nova_compute[256729]: 2025-11-29 07:58:54.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:54 compute-0 nova_compute[256729]: 2025-11-29 07:58:54.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:54 compute-0 ceph-mon[75050]: pgmap v1667: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.7 KiB/s wr, 133 op/s
Nov 29 07:58:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.6 KiB/s wr, 78 op/s
Nov 29 07:58:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:58:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 29 07:58:56 compute-0 nova_compute[256729]: 2025-11-29 07:58:56.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:56 compute-0 nova_compute[256729]: 2025-11-29 07:58:56.187 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:56 compute-0 nova_compute[256729]: 2025-11-29 07:58:56.188 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:56 compute-0 nova_compute[256729]: 2025-11-29 07:58:56.189 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:56 compute-0 nova_compute[256729]: 2025-11-29 07:58:56.189 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:58:56 compute-0 nova_compute[256729]: 2025-11-29 07:58:56.190 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 07:58:56 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 29 07:58:56 compute-0 ceph-mon[75050]: pgmap v1668: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 1.6 KiB/s wr, 78 op/s
Nov 29 07:58:56 compute-0 podman[284361]: 2025-11-29 07:58:56.714564059 +0000 UTC m=+0.068239453 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:58:56 compute-0 nova_compute[256729]: 2025-11-29 07:58:56.736 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:56 compute-0 podman[284360]: 2025-11-29 07:58:56.744641471 +0000 UTC m=+0.102904937 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 07:58:56 compute-0 podman[284359]: 2025-11-29 07:58:56.748128805 +0000 UTC m=+0.118867244 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
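
The three podman records above are health_status events for ovn_metadata_agent, multipathd and ovn_controller, each healthy with a failing streak of 0; the check itself is just the /openstack/healthcheck script bind-mounted into each container. A sketch that follows the same event stream and flags unhealthy transitions; the HealthStatus field in podman's JSON events is assumed present (it is on recent releases).

import json
import subprocess

# Follow podman's event stream, keeping only health_status events like the
# three above. '{{json .}}' emits one JSON object per line.
proc = subprocess.Popen(
    ['podman', 'events', '--filter', 'event=health_status',
     '--format', '{{json .}}'],
    stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    ev = json.loads(line)
    name = ev.get('Name', '?')
    status = ev.get('HealthStatus', 'unknown')
    if status != 'healthy':
        print(f'container {name} went {status}')
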
Nov 29 07:58:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 31 op/s
Nov 29 07:58:57 compute-0 nova_compute[256729]: 2025-11-29 07:58:57.245 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:58:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897451595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:57 compute-0 nova_compute[256729]: 2025-11-29 07:58:57.547 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:57 compute-0 nova_compute[256729]: 2025-11-29 07:58:57.847 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:58:57 compute-0 nova_compute[256729]: 2025-11-29 07:58:57.850 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4519MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:58:57 compute-0 nova_compute[256729]: 2025-11-29 07:58:57.850 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:57 compute-0 nova_compute[256729]: 2025-11-29 07:58:57.851 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:58 compute-0 nova_compute[256729]: 2025-11-29 07:58:58.469 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:58:58 compute-0 nova_compute[256729]: 2025-11-29 07:58:58.469 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:58:58 compute-0 nova_compute[256729]: 2025-11-29 07:58:58.576 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:58:58 compute-0 sshd-session[284336]: Connection closed by authenticating user root 143.14.121.41 port 57058 [preauth]
Nov 29 07:58:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 17 op/s
Nov 29 07:58:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:58:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996174639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:59 compute-0 nova_compute[256729]: 2025-11-29 07:58:59.066 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:58:59 compute-0 nova_compute[256729]: 2025-11-29 07:58:59.077 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:58:59 compute-0 nova_compute[256729]: 2025-11-29 07:58:59.102 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
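
The inventory nova reports to placement above fixes the host's schedulable capacity: placement admits allocations while usage stays under (total - reserved) * allocation_ratio per resource class, which works out to 32 VCPUs, 7168 MB of RAM and 52.2 GB of disk for this host. The arithmetic, with the values copied from the log line:

# Inventory as reported in the nova.scheduler.client.report line above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    # Placement's admission ceiling for each resource class.
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f'{rc}: {capacity:g} schedulable')
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2
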
Nov 29 07:58:59 compute-0 nova_compute[256729]: 2025-11-29 07:58:59.131 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:58:59 compute-0 nova_compute[256729]: 2025-11-29 07:58:59.132 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:58:59 compute-0 nova_compute[256729]: 2025-11-29 07:58:59.133 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:58:59 compute-0 ceph-mon[75050]: osdmap e274: 3 total, 3 up, 3 in
Nov 29 07:58:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2897451595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:58:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:59.777 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:58:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:59.778 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:58:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:58:59.778 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.164 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.164 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.165 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.166 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:00 compute-0 nova_compute[256729]: 2025-11-29 07:59:00.166 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 07:59:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Nov 29 07:59:01 compute-0 nova_compute[256729]: 2025-11-29 07:59:01.739 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:02 compute-0 ceph-mon[75050]: pgmap v1670: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 31 op/s
Nov 29 07:59:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1996174639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:02 compute-0 nova_compute[256729]: 2025-11-29 07:59:02.247 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 818 B/s wr, 10 op/s
Nov 29 07:59:03 compute-0 sshd-session[284446]: Connection closed by authenticating user root 143.14.121.41 port 57070 [preauth]
Nov 29 07:59:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 29 07:59:04 compute-0 nova_compute[256729]: 2025-11-29 07:59:04.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 818 B/s wr, 9 op/s
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_07:59:05
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.data']
Nov 29 07:59:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
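
The balancer pass above is bounded by max misplaced 0.050000: a plan may not push more than 5% of PGs into a misplaced state at once, which for this cluster's 305 PGs caps a single plan at 15; here upmap found nothing to improve and prepared 0 of an allowed 10 changes. The budget arithmetic:

# target_max_misplaced_ratio and PG count as logged above.
max_misplaced_ratio = 0.05
total_pgs = 305

budget = int(max_misplaced_ratio * total_pgs)
print(f'balancer may misplace at most {budget} PGs per plan')  # 15
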
Nov 29 07:59:06 compute-0 nova_compute[256729]: 2025-11-29 07:59:06.741 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 890 B/s wr, 9 op/s
Nov 29 07:59:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 07:59:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
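
The rbd_support handlers above reload per-pool mirror-snapshot and trash-purge schedules for vms, volumes, backups and images, all with an empty start_after, i.e. nothing pending. A sketch for listing both schedule stores from the CLI; both rbd subcommands accept --recursive, and empty output is the expected result on this cluster.

import subprocess

# List both schedule stores the mgr module reloaded above.
for args in (['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '--recursive'],
             ['rbd', 'trash', 'purge', 'schedule', 'ls', '--recursive']):
    res = subprocess.run(args, capture_output=True, text=True)
    label = ' '.join(args[1:4])
    print(label, '->', res.stdout.strip() or res.stderr.strip() or '(no schedules)')
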
Nov 29 07:59:07 compute-0 ceph-mon[75050]: pgmap v1671: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 17 op/s
Nov 29 07:59:07 compute-0 ceph-mon[75050]: pgmap v1672: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 17 op/s
Nov 29 07:59:07 compute-0 nova_compute[256729]: 2025-11-29 07:59:07.249 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:07.307 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:59:07 compute-0 nova_compute[256729]: 2025-11-29 07:59:07.308 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:07.309 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 07:59:07 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 29 07:59:07 compute-0 sshd-session[284448]: Connection closed by authenticating user root 143.14.121.41 port 54954 [preauth]
Nov 29 07:59:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 29 07:59:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 07:59:08 compute-0 ceph-mon[75050]: pgmap v1673: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 818 B/s wr, 10 op/s
Nov 29 07:59:08 compute-0 ceph-mon[75050]: pgmap v1674: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 818 B/s wr, 9 op/s
Nov 29 07:59:08 compute-0 ceph-mon[75050]: pgmap v1675: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 890 B/s wr, 9 op/s
Nov 29 07:59:08 compute-0 ceph-mon[75050]: osdmap e275: 3 total, 3 up, 3 in
Nov 29 07:59:08 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 29 07:59:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216470164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216470164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 895 B/s wr, 7 op/s
Nov 29 07:59:10 compute-0 ceph-mon[75050]: osdmap e276: 3 total, 3 up, 3 in
Nov 29 07:59:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3216470164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3216470164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 895 B/s wr, 14 op/s
Nov 29 07:59:11 compute-0 ceph-mon[75050]: pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 895 B/s wr, 7 op/s
Nov 29 07:59:11 compute-0 sshd-session[284450]: Connection closed by authenticating user root 143.14.121.41 port 54964 [preauth]
Nov 29 07:59:11 compute-0 nova_compute[256729]: 2025-11-29 07:59:11.743 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:12 compute-0 nova_compute[256729]: 2025-11-29 07:59:12.251 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:12 compute-0 nova_compute[256729]: 2025-11-29 07:59:12.774 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 639 B/s wr, 13 op/s
Nov 29 07:59:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:13 compute-0 ceph-mon[75050]: pgmap v1679: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 895 B/s wr, 14 op/s
Nov 29 07:59:13 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:13.311 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:13 compute-0 sudo[284454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:13 compute-0 sudo[284454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:13 compute-0 sudo[284454]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:13 compute-0 sudo[284479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:13 compute-0 sudo[284479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:13 compute-0 sudo[284479]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:13 compute-0 sudo[284504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:13 compute-0 sudo[284504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:13 compute-0 sudo[284504]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:13 compute-0 sudo[284529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 07:59:13 compute-0 sudo[284529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
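
The sudo bursts here are the cephadm orchestrator refreshing this host over ssh as ceph-admin: probe with /bin/true, locate python3, then run the deployed cephadm binary, first `ls` (an inventory of deployed daemons) and, at 07:59:17 below, `gather-facts`. `cephadm ls` prints a JSON array with one object per daemon; a sketch that summarizes it, assuming a host where cephadm is simply on PATH:

import json
import subprocess

# `cephadm ls` emits a JSON array with one object per deployed daemon.
out = subprocess.run(['sudo', 'cephadm', 'ls'],
                     capture_output=True, text=True, check=True)
for daemon in json.loads(out.stdout):
    print(f"{daemon.get('name', '?'):30} state={daemon.get('state')} "
          f"version={daemon.get('version')}")
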
Nov 29 07:59:14 compute-0 ceph-mon[75050]: pgmap v1680: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 639 B/s wr, 13 op/s
Nov 29 07:59:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 511 B/s wr, 16 op/s
Nov 29 07:59:14 compute-0 sshd-session[284452]: Connection closed by authenticating user root 143.14.121.41 port 35910 [preauth]
Nov 29 07:59:15 compute-0 podman[284625]: 2025-11-29 07:59:15.193099592 +0000 UTC m=+0.949203970 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034720526470013676 of space, bias 1.0, pg target 0.10416157941004103 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 07:59:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
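
Every pg_autoscaler line above follows the same formula: the pool's share of raw space, times its bias, times the cluster's PG budget, then quantized to a power of two (and left at the pool's current value unless the ideal differs by a large factor, which is why everything stays put here). The logged targets are reproduced exactly by a budget of 300, i.e. 3 OSDs at the default mon_target_pg_per_osd of 100; that budget is an inference from the numbers, not something the log states:

# Reproduce the 'pg target' values logged by the autoscaler above.
# budget = num_osds * mon_target_pg_per_osd is an inference: 3 * 100 = 300
# makes every logged line come out exactly.
BUDGET = 3 * 100

pools = {           # usage_ratio, bias (copied from the log)
    '.mgr':               (7.185749983720779e-06, 1.0),
    'volumes':            (0.00034720526470013676, 1.0),
    'images':             (0.000665858301588852, 1.0),
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
}

for pool, (ratio, bias) in pools.items():
    target = ratio * bias * BUDGET
    print(f'{pool}: pg target {target:.6g}')
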
Nov 29 07:59:15 compute-0 podman[284625]: 2025-11-29 07:59:15.640180927 +0000 UTC m=+1.396285355 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:16 compute-0 ceph-mon[75050]: pgmap v1681: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 511 B/s wr, 16 op/s
Nov 29 07:59:16 compute-0 nova_compute[256729]: 2025-11-29 07:59:16.744 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 29 07:59:16 compute-0 sudo[284529]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 844 B/s wr, 23 op/s
Nov 29 07:59:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:59:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 07:59:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 29 07:59:17 compute-0 nova_compute[256729]: 2025-11-29 07:59:17.253 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:59:17 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:17 compute-0 sudo[284783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:17 compute-0 sudo[284783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:17 compute-0 sudo[284783]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:17 compute-0 sudo[284808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:17 compute-0 sudo[284808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:17 compute-0 sudo[284808]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:17 compute-0 sudo[284833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:17 compute-0 sudo[284833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:17 compute-0 sudo[284833]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:17 compute-0 sudo[284858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 07:59:17 compute-0 sudo[284858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 29 07:59:18 compute-0 ceph-mon[75050]: pgmap v1682: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 844 B/s wr, 23 op/s
Nov 29 07:59:18 compute-0 ceph-mon[75050]: osdmap e277: 3 total, 3 up, 3 in
Nov 29 07:59:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:18 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:18 compute-0 sudo[284858]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 29 07:59:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:19 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 50156baf-015f-41f2-840f-c259de888ad5 does not exist
Nov 29 07:59:19 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a35abecf-7bc6-4419-825d-79b23a3acbfc does not exist
Nov 29 07:59:19 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 269b7525-b2d0-4eb2-82e7-9a0bbcc595ae does not exist
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:59:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 07:59:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:19 compute-0 sudo[284914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:19 compute-0 sudo[284914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:19 compute-0 sudo[284914]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:19 compute-0 sudo[284939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:19 compute-0 sudo[284939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:19 compute-0 sudo[284939]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:19 compute-0 sudo[284964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:19 compute-0 sudo[284964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:19 compute-0 sudo[284964]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:19 compute-0 sudo[284989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 07:59:19 compute-0 sudo[284989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2596339260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:20 compute-0 podman[285055]: 2025-11-29 07:59:20.304677226 +0000 UTC m=+0.083185962 container create 0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 07:59:20 compute-0 podman[285055]: 2025-11-29 07:59:20.265944382 +0000 UTC m=+0.044453128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:20 compute-0 systemd[1]: Started libpod-conmon-0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55.scope.
Nov 29 07:59:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:20 compute-0 podman[285055]: 2025-11-29 07:59:20.520268011 +0000 UTC m=+0.298776737 container init 0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 07:59:20 compute-0 podman[285055]: 2025-11-29 07:59:20.536568665 +0000 UTC m=+0.315077391 container start 0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:59:20 compute-0 crazy_heisenberg[285071]: 167 167
Nov 29 07:59:20 compute-0 systemd[1]: libpod-0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55.scope: Deactivated successfully.
Nov 29 07:59:20 compute-0 podman[285055]: 2025-11-29 07:59:20.551132715 +0000 UTC m=+0.329641491 container attach 0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 07:59:20 compute-0 podman[285055]: 2025-11-29 07:59:20.551811303 +0000 UTC m=+0.330320039 container died 0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd8737ddbc2a56c58cd89f00ee986e9646abaca194fd0f1c45a38acfe8227612-merged.mount: Deactivated successfully.
Nov 29 07:59:20 compute-0 podman[285055]: 2025-11-29 07:59:20.597627336 +0000 UTC m=+0.376136092 container remove 0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 07:59:20 compute-0 systemd[1]: libpod-conmon-0697e54814791c2eab137bd41e1bba493ace349aa6c20ee607f77f6b93f58c55.scope: Deactivated successfully.
Nov 29 07:59:20 compute-0 nova_compute[256729]: 2025-11-29 07:59:20.681 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:20 compute-0 nova_compute[256729]: 2025-11-29 07:59:20.684 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 29 07:59:20 compute-0 ceph-mon[75050]: pgmap v1684: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Nov 29 07:59:20 compute-0 ceph-mon[75050]: osdmap e278: 3 total, 3 up, 3 in
Nov 29 07:59:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 07:59:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 07:59:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 07:59:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2596339260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:20 compute-0 nova_compute[256729]: 2025-11-29 07:59:20.706 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 07:59:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 07:59:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 29 07:59:20 compute-0 nova_compute[256729]: 2025-11-29 07:59:20.814 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:20 compute-0 nova_compute[256729]: 2025-11-29 07:59:20.815 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:20 compute-0 nova_compute[256729]: 2025-11-29 07:59:20.824 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 07:59:20 compute-0 nova_compute[256729]: 2025-11-29 07:59:20.824 256736 INFO nova.compute.claims [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Claim successful on node compute-0.ctlplane.example.com
Nov 29 07:59:20 compute-0 sshd-session[284645]: Connection closed by authenticating user root 143.14.121.41 port 35924 [preauth]
Nov 29 07:59:20 compute-0 podman[285094]: 2025-11-29 07:59:20.753867207 +0000 UTC m=+0.021844645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 KiB/s wr, 34 op/s
Nov 29 07:59:21 compute-0 podman[285094]: 2025-11-29 07:59:21.077393443 +0000 UTC m=+0.345370891 container create e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:59:21 compute-0 systemd[1]: Started libpod-conmon-e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115.scope.
Nov 29 07:59:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac19f7f867bbdb316e2b23692206a85a787b169bd14f1bc1579653c6e6ac34b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac19f7f867bbdb316e2b23692206a85a787b169bd14f1bc1579653c6e6ac34b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac19f7f867bbdb316e2b23692206a85a787b169bd14f1bc1579653c6e6ac34b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac19f7f867bbdb316e2b23692206a85a787b169bd14f1bc1579653c6e6ac34b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac19f7f867bbdb316e2b23692206a85a787b169bd14f1bc1579653c6e6ac34b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:21 compute-0 podman[285094]: 2025-11-29 07:59:21.196668117 +0000 UTC m=+0.464645605 container init e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meninsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.204 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:21 compute-0 podman[285094]: 2025-11-29 07:59:21.212300595 +0000 UTC m=+0.480278003 container start e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 07:59:21 compute-0 podman[285094]: 2025-11-29 07:59:21.371791713 +0000 UTC m=+0.639769151 container attach e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 07:59:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317610541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.660 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.667 256736 DEBUG nova.compute.provider_tree [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.713 256736 DEBUG nova.scheduler.client.report [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.746 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.836 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.838 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 07:59:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.971 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 07:59:21 compute-0 nova_compute[256729]: 2025-11-29 07:59:21.971 256736 DEBUG nova.network.neutron [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.044 256736 INFO nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 07:59:22 compute-0 ceph-mon[75050]: osdmap e279: 3 total, 3 up, 3 in
Nov 29 07:59:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3317610541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.070 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 07:59:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 07:59:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.171 256736 DEBUG nova.policy [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4dd11438bdce4fc7982e86e6bc9fbf46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fcc62171a1a3439e8156931de2a25f02', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.255 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.300 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.303 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.303 256736 INFO nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Creating image(s)
Nov 29 07:59:22 compute-0 blissful_meninsky[285110]: --> passed data devices: 0 physical, 3 LVM
Nov 29 07:59:22 compute-0 blissful_meninsky[285110]: --> relative data size: 1.0
Nov 29 07:59:22 compute-0 blissful_meninsky[285110]: --> All data devices are unavailable
Nov 29 07:59:22 compute-0 systemd[1]: libpod-e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115.scope: Deactivated successfully.
Nov 29 07:59:22 compute-0 podman[285094]: 2025-11-29 07:59:22.352813151 +0000 UTC m=+1.620790559 container died e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meninsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:59:22 compute-0 systemd[1]: libpod-e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115.scope: Consumed 1.076s CPU time.
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.375 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.407 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.428 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.434 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ac19f7f867bbdb316e2b23692206a85a787b169bd14f1bc1579653c6e6ac34b-merged.mount: Deactivated successfully.
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.500 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.500 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.501 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.501 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.526 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:22 compute-0 nova_compute[256729]: 2025-11-29 07:59:22.531 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 b7d73f17-a739-4ace-8e3a-00050fcea21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:22 compute-0 podman[285094]: 2025-11-29 07:59:22.613024317 +0000 UTC m=+1.881001725 container remove e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:59:22 compute-0 systemd[1]: libpod-conmon-e27e83b48d8676ff3c9dd1da361603e81fc4f65a24aae47c10366d5f02416115.scope: Deactivated successfully.
Nov 29 07:59:22 compute-0 sudo[284989]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:22 compute-0 sudo[285265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:22 compute-0 sudo[285265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:22 compute-0 sudo[285265]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:22 compute-0 sudo[285290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:22 compute-0 sudo[285290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:22 compute-0 sudo[285290]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 29 07:59:22 compute-0 sudo[285315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:22 compute-0 sudo[285315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:22 compute-0 sudo[285315]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:23 compute-0 sudo[285340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 07:59:23 compute-0 sudo[285340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:23 compute-0 ceph-mon[75050]: pgmap v1687: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 KiB/s wr, 34 op/s
Nov 29 07:59:23 compute-0 ceph-mon[75050]: osdmap e280: 3 total, 3 up, 3 in
Nov 29 07:59:23 compute-0 nova_compute[256729]: 2025-11-29 07:59:23.472 256736 DEBUG nova.network.neutron [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Successfully created port: b47fb61f-f97f-4119-b96f-3ba939ef6867 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 07:59:23 compute-0 podman[285410]: 2025-11-29 07:59:23.474028791 +0000 UTC m=+0.036111205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 29 07:59:24 compute-0 podman[285410]: 2025-11-29 07:59:24.389082819 +0000 UTC m=+0.951165223 container create 553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:59:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 07:59:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.592 256736 DEBUG nova.network.neutron [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Successfully updated port: b47fb61f-f97f-4119-b96f-3ba939ef6867 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.750 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.750 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquired lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.750 256736 DEBUG nova.network.neutron [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.763 256736 DEBUG nova.compute.manager [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-changed-b47fb61f-f97f-4119-b96f-3ba939ef6867 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.764 256736 DEBUG nova.compute.manager [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Refreshing instance network info cache due to event network-changed-b47fb61f-f97f-4119-b96f-3ba939ef6867. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.764 256736 DEBUG oslo_concurrency.lockutils [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 KiB/s wr, 42 op/s
Nov 29 07:59:24 compute-0 nova_compute[256729]: 2025-11-29 07:59:24.985 256736 DEBUG nova.network.neutron [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 07:59:25 compute-0 nova_compute[256729]: 2025-11-29 07:59:25.942 256736 DEBUG nova.network.neutron [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updating instance_info_cache with network_info: [{"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:25 compute-0 nova_compute[256729]: 2025-11-29 07:59:25.969 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Releasing lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:25 compute-0 nova_compute[256729]: 2025-11-29 07:59:25.970 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Instance network_info: |[{"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 07:59:25 compute-0 nova_compute[256729]: 2025-11-29 07:59:25.970 256736 DEBUG oslo_concurrency.lockutils [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:25 compute-0 nova_compute[256729]: 2025-11-29 07:59:25.971 256736 DEBUG nova.network.neutron [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Refreshing network info cache for port b47fb61f-f97f-4119-b96f-3ba939ef6867 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:59:26 compute-0 systemd[1]: Started libpod-conmon-553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2.scope.
Nov 29 07:59:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:26 compute-0 ceph-mon[75050]: pgmap v1689: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Nov 29 07:59:26 compute-0 sshd-session[285113]: Connection closed by authenticating user root 143.14.121.41 port 35930 [preauth]
Nov 29 07:59:26 compute-0 podman[285410]: 2025-11-29 07:59:26.489715414 +0000 UTC m=+3.051797888 container init 553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curran, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 07:59:26 compute-0 podman[285410]: 2025-11-29 07:59:26.507895719 +0000 UTC m=+3.069978123 container start 553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 07:59:26 compute-0 eloquent_curran[285426]: 167 167
Nov 29 07:59:26 compute-0 systemd[1]: libpod-553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2.scope: Deactivated successfully.
Nov 29 07:59:26 compute-0 podman[285410]: 2025-11-29 07:59:26.684895344 +0000 UTC m=+3.246977808 container attach 553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:59:26 compute-0 podman[285410]: 2025-11-29 07:59:26.687426172 +0000 UTC m=+3.249508536 container died 553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curran, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 29 07:59:26 compute-0 nova_compute[256729]: 2025-11-29 07:59:26.748 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 106 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1008 KiB/s wr, 77 op/s
Nov 29 07:59:27 compute-0 nova_compute[256729]: 2025-11-29 07:59:27.017 256736 DEBUG nova.network.neutron [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updated VIF entry in instance network info cache for port b47fb61f-f97f-4119-b96f-3ba939ef6867. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:59:27 compute-0 nova_compute[256729]: 2025-11-29 07:59:27.018 256736 DEBUG nova.network.neutron [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updating instance_info_cache with network_info: [{"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:27 compute-0 nova_compute[256729]: 2025-11-29 07:59:27.036 256736 DEBUG oslo_concurrency.lockutils [req-25f38913-313f-48ec-83ca-cb1ddeca3ce0 req-f19ca4ac-863b-4074-9c03-eff279cacee5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e28b1ad30b2f2bae0ff63f71aaeea2d63a7716acbcae942b43915a319e26cd2-merged.mount: Deactivated successfully.
Nov 29 07:59:27 compute-0 nova_compute[256729]: 2025-11-29 07:59:27.216 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 b7d73f17-a739-4ace-8e3a-00050fcea21c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:27 compute-0 nova_compute[256729]: 2025-11-29 07:59:27.280 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:27 compute-0 nova_compute[256729]: 2025-11-29 07:59:27.287 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] resizing rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 07:59:27 compute-0 podman[285410]: 2025-11-29 07:59:27.384429608 +0000 UTC m=+3.946511992 container remove 553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 07:59:27 compute-0 systemd[1]: libpod-conmon-553015cdf1ca26a89f7383ec4f92d7a43f6af00150df3b07545a540eb86263d2.scope: Deactivated successfully.
Nov 29 07:59:27 compute-0 podman[285447]: 2025-11-29 07:59:27.454392256 +0000 UTC m=+0.278562088 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 07:59:27 compute-0 podman[285446]: 2025-11-29 07:59:27.463619402 +0000 UTC m=+0.293031273 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 07:59:27 compute-0 podman[285445]: 2025-11-29 07:59:27.497831065 +0000 UTC m=+0.327540925 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 07:59:27 compute-0 podman[285569]: 2025-11-29 07:59:27.564878295 +0000 UTC m=+0.026713534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:27 compute-0 podman[285569]: 2025-11-29 07:59:27.687460338 +0000 UTC m=+0.149295537 container create c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:59:27 compute-0 ceph-mon[75050]: osdmap e281: 3 total, 3 up, 3 in
Nov 29 07:59:27 compute-0 ceph-mon[75050]: pgmap v1691: 305 pgs: 305 active+clean; 88 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.8 KiB/s wr, 42 op/s
Nov 29 07:59:27 compute-0 systemd[1]: Started libpod-conmon-c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9.scope.
Nov 29 07:59:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a36d6da5177258be5c32e043139bc759c84b022170a07dffeda81ee42183ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a36d6da5177258be5c32e043139bc759c84b022170a07dffeda81ee42183ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a36d6da5177258be5c32e043139bc759c84b022170a07dffeda81ee42183ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a36d6da5177258be5c32e043139bc759c84b022170a07dffeda81ee42183ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:27 compute-0 podman[285569]: 2025-11-29 07:59:27.93779015 +0000 UTC m=+0.399625319 container init c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 07:59:27 compute-0 nova_compute[256729]: 2025-11-29 07:59:27.949 256736 DEBUG nova.objects.instance [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lazy-loading 'migration_context' on Instance uuid b7d73f17-a739-4ace-8e3a-00050fcea21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:59:27 compute-0 podman[285569]: 2025-11-29 07:59:27.95203738 +0000 UTC m=+0.413872569 container start c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:27 compute-0 podman[285569]: 2025-11-29 07:59:27.960640751 +0000 UTC m=+0.422475920 container attach c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.177 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.179 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Ensure instance console log exists: /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.180 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.180 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.180 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.183 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Start _get_guest_xml network_info=[{"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.189 256736 WARNING nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.201 256736 DEBUG nova.virt.libvirt.host [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.201 256736 DEBUG nova.virt.libvirt.host [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.205 256736 DEBUG nova.virt.libvirt.host [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.205 256736 DEBUG nova.virt.libvirt.host [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.206 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.206 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.207 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.207 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.207 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.208 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.208 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.208 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.209 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.209 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.209 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.209 256736 DEBUG nova.virt.hardware [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.213 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:28 compute-0 sshd-session[285442]: Connection closed by authenticating user root 143.14.121.41 port 42464 [preauth]
Nov 29 07:59:28 compute-0 ovn_controller[153383]: 2025-11-29T07:59:28Z|00144|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 07:59:28 compute-0 ceph-mon[75050]: pgmap v1692: 305 pgs: 305 active+clean; 106 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1008 KiB/s wr, 77 op/s
Nov 29 07:59:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992113035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.717 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.753 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:28 compute-0 nova_compute[256729]: 2025-11-29 07:59:28.759 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:28 compute-0 competent_elgamal[285586]: {
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:     "0": [
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:         {
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "devices": [
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "/dev/loop3"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             ],
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_name": "ceph_lv0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_size": "21470642176",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "name": "ceph_lv0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "tags": {
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cluster_name": "ceph",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.crush_device_class": "",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.encrypted": "0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osd_id": "0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.type": "block",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.vdo": "0"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             },
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "type": "block",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "vg_name": "ceph_vg0"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:         }
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:     ],
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:     "1": [
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:         {
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "devices": [
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "/dev/loop4"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             ],
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_name": "ceph_lv1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_size": "21470642176",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "name": "ceph_lv1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "tags": {
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cluster_name": "ceph",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.crush_device_class": "",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.encrypted": "0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osd_id": "1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.type": "block",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.vdo": "0"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             },
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "type": "block",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "vg_name": "ceph_vg1"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:         }
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:     ],
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:     "2": [
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:         {
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "devices": [
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "/dev/loop5"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             ],
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_name": "ceph_lv2",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_size": "21470642176",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "name": "ceph_lv2",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "tags": {
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.cluster_name": "ceph",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.crush_device_class": "",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.encrypted": "0",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osd_id": "2",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.type": "block",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:                 "ceph.vdo": "0"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             },
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "type": "block",
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:             "vg_name": "ceph_vg2"
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:         }
Nov 29 07:59:28 compute-0 competent_elgamal[285586]:     ]
Nov 29 07:59:28 compute-0 competent_elgamal[285586]: }
Nov 29 07:59:28 compute-0 systemd[1]: libpod-c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9.scope: Deactivated successfully.
Nov 29 07:59:28 compute-0 podman[285569]: 2025-11-29 07:59:28.802092873 +0000 UTC m=+1.263928092 container died c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 07:59:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 29 07:59:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 07:59:28 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 29 07:59:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-46a36d6da5177258be5c32e043139bc759c84b022170a07dffeda81ee42183ed-merged.mount: Deactivated successfully.
Nov 29 07:59:28 compute-0 podman[285569]: 2025-11-29 07:59:28.869010319 +0000 UTC m=+1.330845498 container remove c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:59:28 compute-0 systemd[1]: libpod-conmon-c1c4fde9abdf0cce0301983d7c7bef5ebecd8b74922e96aa12412d00e2053bc9.scope: Deactivated successfully.
Nov 29 07:59:28 compute-0 sudo[285340]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 129 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Nov 29 07:59:28 compute-0 sudo[285669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:28 compute-0 sudo[285669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:28 compute-0 sudo[285669]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:29 compute-0 sudo[285713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 07:59:29 compute-0 sudo[285713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:29 compute-0 sudo[285713]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:29 compute-0 sudo[285738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:29 compute-0 sudo[285738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:29 compute-0 sudo[285738]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:29 compute-0 sudo[285763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 07:59:29 compute-0 sudo[285763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915423885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.265 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.268 256736 DEBUG nova.virt.libvirt.vif [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1381471149',display_name='tempest-TestEncryptedCinderVolumes-server-1381471149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1381471149',id=14,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoRpe6JudF8DL3YPpG4xQlF0vmXbah+aakKvQkXx0vN4UE4/OrFObDfMHltj6lE4DUsspUgVRZMFZxOYhHVrh4+CzV296LnoZu7BzFkKEj4ePlqyWTpPKligP/ipXKSjw==',key_name='tempest-keypair-1205044998',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fcc62171a1a3439e8156931de2a25f02',ramdisk_id='',reservation_id='r-1qif0ojv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1760439909',owner_user_name='tempest-TestEncryptedCinderVolumes-1760439909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dd11438bdce4fc7982e86e6bc9fbf46',uuid=b7d73f17-a739-4ace-8e3a-00050fcea21c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.268 256736 DEBUG nova.network.os_vif_util [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Converting VIF {"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.270 256736 DEBUG nova.network.os_vif_util [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:99:e9,bridge_name='br-int',has_traffic_filtering=True,id=b47fb61f-f97f-4119-b96f-3ba939ef6867,network=Network(bedd0aba-5435-430a-9787-eb355b452278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb47fb61f-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.271 256736 DEBUG nova.objects.instance [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lazy-loading 'pci_devices' on Instance uuid b7d73f17-a739-4ace-8e3a-00050fcea21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.670 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <uuid>b7d73f17-a739-4ace-8e3a-00050fcea21c</uuid>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <name>instance-0000000e</name>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <metadata>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1381471149</nova:name>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 07:59:28</nova:creationTime>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:user uuid="4dd11438bdce4fc7982e86e6bc9fbf46">tempest-TestEncryptedCinderVolumes-1760439909-project-member</nova:user>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:project uuid="fcc62171a1a3439e8156931de2a25f02">tempest-TestEncryptedCinderVolumes-1760439909</nova:project>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <nova:port uuid="b47fb61f-f97f-4119-b96f-3ba939ef6867">
Nov 29 07:59:29 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   </metadata>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <system>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <entry name="serial">b7d73f17-a739-4ace-8e3a-00050fcea21c</entry>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <entry name="uuid">b7d73f17-a739-4ace-8e3a-00050fcea21c</entry>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </system>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <os>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   </os>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <features>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <apic/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   </features>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   </clock>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   </cpu>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   <devices>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/b7d73f17-a739-4ace-8e3a-00050fcea21c_disk">
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       </source>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/b7d73f17-a739-4ace-8e3a-00050fcea21c_disk.config">
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       </source>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 07:59:29 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       </auth>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </disk>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:e2:99:e9"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <target dev="tapb47fb61f-f9"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </interface>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/console.log" append="off"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </serial>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <video>
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </video>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </rng>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 07:59:29 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 07:59:29 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 07:59:29 compute-0 nova_compute[256729]:   </devices>
Nov 29 07:59:29 compute-0 nova_compute[256729]: </domain>
Nov 29 07:59:29 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.672 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Preparing to wait for external event network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.673 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.673 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.673 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.674 256736 DEBUG nova.virt.libvirt.vif [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:59:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1381471149',display_name='tempest-TestEncryptedCinderVolumes-server-1381471149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1381471149',id=14,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoRpe6JudF8DL3YPpG4xQlF0vmXbah+aakKvQkXx0vN4UE4/OrFObDfMHltj6lE4DUsspUgVRZMFZxOYhHVrh4+CzV296LnoZu7BzFkKEj4ePlqyWTpPKligP/ipXKSjw==',key_name='tempest-keypair-1205044998',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fcc62171a1a3439e8156931de2a25f02',ramdisk_id='',reservation_id='r-1qif0ojv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1760439909',owner_user_name='tempest-TestEncryptedCinderVolumes-1760439909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:59:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dd11438bdce4fc7982e86e6bc9fbf46',uuid=b7d73f17-a739-4ace-8e3a-00050fcea21c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.674 256736 DEBUG nova.network.os_vif_util [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Converting VIF {"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.674 256736 DEBUG nova.network.os_vif_util [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:99:e9,bridge_name='br-int',has_traffic_filtering=True,id=b47fb61f-f97f-4119-b96f-3ba939ef6867,network=Network(bedd0aba-5435-430a-9787-eb355b452278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb47fb61f-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.675 256736 DEBUG os_vif [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:99:e9,bridge_name='br-int',has_traffic_filtering=True,id=b47fb61f-f97f-4119-b96f-3ba939ef6867,network=Network(bedd0aba-5435-430a-9787-eb355b452278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb47fb61f-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.675 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:29 compute-0 podman[285830]: 2025-11-29 07:59:29.581638253 +0000 UTC m=+0.026513689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.676 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.676 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.688 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.688 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb47fb61f-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.689 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb47fb61f-f9, col_values=(('external_ids', {'iface-id': 'b47fb61f-f97f-4119-b96f-3ba939ef6867', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:99:e9', 'vm-uuid': 'b7d73f17-a739-4ace-8e3a-00050fcea21c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.691 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:29 compute-0 NetworkManager[48962]: <info>  [1764403169.6928] manager: (tapb47fb61f-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.693 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.699 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:29 compute-0 nova_compute[256729]: 2025-11-29 07:59:29.700 256736 INFO os_vif [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:99:e9,bridge_name='br-int',has_traffic_filtering=True,id=b47fb61f-f97f-4119-b96f-3ba939ef6867,network=Network(bedd0aba-5435-430a-9787-eb355b452278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb47fb61f-f9')
Nov 29 07:59:30 compute-0 podman[285830]: 2025-11-29 07:59:30.621492361 +0000 UTC m=+1.066367807 container create 489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 07:59:30 compute-0 nova_compute[256729]: 2025-11-29 07:59:30.662 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:59:30 compute-0 nova_compute[256729]: 2025-11-29 07:59:30.663 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 07:59:30 compute-0 nova_compute[256729]: 2025-11-29 07:59:30.663 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] No VIF found with MAC fa:16:3e:e2:99:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 07:59:30 compute-0 nova_compute[256729]: 2025-11-29 07:59:30.664 256736 INFO nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Using config drive
Nov 29 07:59:30 compute-0 nova_compute[256729]: 2025-11-29 07:59:30.696 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 129 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 MiB/s wr, 68 op/s
Nov 29 07:59:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3992113035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:31 compute-0 ceph-mon[75050]: osdmap e282: 3 total, 3 up, 3 in
Nov 29 07:59:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3915423885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:31 compute-0 nova_compute[256729]: 2025-11-29 07:59:31.406 256736 INFO nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Creating config drive at /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/disk.config
Nov 29 07:59:31 compute-0 nova_compute[256729]: 2025-11-29 07:59:31.417 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywi6t5xg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:31 compute-0 nova_compute[256729]: 2025-11-29 07:59:31.570 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywi6t5xg" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:32 compute-0 systemd[1]: Started libpod-conmon-489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973.scope.
Nov 29 07:59:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:32 compute-0 sshd-session[285629]: Connection closed by authenticating user root 143.14.121.41 port 42474 [preauth]
Nov 29 07:59:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.5 MiB/s wr, 69 op/s
Nov 29 07:59:33 compute-0 podman[285830]: 2025-11-29 07:59:33.204499615 +0000 UTC m=+3.649375121 container init 489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.209 256736 DEBUG nova.storage.rbd_utils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] rbd image b7d73f17-a739-4ace-8e3a-00050fcea21c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.217 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/disk.config b7d73f17-a739-4ace-8e3a-00050fcea21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:33 compute-0 podman[285830]: 2025-11-29 07:59:33.221325595 +0000 UTC m=+3.666201041 container start 489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:33 compute-0 sad_lichterman[285876]: 167 167
Nov 29 07:59:33 compute-0 systemd[1]: libpod-489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973.scope: Deactivated successfully.
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.248 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 podman[285830]: 2025-11-29 07:59:33.252780695 +0000 UTC m=+3.697656121 container attach 489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 07:59:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:33 compute-0 podman[285830]: 2025-11-29 07:59:33.255315043 +0000 UTC m=+3.700190519 container died 489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 07:59:33 compute-0 ceph-mon[75050]: pgmap v1694: 305 pgs: 305 active+clean; 129 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Nov 29 07:59:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-378c0cff3150ce977691dfafaefa857b93c2ae09d614a8317d44d7f3b192dc24-merged.mount: Deactivated successfully.
Nov 29 07:59:33 compute-0 podman[285830]: 2025-11-29 07:59:33.454389057 +0000 UTC m=+3.899264493 container remove 489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 07:59:33 compute-0 systemd[1]: libpod-conmon-489135a98a0af4814220a4dba55aeaa8df43e6b00123f5a53decdbef4d53f973.scope: Deactivated successfully.
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.550 256736 DEBUG oslo_concurrency.processutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/disk.config b7d73f17-a739-4ace-8e3a-00050fcea21c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.550 256736 INFO nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Deleting local config drive /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c/disk.config because it was imported into RBD.
Nov 29 07:59:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1698056634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:33 compute-0 kernel: tapb47fb61f-f9: entered promiscuous mode
Nov 29 07:59:33 compute-0 NetworkManager[48962]: <info>  [1764403173.6336] manager: (tapb47fb61f-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 07:59:33 compute-0 ovn_controller[153383]: 2025-11-29T07:59:33Z|00145|binding|INFO|Claiming lport b47fb61f-f97f-4119-b96f-3ba939ef6867 for this chassis.
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.634 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 ovn_controller[153383]: 2025-11-29T07:59:33Z|00146|binding|INFO|b47fb61f-f97f-4119-b96f-3ba939ef6867: Claiming fa:16:3e:e2:99:e9 10.100.0.8
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.641 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.661 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:99:e9 10.100.0.8'], port_security=['fa:16:3e:e2:99:e9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b7d73f17-a739-4ace-8e3a-00050fcea21c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bedd0aba-5435-430a-9787-eb355b452278', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fcc62171a1a3439e8156931de2a25f02', 'neutron:revision_number': '2', 'neutron:security_group_ids': '881ec646-08bd-40a9-899f-8c2bf5255189', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=539ec311-4ad9-4a00-b251-38a0a82587c0, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=b47fb61f-f97f-4119-b96f-3ba939ef6867) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.663 163655 INFO neutron.agent.ovn.metadata.agent [-] Port b47fb61f-f97f-4119-b96f-3ba939ef6867 in datapath bedd0aba-5435-430a-9787-eb355b452278 bound to our chassis
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.664 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bedd0aba-5435-430a-9787-eb355b452278
Nov 29 07:59:33 compute-0 systemd-machined[217781]: New machine qemu-14-instance-0000000e.
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.678 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[09e4d87b-35ac-43b4-829a-6a78ae0e614e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.679 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbedd0aba-51 in ovnmeta-bedd0aba-5435-430a-9787-eb355b452278 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.681 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbedd0aba-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.681 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f98d5653-98cd-4158-b821-acde268aed50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 podman[285937]: 2025-11-29 07:59:33.681418887 +0000 UTC m=+0.062692155 container create c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.682 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b013075b-458f-4ebc-969a-58e126712e68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.701 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[c0de48a7-b259-4a5f-9d16-3c9e5ef4a7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 systemd-udevd[285966]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.723 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 systemd[1]: Started libpod-conmon-c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed.scope.
Nov 29 07:59:33 compute-0 ovn_controller[153383]: 2025-11-29T07:59:33Z|00147|binding|INFO|Setting lport b47fb61f-f97f-4119-b96f-3ba939ef6867 ovn-installed in OVS
Nov 29 07:59:33 compute-0 ovn_controller[153383]: 2025-11-29T07:59:33Z|00148|binding|INFO|Setting lport b47fb61f-f97f-4119-b96f-3ba939ef6867 up in Southbound
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.728 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f845e9d0-104b-4f04-b188-00f223d94ff5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.730 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 NetworkManager[48962]: <info>  [1764403173.7345] device (tapb47fb61f-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 07:59:33 compute-0 NetworkManager[48962]: <info>  [1764403173.7357] device (tapb47fb61f-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 07:59:33 compute-0 podman[285937]: 2025-11-29 07:59:33.662111582 +0000 UTC m=+0.043384860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 07:59:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37b70af039cd844f9f2be3e7e9a5c33331bbac8596eec81c45d9a5b8fb7c528/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37b70af039cd844f9f2be3e7e9a5c33331bbac8596eec81c45d9a5b8fb7c528/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37b70af039cd844f9f2be3e7e9a5c33331bbac8596eec81c45d9a5b8fb7c528/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37b70af039cd844f9f2be3e7e9a5c33331bbac8596eec81c45d9a5b8fb7c528/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.763 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcee0e3-96d9-4028-a706-f3cf11eadbd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 NetworkManager[48962]: <info>  [1764403173.7768] manager: (tapbedd0aba-50): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.776 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[11e9359b-4e33-442d-9b0e-951f36cae383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.815 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcf1e31-8172-4905-8d7e-4fa4c210abe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.819 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6f1832-2b16-4381-b710-52aa156d94e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 podman[285937]: 2025-11-29 07:59:33.822490573 +0000 UTC m=+0.203763841 container init c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 07:59:33 compute-0 podman[285937]: 2025-11-29 07:59:33.834563075 +0000 UTC m=+0.215836343 container start c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:33 compute-0 NetworkManager[48962]: <info>  [1764403173.8404] device (tapbedd0aba-50): carrier: link connected
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.846 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[cb94fde6-af64-4fc0-b946-00d325ecbacc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.861 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5c4af0-fd22-45bb-a6aa-93b8c5aaa8f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbedd0aba-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:14:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548890, 'reachable_time': 18474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286001, 'error': None, 'target': 'ovnmeta-bedd0aba-5435-430a-9787-eb355b452278', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 podman[285937]: 2025-11-29 07:59:33.867890255 +0000 UTC m=+0.249163523 container attach c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.876 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[98443957-4258-43cf-93ca-38f5b5823b84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:1454'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548890, 'tstamp': 548890}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286002, 'error': None, 'target': 'ovnmeta-bedd0aba-5435-430a-9787-eb355b452278', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.894 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[98731eac-5e39-45f9-b88f-4b10fcd91b81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbedd0aba-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:14:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548890, 'reachable_time': 18474, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286003, 'error': None, 'target': 'ovnmeta-bedd0aba-5435-430a-9787-eb355b452278', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.921 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7c59e63d-dd1b-492e-9379-f3a8e4851c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.963 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e8c067-0ced-465c-a59a-0644f6821bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.965 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbedd0aba-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.965 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.965 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbedd0aba-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.967 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 NetworkManager[48962]: <info>  [1764403173.9678] manager: (tapbedd0aba-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 07:59:33 compute-0 kernel: tapbedd0aba-50: entered promiscuous mode
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.969 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.970 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbedd0aba-50, col_values=(('external_ids', {'iface-id': '59b11f2c-3d47-4f2a-91c0-29f381063782'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.975 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 ovn_controller[153383]: 2025-11-29T07:59:33Z|00149|binding|INFO|Releasing lport 59b11f2c-3d47-4f2a-91c0-29f381063782 from this chassis (sb_readonly=0)
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.976 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.977 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bedd0aba-5435-430a-9787-eb355b452278.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bedd0aba-5435-430a-9787-eb355b452278.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.978 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[523f9765-64c7-415f-aef6-0c68df13b7b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.979 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: global
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-bedd0aba-5435-430a-9787-eb355b452278
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/bedd0aba-5435-430a-9787-eb355b452278.pid.haproxy
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID bedd0aba-5435-430a-9787-eb355b452278
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 07:59:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:33.979 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bedd0aba-5435-430a-9787-eb355b452278', 'env', 'PROCESS_TAG=haproxy-bedd0aba-5435-430a-9787-eb355b452278', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bedd0aba-5435-430a-9787-eb355b452278.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 07:59:33 compute-0 nova_compute[256729]: 2025-11-29 07:59:33.990 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.089 256736 DEBUG nova.compute.manager [req-b9cc4901-9df2-4ecb-b5fa-71d9f8e83569 req-0a70a0ba-f2fc-4823-b61c-6c02fa38e7a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.090 256736 DEBUG oslo_concurrency.lockutils [req-b9cc4901-9df2-4ecb-b5fa-71d9f8e83569 req-0a70a0ba-f2fc-4823-b61c-6c02fa38e7a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.090 256736 DEBUG oslo_concurrency.lockutils [req-b9cc4901-9df2-4ecb-b5fa-71d9f8e83569 req-0a70a0ba-f2fc-4823-b61c-6c02fa38e7a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.090 256736 DEBUG oslo_concurrency.lockutils [req-b9cc4901-9df2-4ecb-b5fa-71d9f8e83569 req-0a70a0ba-f2fc-4823-b61c-6c02fa38e7a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.090 256736 DEBUG nova.compute.manager [req-b9cc4901-9df2-4ecb-b5fa-71d9f8e83569 req-0a70a0ba-f2fc-4823-b61c-6c02fa38e7a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Processing event network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 07:59:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 29 07:59:34 compute-0 ceph-mon[75050]: pgmap v1695: 305 pgs: 305 active+clean; 129 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 MiB/s wr, 68 op/s
Nov 29 07:59:34 compute-0 ceph-mon[75050]: pgmap v1696: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.5 MiB/s wr, 69 op/s
Nov 29 07:59:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1698056634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 07:59:34 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 29 07:59:34 compute-0 podman[286034]: 2025-11-29 07:59:34.388881203 +0000 UTC m=+0.051165507 container create fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 07:59:34 compute-0 systemd[1]: Started libpod-conmon-fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45.scope.
Nov 29 07:59:34 compute-0 podman[286034]: 2025-11-29 07:59:34.363795933 +0000 UTC m=+0.026080257 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 07:59:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 07:59:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c46c7b2d2c249d8233dd735a15d9723e365a5bf4b6e8b949ca5ed5d914853e27/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 07:59:34 compute-0 podman[286034]: 2025-11-29 07:59:34.482988255 +0000 UTC m=+0.145272609 container init fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 07:59:34 compute-0 podman[286034]: 2025-11-29 07:59:34.488909693 +0000 UTC m=+0.151194007 container start fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 07:59:34 compute-0 neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278[286077]: [NOTICE]   (286094) : New worker (286096) forked
Nov 29 07:59:34 compute-0 neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278[286077]: [NOTICE]   (286094) : Loading success.
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.557 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403174.5565083, b7d73f17-a739-4ace-8e3a-00050fcea21c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.558 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] VM Started (Lifecycle Event)
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.562 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.567 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.572 256736 INFO nova.virt.libvirt.driver [-] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Instance spawned successfully.
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.573 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.585 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.595 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.600 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.601 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.601 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.602 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.603 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.603 256736 DEBUG nova.virt.libvirt.driver [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.637 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.637 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403174.5567038, b7d73f17-a739-4ace-8e3a-00050fcea21c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.638 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] VM Paused (Lifecycle Event)
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.661 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.663 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403174.5657988, b7d73f17-a739-4ace-8e3a-00050fcea21c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.664 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] VM Resumed (Lifecycle Event)
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.682 256736 INFO nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Took 12.38 seconds to spawn the instance on the hypervisor.
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.682 256736 DEBUG nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.689 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.691 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.692 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.720 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.758 256736 INFO nova.compute.manager [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Took 13.98 seconds to build instance.
Nov 29 07:59:34 compute-0 nova_compute[256729]: 2025-11-29 07:59:34.780 256736 DEBUG oslo_concurrency.lockutils [None req-92d1bf69-628a-4414-933c-82e738762c9d 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:34 compute-0 compassionate_payne[285968]: {
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "osd_id": 2,
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "type": "bluestore"
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:     },
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "osd_id": 1,
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "type": "bluestore"
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:     },
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "osd_id": 0,
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:         "type": "bluestore"
Nov 29 07:59:34 compute-0 compassionate_payne[285968]:     }
Nov 29 07:59:34 compute-0 compassionate_payne[285968]: }
Nov 29 07:59:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.9 MiB/s wr, 47 op/s
Nov 29 07:59:34 compute-0 systemd[1]: libpod-c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed.scope: Deactivated successfully.
Nov 29 07:59:34 compute-0 systemd[1]: libpod-c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed.scope: Consumed 1.063s CPU time.
Nov 29 07:59:34 compute-0 conmon[285968]: conmon c4e73a21883a15595c92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed.scope/container/memory.events
Nov 29 07:59:34 compute-0 podman[285937]: 2025-11-29 07:59:34.921996364 +0000 UTC m=+1.303269632 container died c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 07:59:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d37b70af039cd844f9f2be3e7e9a5c33331bbac8596eec81c45d9a5b8fb7c528-merged.mount: Deactivated successfully.
Nov 29 07:59:34 compute-0 podman[285937]: 2025-11-29 07:59:34.983887087 +0000 UTC m=+1.365160355 container remove c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:34 compute-0 systemd[1]: libpod-conmon-c4e73a21883a15595c92835f3573dbd7c37f5d252d86046ec39a1a4521213bed.scope: Deactivated successfully.
Nov 29 07:59:35 compute-0 sudo[285763]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 07:59:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 07:59:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 70cd249c-ce55-4eca-a18c-8360d2192d33 does not exist
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1d78d14c-0533-4d83-9af2-eaa9c3e2435d does not exist
Nov 29 07:59:35 compute-0 sudo[286144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 07:59:35 compute-0 sudo[286144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:35 compute-0 sudo[286144]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:35 compute-0 sudo[286169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 07:59:35 compute-0 sudo[286169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 07:59:35 compute-0 sudo[286169]: pam_unix(sudo:session): session closed for user root
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 07:59:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 07:59:36 compute-0 nova_compute[256729]: 2025-11-29 07:59:36.216 256736 DEBUG nova.compute.manager [req-c3603d55-5ec0-4ad0-b199-03d4ce4faeef req-bc9ea537-47d0-43f3-8b8f-837e39d02006 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:36 compute-0 nova_compute[256729]: 2025-11-29 07:59:36.217 256736 DEBUG oslo_concurrency.lockutils [req-c3603d55-5ec0-4ad0-b199-03d4ce4faeef req-bc9ea537-47d0-43f3-8b8f-837e39d02006 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:36 compute-0 nova_compute[256729]: 2025-11-29 07:59:36.218 256736 DEBUG oslo_concurrency.lockutils [req-c3603d55-5ec0-4ad0-b199-03d4ce4faeef req-bc9ea537-47d0-43f3-8b8f-837e39d02006 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:36 compute-0 nova_compute[256729]: 2025-11-29 07:59:36.219 256736 DEBUG oslo_concurrency.lockutils [req-c3603d55-5ec0-4ad0-b199-03d4ce4faeef req-bc9ea537-47d0-43f3-8b8f-837e39d02006 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:36 compute-0 nova_compute[256729]: 2025-11-29 07:59:36.219 256736 DEBUG nova.compute.manager [req-c3603d55-5ec0-4ad0-b199-03d4ce4faeef req-bc9ea537-47d0-43f3-8b8f-837e39d02006 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] No waiting events found dispatching network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 07:59:36 compute-0 nova_compute[256729]: 2025-11-29 07:59:36.220 256736 WARNING nova.compute.manager [req-c3603d55-5ec0-4ad0-b199-03d4ce4faeef req-bc9ea537-47d0-43f3-8b8f-837e39d02006 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received unexpected event network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 for instance with vm_state active and task_state None.
Nov 29 07:59:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 726 KiB/s wr, 108 op/s
Nov 29 07:59:37 compute-0 sshd-session[285879]: Connection closed by authenticating user root 143.14.121.41 port 34566 [preauth]
Nov 29 07:59:37 compute-0 nova_compute[256729]: 2025-11-29 07:59:37.260 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 29 07:59:37 compute-0 ceph-mon[75050]: osdmap e283: 3 total, 3 up, 3 in
Nov 29 07:59:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 07:59:38 compute-0 NetworkManager[48962]: <info>  [1764403178.6671] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 07:59:38 compute-0 NetworkManager[48962]: <info>  [1764403178.6686] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.670 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.787 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:38 compute-0 ovn_controller[153383]: 2025-11-29T07:59:38Z|00150|binding|INFO|Releasing lport 59b11f2c-3d47-4f2a-91c0-29f381063782 from this chassis (sb_readonly=0)
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.801 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.909 256736 DEBUG nova.compute.manager [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-changed-b47fb61f-f97f-4119-b96f-3ba939ef6867 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.909 256736 DEBUG nova.compute.manager [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Refreshing instance network info cache due to event network-changed-b47fb61f-f97f-4119-b96f-3ba939ef6867. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.909 256736 DEBUG oslo_concurrency.lockutils [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.909 256736 DEBUG oslo_concurrency.lockutils [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 07:59:38 compute-0 nova_compute[256729]: 2025-11-29 07:59:38.910 256736 DEBUG nova.network.neutron [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Refreshing network info cache for port b47fb61f-f97f-4119-b96f-3ba939ef6867 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 07:59:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 586 KiB/s wr, 137 op/s
Nov 29 07:59:39 compute-0 nova_compute[256729]: 2025-11-29 07:59:39.693 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:40 compute-0 nova_compute[256729]: 2025-11-29 07:59:40.291 256736 DEBUG nova.network.neutron [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updated VIF entry in instance network info cache for port b47fb61f-f97f-4119-b96f-3ba939ef6867. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 07:59:40 compute-0 nova_compute[256729]: 2025-11-29 07:59:40.292 256736 DEBUG nova.network.neutron [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updating instance_info_cache with network_info: [{"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 07:59:40 compute-0 nova_compute[256729]: 2025-11-29 07:59:40.321 256736 DEBUG oslo_concurrency.lockutils [req-7c5ebd98-43d0-4f22-b04b-d1526428853f req-826d9450-7354-4076-b297-ea176a3c8746 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 07:59:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 07:59:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 29 07:59:40 compute-0 ceph-mon[75050]: pgmap v1698: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.9 MiB/s wr, 47 op/s
Nov 29 07:59:40 compute-0 ceph-mon[75050]: pgmap v1699: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 726 KiB/s wr, 108 op/s
Nov 29 07:59:40 compute-0 ceph-mon[75050]: pgmap v1700: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 586 KiB/s wr, 137 op/s
Nov 29 07:59:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 149 op/s
Nov 29 07:59:42 compute-0 sshd-session[286194]: Connection closed by authenticating user root 143.14.121.41 port 34568 [preauth]
Nov 29 07:59:42 compute-0 nova_compute[256729]: 2025-11-29 07:59:42.263 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 133 op/s
Nov 29 07:59:42 compute-0 ceph-mon[75050]: osdmap e284: 3 total, 3 up, 3 in
Nov 29 07:59:42 compute-0 ceph-mon[75050]: pgmap v1702: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 149 op/s
Nov 29 07:59:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 29 07:59:44 compute-0 nova_compute[256729]: 2025-11-29 07:59:44.696 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 121 op/s
Nov 29 07:59:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2404816174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 07:59:46 compute-0 ceph-mon[75050]: pgmap v1703: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 133 op/s
Nov 29 07:59:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 29 07:59:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1023 B/s wr, 28 op/s
Nov 29 07:59:47 compute-0 nova_compute[256729]: 2025-11-29 07:59:47.267 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:48 compute-0 sshd-session[286198]: Connection closed by authenticating user root 143.14.121.41 port 50084 [preauth]
Nov 29 07:59:48 compute-0 ceph-mon[75050]: pgmap v1704: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 121 op/s
Nov 29 07:59:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2404816174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:48 compute-0 ceph-mon[75050]: osdmap e285: 3 total, 3 up, 3 in
Nov 29 07:59:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 29 07:59:48 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 07:59:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 29 07:59:49 compute-0 ceph-mon[75050]: pgmap v1706: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1023 B/s wr, 28 op/s
Nov 29 07:59:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 07:59:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 29 07:59:49 compute-0 nova_compute[256729]: 2025-11-29 07:59:49.750 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:50 compute-0 ovn_controller[153383]: 2025-11-29T07:59:50Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:99:e9 10.100.0.8
Nov 29 07:59:50 compute-0 ovn_controller[153383]: 2025-11-29T07:59:50Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:99:e9 10.100.0.8
Nov 29 07:59:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 29 07:59:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 07:59:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Nov 29 07:59:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 29 07:59:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 29 07:59:51 compute-0 ceph-mon[75050]: pgmap v1707: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Nov 29 07:59:51 compute-0 ceph-mon[75050]: osdmap e286: 3 total, 3 up, 3 in
Nov 29 07:59:51 compute-0 sshd-session[286201]: Connection closed by authenticating user root 143.14.121.41 port 50096 [preauth]
Nov 29 07:59:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 07:59:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 29 07:59:52 compute-0 nova_compute[256729]: 2025-11-29 07:59:52.269 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 07:59:52 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2146514161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 07:59:52 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2146514161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 153 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 2.8 MiB/s wr, 91 op/s
Nov 29 07:59:53 compute-0 ceph-mon[75050]: pgmap v1709: 305 pgs: 305 active+clean; 134 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Nov 29 07:59:53 compute-0 ceph-mon[75050]: osdmap e287: 3 total, 3 up, 3 in
Nov 29 07:59:53 compute-0 ceph-mon[75050]: osdmap e288: 3 total, 3 up, 3 in
Nov 29 07:59:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2146514161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 07:59:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2146514161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 07:59:54 compute-0 nova_compute[256729]: 2025-11-29 07:59:54.182 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:54 compute-0 nova_compute[256729]: 2025-11-29 07:59:54.753 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 160 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 450 KiB/s rd, 4.1 MiB/s wr, 132 op/s
Nov 29 07:59:54 compute-0 ceph-mon[75050]: pgmap v1712: 305 pgs: 305 active+clean; 153 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 2.8 MiB/s wr, 91 op/s
Nov 29 07:59:55 compute-0 nova_compute[256729]: 2025-11-29 07:59:55.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:55 compute-0 nova_compute[256729]: 2025-11-29 07:59:55.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:55 compute-0 sshd-session[286203]: Connection closed by authenticating user root 143.14.121.41 port 59624 [preauth]
Nov 29 07:59:56 compute-0 ceph-mon[75050]: pgmap v1713: 305 pgs: 305 active+clean; 160 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 450 KiB/s rd, 4.1 MiB/s wr, 132 op/s
Nov 29 07:59:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 07:59:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 29 07:59:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 07:59:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 359 KiB/s rd, 3.3 MiB/s wr, 114 op/s
Nov 29 07:59:56 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 29 07:59:57 compute-0 nova_compute[256729]: 2025-11-29 07:59:57.272 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:57 compute-0 podman[286210]: 2025-11-29 07:59:57.732766327 +0000 UTC m=+0.084962349 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 07:59:57 compute-0 podman[286209]: 2025-11-29 07:59:57.751505967 +0000 UTC m=+0.102793555 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 07:59:57 compute-0 podman[286208]: 2025-11-29 07:59:57.806157346 +0000 UTC m=+0.158269756 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 07:59:57 compute-0 ceph-mon[75050]: pgmap v1714: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 359 KiB/s rd, 3.3 MiB/s wr, 114 op/s
Nov 29 07:59:57 compute-0 ceph-mon[75050]: osdmap e289: 3 total, 3 up, 3 in
Nov 29 07:59:58 compute-0 sshd-session[286206]: Invalid user plex from 143.14.121.41 port 59640
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.177 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.178 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.178 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.179 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.179 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:58 compute-0 sshd-session[286206]: Connection closed by invalid user plex 143.14.121.41 port 59640 [preauth]
Nov 29 07:59:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145625136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.668 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.739 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.739 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.897 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.898 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4307MB free_disk=59.943199157714844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.899 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:58 compute-0 nova_compute[256729]: 2025-11-29 07:59:58.899 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 162 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 3.4 MiB/s wr, 121 op/s
Nov 29 07:59:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 07:59:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3909736800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.002 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance b7d73f17-a739-4ace-8e3a-00050fcea21c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.003 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.003 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 07:59:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/145625136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3909736800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.309 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.756 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 07:59:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 07:59:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737433195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 07:59:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:59.778 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 07:59:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:59.779 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 07:59:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 07:59:59.780 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.782 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.789 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.808 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
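The inventory dict above is what the scheduler actually sizes against: placement exposes capacity as (total - reserved) * allocation_ratio per resource class, so this host offers 32 schedulable VCPUs, 7168 MB of RAM, and about 52 GB of disk. A quick check of that arithmetic:

    # Placement capacity per resource class: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 52.2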
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.831 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 07:59:59 compute-0 nova_compute[256729]: 2025-11-29 07:59:59.832 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 29 08:00:00 compute-0 ceph-mon[75050]: pgmap v1716: 305 pgs: 305 active+clean; 162 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 3.4 MiB/s wr, 121 op/s
Nov 29 08:00:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/737433195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 08:00:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 29 08:00:00 compute-0 nova_compute[256729]: 2025-11-29 08:00:00.833 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:00 compute-0 nova_compute[256729]: 2025-11-29 08:00:00.833 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:00:00 compute-0 nova_compute[256729]: 2025-11-29 08:00:00.833 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:00:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 162 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 1.0 MiB/s wr, 47 op/s
Nov 29 08:00:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 29 08:00:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 08:00:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 29 08:00:01 compute-0 ceph-mon[75050]: osdmap e290: 3 total, 3 up, 3 in
Nov 29 08:00:01 compute-0 nova_compute[256729]: 2025-11-29 08:00:01.452 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:00:01 compute-0 nova_compute[256729]: 2025-11-29 08:00:01.453 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:00:01 compute-0 nova_compute[256729]: 2025-11-29 08:00:01.453 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:00:01 compute-0 nova_compute[256729]: 2025-11-29 08:00:01.453 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7d73f17-a739-4ace-8e3a-00050fcea21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 29 08:00:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 08:00:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 29 08:00:02 compute-0 ceph-mon[75050]: pgmap v1718: 305 pgs: 305 active+clean; 162 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 1.0 MiB/s wr, 47 op/s
Nov 29 08:00:02 compute-0 ceph-mon[75050]: osdmap e291: 3 total, 3 up, 3 in
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.276 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:02 compute-0 sshd-session[286293]: Invalid user guest from 143.14.121.41 port 59656
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.903 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updating instance_info_cache with network_info: [{"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
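The network_info payload above is ordinary JSON once lifted out of the log line, so the instance's addressing can be read straight from it. A sketch walking the structure logged for port b47fb61f (here `network_info` is assumed to already hold the parsed list):

    import json

    # network_info = json.loads(blob)   # the list logged above
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floating)
    # b47fb61f-f97f-4119-b96f-3ba939ef6867 10.100.0.8 -> ['192.168.122.172']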
Nov 29 08:00:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 121 KiB/s wr, 61 op/s
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.927 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-b7d73f17-a739-4ace-8e3a-00050fcea21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.927 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.928 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.929 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.929 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:02 compute-0 nova_compute[256729]: 2025-11-29 08:00:02.929 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:00:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 29 08:00:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 08:00:03 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 29 08:00:03 compute-0 ceph-mon[75050]: osdmap e292: 3 total, 3 up, 3 in
Nov 29 08:00:03 compute-0 sshd-session[286293]: Connection closed by invalid user guest 143.14.121.41 port 59656 [preauth]
Nov 29 08:00:04 compute-0 ceph-mon[75050]: pgmap v1721: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 121 KiB/s wr, 61 op/s
Nov 29 08:00:04 compute-0 ceph-mon[75050]: osdmap e293: 3 total, 3 up, 3 in
Nov 29 08:00:04 compute-0 nova_compute[256729]: 2025-11-29 08:00:04.758 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 139 KiB/s wr, 114 op/s
Nov 29 08:00:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 29 08:00:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 08:00:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/743157702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/743157702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:00:05
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 29 08:00:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:00:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 29 08:00:06 compute-0 nova_compute[256729]: 2025-11-29 08:00:06.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 08:00:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 29 08:00:06 compute-0 ceph-mon[75050]: pgmap v1723: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 139 KiB/s wr, 114 op/s
Nov 29 08:00:06 compute-0 ceph-mon[75050]: osdmap e294: 3 total, 3 up, 3 in
Nov 29 08:00:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/743157702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/743157702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:06 compute-0 sshd-session[286317]: Invalid user dev from 143.14.121.41 port 43984
Nov 29 08:00:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 33 KiB/s wr, 145 op/s
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:00:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:00:07 compute-0 sshd-session[286317]: Connection closed by invalid user dev 143.14.121.41 port 43984 [preauth]
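Interleaved with the control-plane chatter, sshd is rejecting a slow dictionary scan from 143.14.121.41 (user guest at 08:00:02, dev at 08:00:06, with another probe following below, each closing preauth). A small sketch for tallying such probes from a saved copy of this journal; the filename and regex are assumptions matching the sshd-session format above:

    import re
    from collections import Counter

    pat = re.compile(r"Invalid user (\S+) from (\S+) port")
    hits = Counter()
    with open("compute-0.log") as fh:          # hypothetical journal export
        for line in fh:
            m = pat.search(line)
            if m:
                hits[(m.group(2), m.group(1))] += 1   # (source IP, username)
    for (src, user), count in hits.most_common():
        print(src, user, count)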
Nov 29 08:00:07 compute-0 ceph-mon[75050]: osdmap e295: 3 total, 3 up, 3 in
Nov 29 08:00:07 compute-0 nova_compute[256729]: 2025-11-29 08:00:07.280 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3072986479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3072986479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 29 08:00:08 compute-0 ceph-mon[75050]: pgmap v1726: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 33 KiB/s wr, 145 op/s
Nov 29 08:00:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3072986479' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3072986479' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 08:00:08 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 29 08:00:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1994278995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1994278995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:08 compute-0 ovn_controller[153383]: 2025-11-29T08:00:08Z|00151|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 08:00:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1279320355' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1279320355' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 30 KiB/s wr, 166 op/s
Nov 29 08:00:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1826450410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1826450410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 29 08:00:09 compute-0 nova_compute[256729]: 2025-11-29 08:00:09.760 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 08:00:10 compute-0 ceph-mon[75050]: osdmap e296: 3 total, 3 up, 3 in
Nov 29 08:00:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1994278995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1994278995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1279320355' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1279320355' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1826450410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1826450410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 29 08:00:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/721833791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/721833791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 29 KiB/s wr, 130 op/s
Nov 29 08:00:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 29 08:00:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 08:00:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 29 08:00:11 compute-0 ceph-mon[75050]: pgmap v1728: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 30 KiB/s wr, 166 op/s
Nov 29 08:00:11 compute-0 ceph-mon[75050]: osdmap e297: 3 total, 3 up, 3 in
Nov 29 08:00:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/721833791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/721833791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:11 compute-0 nova_compute[256729]: 2025-11-29 08:00:11.900 256736 DEBUG oslo_concurrency.lockutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:11 compute-0 nova_compute[256729]: 2025-11-29 08:00:11.900 256736 DEBUG oslo_concurrency.lockutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 29 08:00:11 compute-0 nova_compute[256729]: 2025-11-29 08:00:11.918 256736 DEBUG nova.objects.instance [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lazy-loading 'flavor' on Instance uuid b7d73f17-a739-4ace-8e3a-00050fcea21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 08:00:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 29 08:00:11 compute-0 nova_compute[256729]: 2025-11-29 08:00:11.963 256736 DEBUG oslo_concurrency.lockutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:12 compute-0 ceph-mon[75050]: pgmap v1730: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 29 KiB/s wr, 130 op/s
Nov 29 08:00:12 compute-0 ceph-mon[75050]: osdmap e298: 3 total, 3 up, 3 in
Nov 29 08:00:12 compute-0 ceph-mon[75050]: osdmap e299: 3 total, 3 up, 3 in
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.166 256736 DEBUG oslo_concurrency.lockutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.167 256736 DEBUG oslo_concurrency.lockutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.167 256736 INFO nova.compute.manager [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Attaching volume 920fb6db-90c7-4d52-a55d-fc5cbefa1dde to /dev/vdb
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.317 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.340 256736 DEBUG os_brick.utils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.342 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.361 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.362 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbac885-1806-4df7-a9cf-7220d6f73b06]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.363 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.373 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.373 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[31001bf0-1e26-4e87-b5c7-6d15a77d4c91]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.375 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.384 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.385 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[b5255e5e-99f6-4279-82d5-aa0fddc73474]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.386 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[354a7e66-5fb1-4e17-931e-12e9a6aa0a8d]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.386 256736 DEBUG oslo_concurrency.processutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.410 256736 DEBUG oslo_concurrency.processutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.413 256736 DEBUG os_brick.initiator.connectors.lightos [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.413 256736 DEBUG os_brick.initiator.connectors.lightos [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.413 256736 DEBUG os_brick.initiator.connectors.lightos [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.414 256736 DEBUG os_brick.utils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
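The lines from 08:00:12.340 to 08:00:12.414 are os-brick's get_connector_properties pass: probe multipathd, read the iSCSI initiator name, resolve the root filesystem source with findmnt, check nvme-cli, and shrug off the absent LightOS discovery service (ECONNREFUSED) before returning the property dict. A simplified re-creation of those probes (not os-brick's actual implementation):

    import subprocess

    def probe(cmd):
        try:
            return subprocess.check_output(cmd, text=True).strip()
        except (OSError, subprocess.CalledProcessError):
            return None   # optional backends (e.g. LightOS above) may be absent

    props = {
        "multipath": probe(["multipathd", "show", "status"]) is not None,
        "initiator": probe(["cat", "/etc/iscsi/initiatorname.iscsi"]),
        "root_source": probe(["findmnt", "-v", "/", "-n", "-o", "SOURCE"]),
        "nvme_cli": probe(["nvme", "version"]) is not None,
    }
    print(props)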
Nov 29 08:00:12 compute-0 nova_compute[256729]: 2025-11-29 08:00:12.414 256736 DEBUG nova.virt.block_device [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updating existing volume attachment record: 16a5de0f-d906-426f-a30a-897cb3ec616f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:00:12 compute-0 sshd-session[286319]: Invalid user ae from 143.14.121.41 port 43986
Nov 29 08:00:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 29 08:00:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 9.7 KiB/s wr, 199 op/s
Nov 29 08:00:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 08:00:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 29 08:00:12 compute-0 sshd-session[286319]: Connection closed by invalid user ae 143.14.121.41 port 43986 [preauth]
Nov 29 08:00:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:00:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/616462359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/790975034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/790975034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.283 256736 DEBUG os_brick.encryptors [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Using volume encryption metadata '{'encryption_key_id': '7c280c45-acfb-49ad-8e02-fb24434c8d22', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-920fb6db-90c7-4d52-a55d-fc5cbefa1dde', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '920fb6db-90c7-4d52-a55d-fc5cbefa1dde', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b7d73f17-a739-4ace-8e3a-00050fcea21c', 'attached_at': '', 'detached_at': '', 'volume_id': '920fb6db-90c7-4d52-a55d-fc5cbefa1dde', 'serial': '920fb6db-90c7-4d52-a55d-fc5cbefa1dde'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.293 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.311 256736 DEBUG barbicanclient.v1.secrets [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.312 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.337 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.338 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.382 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.382 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/198594817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/198594817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.416 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.417 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.454 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.455 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.485 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.486 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.534 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.534 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.573 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.574 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.608 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.609 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.630 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.630 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.667 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.667 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.686 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.686 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.710 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.712 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.738 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.738 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.788 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.789 256736 INFO barbicanclient.base [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Calculated Secrets uuid ref: secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.810 256736 DEBUG barbicanclient.client [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
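That run of HTTP 200s is nova pulling the LUKS passphrase secret 7c280c45 from Barbican, one GET per key-manager lookup in the attach path. The equivalent direct client usage, assuming `sess` is an already-authenticated keystoneauth1 session:

    from barbicanclient import client as barbican_client

    barbican = barbican_client.Client(session=sess)   # sess: keystoneauth1 session (assumed)
    ref = ("https://barbican-internal.openstack.svc:9311/"
           "secrets/7c280c45-acfb-49ad-8e02-fb24434c8d22")
    secret = barbican.secrets.get(ref)
    passphrase = secret.payload   # payload is fetched lazily on first access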
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.811 256736 DEBUG nova.virt.libvirt.host [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:00:13 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:00:13 compute-0 nova_compute[256729]:     <volume>920fb6db-90c7-4d52-a55d-fc5cbefa1dde</volume>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:00:13 compute-0 nova_compute[256729]: </secret>
Nov 29 08:00:13 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
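The Secret XML just logged is nova registering a libvirt volume secret to carry the key material for the attach. Through libvirt-python the same registration looks roughly like this (connection URI and secret value are placeholders):

    import libvirt

    SECRET_XML = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>920fb6db-90c7-4d52-a55d-fc5cbefa1dde</volume>
      </usage>
    </secret>"""

    conn = libvirt.open("qemu:///system")
    secret = conn.secretDefineXML(SECRET_XML)
    secret.setValue(b"<key material from Barbican>", 0)   # placeholder value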
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.833 256736 DEBUG nova.objects.instance [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lazy-loading 'flavor' on Instance uuid b7d73f17-a739-4ace-8e3a-00050fcea21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.859 256736 DEBUG nova.virt.libvirt.driver [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Attempting to attach volume 920fb6db-90c7-4d52-a55d-fc5cbefa1dde with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:00:13 compute-0 nova_compute[256729]: 2025-11-29 08:00:13.861 256736 DEBUG nova.virt.libvirt.guest [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:00:13 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-920fb6db-90c7-4d52-a55d-fc5cbefa1dde">
Nov 29 08:00:13 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   </source>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 08:00:13 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   </auth>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   <serial>920fb6db-90c7-4d52-a55d-fc5cbefa1dde</serial>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 08:00:13 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="ce84a6a0-fd73-43ce-b89c-430482719434"/>
Nov 29 08:00:13 compute-0 nova_compute[256729]:   </encryption>
Nov 29 08:00:13 compute-0 nova_compute[256729]: </disk>
Nov 29 08:00:13 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
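The attach XML above plumbs the RBD volume in with qemu-native LUKS decryption (the <encryption> element points at the passphrase secret), and the warning at 08:00:13.859 already flagged that discard="unmap" will be ineffective on this virtio target. Hot-plugging such XML into a running guest via libvirt-python looks like:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b7d73f17-a739-4ace-8e3a-00050fcea21c")
    with open("disk.xml") as fh:               # the <disk> element logged above
        dom.attachDeviceFlags(fh.read(), libvirt.VIR_DOMAIN_AFFECT_LIVE)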
Nov 29 08:00:14 compute-0 ceph-mon[75050]: pgmap v1733: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 9.7 KiB/s wr, 199 op/s
Nov 29 08:00:14 compute-0 ceph-mon[75050]: osdmap e300: 3 total, 3 up, 3 in
Nov 29 08:00:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/616462359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/790975034' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/790975034' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/198594817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/198594817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:14 compute-0 nova_compute[256729]: 2025-11-29 08:00:14.762 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 9.0 KiB/s wr, 219 op/s
Nov 29 08:00:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 29 08:00:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:00:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595274142085043 of space, bias 1.0, pg target 0.22785822426255128 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003513386607084717 of space, bias 1.0, pg target 0.10540159821254151 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:00:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
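
The pass above is one full run of the pg_autoscaler: each effective_target_ratio line reports the pool's target-ratio inputs (all zero here) plus the root's capacity in bytes (64411926528, i.e. the cluster's 60 GiB), and each Pool line reports the fraction of that capacity used, the pool's bias, and a fractional pg target that is then quantized. The target is the pool's share of the cluster-wide PG budget times its bias, rounded to a power of two and floored at the pool's pg_num_min. A minimal sketch of that arithmetic, under assumptions consistent with the output (3 OSDs, the default mon_target_pg_per_osd of 100, and a pg_num_min of 32 except 16 for the CephFS metadata pool and 1 for .mgr); the helper names are illustrative, not the autoscaler's own code:

    import math

    def nearest_power_of_two(x: float) -> int:
        # Round a positive value to the nearest power of two, minimum 1.
        if x <= 1:
            return 1
        lo = 2 ** math.floor(math.log2(x))
        hi = lo * 2
        return hi if (hi - x) < (x - lo) else lo

    def pg_target(usage_ratio: float, bias: float, pg_num_min: int = 32,
                  num_osds: int = 3, target_pg_per_osd: int = 100) -> int:
        # Raw target: the pool's share of the cluster-wide PG budget,
        # scaled by its bias; this is the fractional "pg target" logged.
        raw = usage_ratio * num_osds * target_pg_per_osd * bias
        return max(pg_num_min, nearest_power_of_two(raw))

    # 'vms': raw 0.2278... (matches the logged pg target), floored to 32
    assert pg_target(0.0007595274142085043, 1.0) == 32
    # 'cephfs.cephfs.meta': bias 4.0 and an assumed pg_num_min of 16
    assert pg_target(5.087256625643029e-07, 4.0, pg_num_min=16) == 16
    # '.mgr': assumed pg_num_min of 1, hence "quantized to 1"
    assert pg_target(7.185749983720779e-06, 1.0, pg_num_min=1) == 1
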
Nov 29 08:00:16 compute-0 nova_compute[256729]: 2025-11-29 08:00:16.093 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:16.094 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:00:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:16.095 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:00:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/321316046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/321316046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:16 compute-0 ceph-mon[75050]: pgmap v1735: 305 pgs: 305 active+clean; 167 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 9.0 KiB/s wr, 219 op/s
Nov 29 08:00:16 compute-0 ceph-mon[75050]: osdmap e301: 3 total, 3 up, 3 in
Nov 29 08:00:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/321316046' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/321316046' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
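
The audited df and osd pool get-quota calls are JSON mon_commands issued by client.openstack (the Cinder/Nova Ceph user at 192.168.122.10) as it polls capacity and quota. A sketch of the same two commands through the librados Python binding; the conffile path and client name come from the log, the rest is ordinary rados usage:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()

    # {"prefix":"df","format":"json"}: cluster-wide and per-pool usage
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    print(json.loads(out)['stats']['total_avail_bytes'])

    # {"prefix":"osd pool get-quota", ...}: quota on the 'volumes' pool
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota",
                    "pool": "volumes", "format": "json"}), b'')
    print(json.loads(out))

    cluster.shutdown()
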
Nov 29 08:00:16 compute-0 nova_compute[256729]: 2025-11-29 08:00:16.574 256736 DEBUG nova.virt.libvirt.driver [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:00:16 compute-0 nova_compute[256729]: 2025-11-29 08:00:16.575 256736 DEBUG nova.virt.libvirt.driver [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:00:16 compute-0 nova_compute[256729]: 2025-11-29 08:00:16.575 256736 DEBUG nova.virt.libvirt.driver [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:00:16 compute-0 nova_compute[256729]: 2025-11-29 08:00:16.575 256736 DEBUG nova.virt.libvirt.driver [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] No VIF found with MAC fa:16:3e:e2:99:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:00:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 29 08:00:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 246 KiB/s rd, 14 KiB/s wr, 274 op/s
Nov 29 08:00:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 08:00:16 compute-0 nova_compute[256729]: 2025-11-29 08:00:16.928 256736 DEBUG oslo_concurrency.lockutils [None req-9f661b54-2eab-48c4-9798-1707293c3e25 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 29 08:00:17 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:17.097 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:17 compute-0 nova_compute[256729]: 2025-11-29 08:00:17.320 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 29 08:00:17 compute-0 ceph-mon[75050]: pgmap v1737: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 246 KiB/s rd, 14 KiB/s wr, 274 op/s
Nov 29 08:00:17 compute-0 ceph-mon[75050]: osdmap e302: 3 total, 3 up, 3 in
Nov 29 08:00:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 08:00:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 29 08:00:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617239115' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617239115' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:18 compute-0 sshd-session[286328]: Invalid user telecomadmin from 143.14.121.41 port 39760
Nov 29 08:00:18 compute-0 nova_compute[256729]: 2025-11-29 08:00:18.827 256736 DEBUG oslo_concurrency.lockutils [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:18 compute-0 nova_compute[256729]: 2025-11-29 08:00:18.827 256736 DEBUG oslo_concurrency.lockutils [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:18 compute-0 nova_compute[256729]: 2025-11-29 08:00:18.846 256736 INFO nova.compute.manager [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Detaching volume 920fb6db-90c7-4d52-a55d-fc5cbefa1dde
Nov 29 08:00:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 6.5 KiB/s wr, 123 op/s
Nov 29 08:00:18 compute-0 ceph-mon[75050]: osdmap e303: 3 total, 3 up, 3 in
Nov 29 08:00:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/617239115' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/617239115' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:18 compute-0 nova_compute[256729]: 2025-11-29 08:00:18.960 256736 INFO nova.virt.block_device [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Attempting to driver detach volume 920fb6db-90c7-4d52-a55d-fc5cbefa1dde from mountpoint /dev/vdb
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.088 256736 DEBUG os_brick.encryptors [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Using volume encryption metadata '{'encryption_key_id': '7c280c45-acfb-49ad-8e02-fb24434c8d22', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-920fb6db-90c7-4d52-a55d-fc5cbefa1dde', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '920fb6db-90c7-4d52-a55d-fc5cbefa1dde', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'b7d73f17-a739-4ace-8e3a-00050fcea21c', 'attached_at': '', 'detached_at': '', 'volume_id': '920fb6db-90c7-4d52-a55d-fc5cbefa1dde', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.100 256736 DEBUG nova.virt.libvirt.driver [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Attempting to detach device vdb from instance b7d73f17-a739-4ace-8e3a-00050fcea21c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.102 256736 DEBUG nova.virt.libvirt.guest [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-920fb6db-90c7-4d52-a55d-fc5cbefa1dde">
Nov 29 08:00:19 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   </source>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <serial>920fb6db-90c7-4d52-a55d-fc5cbefa1dde</serial>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 08:00:19 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="ce84a6a0-fd73-43ce-b89c-430482719434"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   </encryption>
Nov 29 08:00:19 compute-0 nova_compute[256729]: </disk>
Nov 29 08:00:19 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.110 256736 INFO nova.virt.libvirt.driver [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Successfully detached device vdb from instance b7d73f17-a739-4ace-8e3a-00050fcea21c from the persistent domain config.
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.110 256736 DEBUG nova.virt.libvirt.driver [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b7d73f17-a739-4ace-8e3a-00050fcea21c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.111 256736 DEBUG nova.virt.libvirt.guest [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-920fb6db-90c7-4d52-a55d-fc5cbefa1dde">
Nov 29 08:00:19 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   </source>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <serial>920fb6db-90c7-4d52-a55d-fc5cbefa1dde</serial>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 08:00:19 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="ce84a6a0-fd73-43ce-b89c-430482719434"/>
Nov 29 08:00:19 compute-0 nova_compute[256729]:   </encryption>
Nov 29 08:00:19 compute-0 nova_compute[256729]: </disk>
Nov 29 08:00:19 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:00:19 compute-0 sshd-session[286328]: Connection closed by invalid user telecomadmin 143.14.121.41 port 39760 [preauth]
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.239 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403219.2390943, b7d73f17-a739-4ace-8e3a-00050fcea21c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.241 256736 DEBUG nova.virt.libvirt.driver [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b7d73f17-a739-4ace-8e3a-00050fcea21c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.244 256736 INFO nova.virt.libvirt.driver [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Successfully detached device vdb from instance b7d73f17-a739-4ace-8e3a-00050fcea21c from the live domain config.
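
The detach above is Nova's usual two-phase libvirt flow: drop the disk from the persistent domain definition, then from the live domain, and treat the live detach as done only once libvirt emits the device-removed event for the alias (virtio-disk1), which is the DeviceRemovedEvent dispatched above. A condensed sketch with libvirt-python; the XML is abbreviated from the log, the domain name is taken from the systemd machine scope further down, and event-loop wiring is omitted:

    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-920fb6db-90c7-4d52-a55d-fc5cbefa1dde">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000000e')

    # 1) persistent config: the disk is gone from the next boot onward
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # 2) live domain: asynchronous; completion is signalled by a
    #    VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED event for virtio-disk1
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
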
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.469 256736 DEBUG nova.objects.instance [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lazy-loading 'flavor' on Instance uuid b7d73f17-a739-4ace-8e3a-00050fcea21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.520 256736 DEBUG oslo_concurrency.lockutils [None req-f022116d-2c71-424b-9782-d1992bfaff75 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:19 compute-0 nova_compute[256729]: 2025-11-29 08:00:19.765 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-0 ceph-mon[75050]: pgmap v1740: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 6.5 KiB/s wr, 123 op/s
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.600 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.600 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.601 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.602 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.603 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
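
The Acquiring/acquired/released triplets come from oslo.concurrency's lockutils at DEBUG level: Nova serializes work on an instance by taking an in-process semaphore named after the instance UUID, plus a second one suffixed -events for the external-event bookkeeping. A minimal sketch of the pattern; the function body is illustrative:

    from oslo_concurrency import lockutils

    INSTANCE = 'b7d73f17-a739-4ace-8e3a-00050fcea21c'

    @lockutils.synchronized(INSTANCE)
    def do_terminate_instance():
        # Runs with the per-instance lock held; lockutils itself emits
        # the "Acquiring lock" / "acquired" / "released" DEBUG lines.
        pass

    do_terminate_instance()
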
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.604 256736 INFO nova.compute.manager [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Terminating instance
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.605 256736 DEBUG nova.compute.manager [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:00:20 compute-0 kernel: tapb47fb61f-f9 (unregistering): left promiscuous mode
Nov 29 08:00:20 compute-0 NetworkManager[48962]: <info>  [1764403220.6629] device (tapb47fb61f-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:00:20 compute-0 ovn_controller[153383]: 2025-11-29T08:00:20Z|00152|binding|INFO|Releasing lport b47fb61f-f97f-4119-b96f-3ba939ef6867 from this chassis (sb_readonly=0)
Nov 29 08:00:20 compute-0 ovn_controller[153383]: 2025-11-29T08:00:20Z|00153|binding|INFO|Setting lport b47fb61f-f97f-4119-b96f-3ba939ef6867 down in Southbound
Nov 29 08:00:20 compute-0 ovn_controller[153383]: 2025-11-29T08:00:20Z|00154|binding|INFO|Removing iface tapb47fb61f-f9 ovn-installed in OVS
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.679 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:20.691 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:99:e9 10.100.0.8'], port_security=['fa:16:3e:e2:99:e9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b7d73f17-a739-4ace-8e3a-00050fcea21c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bedd0aba-5435-430a-9787-eb355b452278', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fcc62171a1a3439e8156931de2a25f02', 'neutron:revision_number': '4', 'neutron:security_group_ids': '881ec646-08bd-40a9-899f-8c2bf5255189', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=539ec311-4ad9-4a00-b251-38a0a82587c0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=b47fb61f-f97f-4119-b96f-3ba939ef6867) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:00:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:20.694 163655 INFO neutron.agent.ovn.metadata.agent [-] Port b47fb61f-f97f-4119-b96f-3ba939ef6867 in datapath bedd0aba-5435-430a-9787-eb355b452278 unbound from our chassis
Nov 29 08:00:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:20.696 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bedd0aba-5435-430a-9787-eb355b452278, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:00:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:20.698 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d981638c-815c-47b5-a1df-11acc0ba419f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:20.699 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bedd0aba-5435-430a-9787-eb355b452278 namespace which is not needed anymore
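
The unbind is driven by an ovsdbapp row event: the metadata agent registers a matcher on Port_Binding updates (the "Matched UPDATE: PortBindingUpdatedEvent" line above) and reacts when a port it hosts goes down. A minimal sketch of such an event class; the real agent's version carries considerably more logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Match any update to the Port_Binding table, as logged above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Called with the new row and the old column values; in the
            # log the chassis column emptied, i.e. the port left this host.
            print('lport', row.logical_port, 'up =', row.up)
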
Nov 29 08:00:20 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 29 08:00:20 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 17.388s CPU time.
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.721 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-0 systemd-machined[217781]: Machine qemu-14-instance-0000000e terminated.
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.845 256736 INFO nova.virt.libvirt.driver [-] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Instance destroyed successfully.
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.846 256736 DEBUG nova.objects.instance [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lazy-loading 'resources' on Instance uuid b7d73f17-a739-4ace-8e3a-00050fcea21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.867 256736 DEBUG nova.virt.libvirt.vif [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:59:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1381471149',display_name='tempest-TestEncryptedCinderVolumes-server-1381471149',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1381471149',id=14,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEoRpe6JudF8DL3YPpG4xQlF0vmXbah+aakKvQkXx0vN4UE4/OrFObDfMHltj6lE4DUsspUgVRZMFZxOYhHVrh4+CzV296LnoZu7BzFkKEj4ePlqyWTpPKligP/ipXKSjw==',key_name='tempest-keypair-1205044998',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:59:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fcc62171a1a3439e8156931de2a25f02',ramdisk_id='',reservation_id='r-1qif0ojv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1760439909',owner_user_name='tempest-TestEncryptedCinderVolumes-1760439909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:59:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4dd11438bdce4fc7982e86e6bc9fbf46',uuid=b7d73f17-a739-4ace-8e3a-00050fcea21c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.867 256736 DEBUG nova.network.os_vif_util [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Converting VIF {"id": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "address": "fa:16:3e:e2:99:e9", "network": {"id": "bedd0aba-5435-430a-9787-eb355b452278", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-592424582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fcc62171a1a3439e8156931de2a25f02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb47fb61f-f9", "ovs_interfaceid": "b47fb61f-f97f-4119-b96f-3ba939ef6867", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.868 256736 DEBUG nova.network.os_vif_util [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:99:e9,bridge_name='br-int',has_traffic_filtering=True,id=b47fb61f-f97f-4119-b96f-3ba939ef6867,network=Network(bedd0aba-5435-430a-9787-eb355b452278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb47fb61f-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.869 256736 DEBUG os_vif [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:99:e9,bridge_name='br-int',has_traffic_filtering=True,id=b47fb61f-f97f-4119-b96f-3ba939ef6867,network=Network(bedd0aba-5435-430a-9787-eb355b452278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb47fb61f-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.872 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.872 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb47fb61f-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.873 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.875 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.876 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:20 compute-0 nova_compute[256729]: 2025-11-29 08:00:20.879 256736 INFO os_vif [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:99:e9,bridge_name='br-int',has_traffic_filtering=True,id=b47fb61f-f97f-4119-b96f-3ba939ef6867,network=Network(bedd0aba-5435-430a-9787-eb355b452278),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb47fb61f-f9')
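
The unplug itself is a single ovsdbapp transaction against the local switch database, the DelPortCommand logged above. A sketch of the same call, assuming the default local OVSDB socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True mirrors the logged command: deleting a port that is
    # already gone is a no-op rather than an error.
    ovs.del_port('tapb47fb61f-f9', bridge='br-int',
                 if_exists=True).execute(check_error=True)
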
Nov 29 08:00:20 compute-0 neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278[286077]: [NOTICE]   (286094) : haproxy version is 2.8.14-c23fe91
Nov 29 08:00:20 compute-0 neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278[286077]: [NOTICE]   (286094) : path to executable is /usr/sbin/haproxy
Nov 29 08:00:20 compute-0 neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278[286077]: [WARNING]  (286094) : Exiting Master process...
Nov 29 08:00:20 compute-0 neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278[286077]: [ALERT]    (286094) : Current worker (286096) exited with code 143 (Terminated)
Nov 29 08:00:20 compute-0 neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278[286077]: [WARNING]  (286094) : All workers exited. Exiting... (0)
Nov 29 08:00:20 compute-0 systemd[1]: libpod-fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45.scope: Deactivated successfully.
Nov 29 08:00:20 compute-0 podman[286381]: 2025-11-29 08:00:20.909564562 +0000 UTC m=+0.067278948 container died fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:00:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 6.5 KiB/s wr, 95 op/s
Nov 29 08:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45-userdata-shm.mount: Deactivated successfully.
Nov 29 08:00:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c46c7b2d2c249d8233dd735a15d9723e365a5bf4b6e8b949ca5ed5d914853e27-merged.mount: Deactivated successfully.
Nov 29 08:00:20 compute-0 podman[286381]: 2025-11-29 08:00:20.959295549 +0000 UTC m=+0.117009965 container cleanup fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:00:20 compute-0 systemd[1]: libpod-conmon-fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45.scope: Deactivated successfully.
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.044 256736 DEBUG nova.compute.manager [req-522424ba-332f-42e4-9e0b-ee79e322b725 req-0484e972-30a2-4af9-b989-1f17df56d30a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-vif-unplugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.044 256736 DEBUG oslo_concurrency.lockutils [req-522424ba-332f-42e4-9e0b-ee79e322b725 req-0484e972-30a2-4af9-b989-1f17df56d30a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.045 256736 DEBUG oslo_concurrency.lockutils [req-522424ba-332f-42e4-9e0b-ee79e322b725 req-0484e972-30a2-4af9-b989-1f17df56d30a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.045 256736 DEBUG oslo_concurrency.lockutils [req-522424ba-332f-42e4-9e0b-ee79e322b725 req-0484e972-30a2-4af9-b989-1f17df56d30a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.045 256736 DEBUG nova.compute.manager [req-522424ba-332f-42e4-9e0b-ee79e322b725 req-0484e972-30a2-4af9-b989-1f17df56d30a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] No waiting events found dispatching network-vif-unplugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.045 256736 DEBUG nova.compute.manager [req-522424ba-332f-42e4-9e0b-ee79e322b725 req-0484e972-30a2-4af9-b989-1f17df56d30a ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-vif-unplugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:00:21 compute-0 podman[286440]: 2025-11-29 08:00:21.04584808 +0000 UTC m=+0.055087732 container remove fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.054 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a98b95-eb88-4491-bd58-aed45192fd6b]: (4, ('Sat Nov 29 08:00:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278 (fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45)\nfca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45\nSat Nov 29 08:00:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278 (fca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45)\nfca5af67992e059076de7ba23f3ccf743c7b207bef5ce3996020209a9198fb45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
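
The privsep reply above is the agent's wrapper stopping and then deleting the per-network haproxy container that proxied metadata for this datapath. Roughly the equivalent CLI sequence, sketched here with subprocess; the real agent routes this through a privileged helper script:

    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-bedd0aba-5435-430a-9787-eb355b452278'
    subprocess.run(['podman', 'stop', NAME], check=True)
    subprocess.run(['podman', 'rm', NAME], check=True)
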
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.056 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b361c725-a3a9-4ca8-9f9a-113525f9954f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.057 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbedd0aba-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.059 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:21 compute-0 kernel: tapbedd0aba-50: left promiscuous mode
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.066 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb04f69-5db2-41ff-bfa6-b71a00ed9d0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.077 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.081 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6de88556-9d10-4021-996e-3f7ff3e237e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.082 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bc7cf9-b97f-4853-966a-aa3010e54e53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.096 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[91be2da9-eeb2-45a0-94f2-ff4d034ceac6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548882, 'reachable_time': 36781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286455, 'error': None, 'target': 'ovnmeta-bedd0aba-5435-430a-9787-eb355b452278', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.099 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bedd0aba-5435-430a-9787-eb355b452278 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:00:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:21.099 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0aa919-1d98-4b56-aa1d-7d34ac84504c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:00:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dbedd0aba\x2d5435\x2d430a\x2d9787\x2deb355b452278.mount: Deactivated successfully.
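
With the last VIF gone from the datapath, the agent removes the ovnmeta- namespace itself (the remove_netns call above, after a final link dump inside the namespace). Neutron's privileged ip_lib does this via pyroute2; the equivalent one-liner:

    from pyroute2 import netns

    netns.remove('ovnmeta-bedd0aba-5435-430a-9787-eb355b452278')
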
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.281 256736 INFO nova.virt.libvirt.driver [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Deleting instance files /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c_del
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.282 256736 INFO nova.virt.libvirt.driver [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Deletion of /var/lib/nova/instances/b7d73f17-a739-4ace-8e3a-00050fcea21c_del complete
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.342 256736 INFO nova.compute.manager [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.343 256736 DEBUG oslo.service.loopingcall [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.344 256736 DEBUG nova.compute.manager [-] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:00:21 compute-0 nova_compute[256729]: 2025-11-29 08:00:21.344 256736 DEBUG nova.network.neutron [-] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:00:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.272 256736 DEBUG nova.network.neutron [-] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.290 256736 INFO nova.compute.manager [-] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Took 0.95 seconds to deallocate network for instance.
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.322 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.333 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.334 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.387 256736 DEBUG oslo_concurrency.processutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.496 256736 DEBUG nova.compute.manager [req-ae588cc8-fabe-40de-9dad-8735ecbdf6c2 req-49da8ef1-a8a5-4373-a65c-dfadec8de01f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-vif-deleted-b47fb61f-f97f-4119-b96f-3ba939ef6867 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 08:00:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 29 08:00:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:00:22 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883345397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 119 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 4.2 KiB/s wr, 93 op/s
Nov 29 08:00:22 compute-0 ceph-mon[75050]: pgmap v1741: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 6.5 KiB/s wr, 95 op/s
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.954 256736 DEBUG oslo_concurrency.processutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
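
For the resource-tracker update, nova-compute shells out to the ceph CLI (the 0.567s subprocess above) rather than holding a librados connection. A sketch of the same probe and the fields it reads; the flags are copied from the logged command line:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    # Cluster-wide raw capacity, feeding the DISK_GB inventory below.
    total_gb = df['stats']['total_bytes'] / (1024 ** 3)
    avail_gb = df['stats']['total_avail_bytes'] / (1024 ** 3)
    print(round(total_gb), round(avail_gb))
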
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.964 256736 DEBUG nova.compute.provider_tree [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:00:22 compute-0 nova_compute[256729]: 2025-11-29 08:00:22.984 256736 DEBUG nova.scheduler.client.report [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
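
The inventory dict is what this node reports to placement; the capacity the scheduler can actually allocate per resource class is (total - reserved) * allocation_ratio. Worked through for the values above:

    INVENTORY = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in INVENTORY.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
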
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.014 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.043 256736 INFO nova.scheduler.client.report [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Deleted allocations for instance b7d73f17-a739-4ace-8e3a-00050fcea21c
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.123 256736 DEBUG oslo_concurrency.lockutils [None req-6635e31f-6420-4e2f-8935-ee7c18b3c0e9 4dd11438bdce4fc7982e86e6bc9fbf46 fcc62171a1a3439e8156931de2a25f02 - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:23 compute-0 sshd-session[286352]: Connection closed by authenticating user root 143.14.121.41 port 39770 [preauth]
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.270 256736 DEBUG nova.compute.manager [req-2c3e5a50-8a2c-4ec7-8a0a-f23c854cb156 req-5f104c6c-5728-4e2a-85a6-72bb06d0dc7e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received event network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.270 256736 DEBUG oslo_concurrency.lockutils [req-2c3e5a50-8a2c-4ec7-8a0a-f23c854cb156 req-5f104c6c-5728-4e2a-85a6-72bb06d0dc7e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.271 256736 DEBUG oslo_concurrency.lockutils [req-2c3e5a50-8a2c-4ec7-8a0a-f23c854cb156 req-5f104c6c-5728-4e2a-85a6-72bb06d0dc7e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.271 256736 DEBUG oslo_concurrency.lockutils [req-2c3e5a50-8a2c-4ec7-8a0a-f23c854cb156 req-5f104c6c-5728-4e2a-85a6-72bb06d0dc7e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "b7d73f17-a739-4ace-8e3a-00050fcea21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.271 256736 DEBUG nova.compute.manager [req-2c3e5a50-8a2c-4ec7-8a0a-f23c854cb156 req-5f104c6c-5728-4e2a-85a6-72bb06d0dc7e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] No waiting events found dispatching network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:00:23 compute-0 nova_compute[256729]: 2025-11-29 08:00:23.271 256736 WARNING nova.compute.manager [req-2c3e5a50-8a2c-4ec7-8a0a-f23c854cb156 req-5f104c6c-5728-4e2a-85a6-72bb06d0dc7e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Received unexpected event network-vif-plugged-b47fb61f-f97f-4119-b96f-3ba939ef6867 for instance with vm_state deleted and task_state None.
Nov 29 08:00:24 compute-0 ceph-mon[75050]: osdmap e304: 3 total, 3 up, 3 in
Nov 29 08:00:24 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/883345397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:00:24 compute-0 ceph-mon[75050]: pgmap v1743: 305 pgs: 305 active+clean; 119 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 4.2 KiB/s wr, 93 op/s
Nov 29 08:00:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2860125408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2860125408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 5.5 KiB/s wr, 81 op/s
Nov 29 08:00:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2860125408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2860125408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368390653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368390653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:25 compute-0 nova_compute[256729]: 2025-11-29 08:00:25.874 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:26 compute-0 ceph-mon[75050]: pgmap v1744: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 5.5 KiB/s wr, 81 op/s
Nov 29 08:00:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2368390653' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2368390653' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:26 compute-0 sshd-session[286479]: Connection closed by authenticating user root 143.14.121.41 port 34324 [preauth]
Nov 29 08:00:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 6.3 KiB/s wr, 113 op/s
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2001629306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2001629306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2362625543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2362625543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:27 compute-0 nova_compute[256729]: 2025-11-29 08:00:27.323 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1778786440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1778786440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 29 08:00:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 08:00:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 29 08:00:28 compute-0 ceph-mon[75050]: pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 6.3 KiB/s wr, 113 op/s
Nov 29 08:00:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2001629306' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2001629306' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2362625543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2362625543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1778786440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1778786440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: osdmap e305: 3 total, 3 up, 3 in
Nov 29 08:00:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2142217437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2142217437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 podman[286485]: 2025-11-29 08:00:28.697216012 +0000 UTC m=+0.062158310 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:00:28 compute-0 podman[286486]: 2025-11-29 08:00:28.698706983 +0000 UTC m=+0.056761966 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 08:00:28 compute-0 podman[286484]: 2025-11-29 08:00:28.72222901 +0000 UTC m=+0.090798714 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:00:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1797347359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1797347359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 9.4 KiB/s wr, 160 op/s
Nov 29 08:00:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2142217437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2142217437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1797347359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1797347359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380862125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380862125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:30 compute-0 nova_compute[256729]: 2025-11-29 08:00:30.076 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/192568608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/192568608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:30 compute-0 nova_compute[256729]: 2025-11-29 08:00:30.170 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:30 compute-0 ceph-mon[75050]: pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 9.4 KiB/s wr, 160 op/s
Nov 29 08:00:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2380862125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2380862125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/192568608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/192568608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:30 compute-0 nova_compute[256729]: 2025-11-29 08:00:30.876 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 6.2 KiB/s wr, 94 op/s
Nov 29 08:00:31 compute-0 sshd-session[286482]: Connection closed by authenticating user root 143.14.121.41 port 34336 [preauth]
Nov 29 08:00:32 compute-0 nova_compute[256729]: 2025-11-29 08:00:32.325 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:32 compute-0 ceph-mon[75050]: pgmap v1748: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 6.2 KiB/s wr, 94 op/s
Nov 29 08:00:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270472417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270472417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.1 KiB/s wr, 125 op/s
Nov 29 08:00:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1661769347' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1661769347' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3270472417' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3270472417' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1661769347' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1661769347' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664701782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664701782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:34 compute-0 ceph-mon[75050]: pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.1 KiB/s wr, 125 op/s
Nov 29 08:00:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2664701782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2664701782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 5.2 KiB/s wr, 178 op/s
Nov 29 08:00:35 compute-0 sudo[286547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:35 compute-0 sudo[286547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-0 sudo[286547]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:35 compute-0 sudo[286572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:35 compute-0 sudo[286572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-0 sudo[286572]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:35 compute-0 sudo[286597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:35 compute-0 sudo[286597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-0 sudo[286597]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:35 compute-0 sudo[286622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:00:35 compute-0 sudo[286622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:00:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:00:35 compute-0 sshd-session[286545]: Connection closed by authenticating user root 143.14.121.41 port 34338 [preauth]
Nov 29 08:00:35 compute-0 nova_compute[256729]: 2025-11-29 08:00:35.843 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403220.8410277, b7d73f17-a739-4ace-8e3a-00050fcea21c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:00:35 compute-0 nova_compute[256729]: 2025-11-29 08:00:35.844 256736 INFO nova.compute.manager [-] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] VM Stopped (Lifecycle Event)
Nov 29 08:00:35 compute-0 nova_compute[256729]: 2025-11-29 08:00:35.866 256736 DEBUG nova.compute.manager [None req-1a5d643f-a59d-45ed-8fc3-46bbf5bea13f - - - - - -] [instance: b7d73f17-a739-4ace-8e3a-00050fcea21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:00:35 compute-0 nova_compute[256729]: 2025-11-29 08:00:35.904 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:35 compute-0 sudo[286622]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:00:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:00:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:00:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:00:36 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3ec95704-d524-4e1b-8ee7-6f7d0387758d does not exist
Nov 29 08:00:36 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b22c459e-100a-4691-bd02-e6b32380a2b0 does not exist
Nov 29 08:00:36 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 520ab2c4-d840-482a-bb7c-2400ef0c1d21 does not exist
Nov 29 08:00:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:00:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:00:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:00:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:36 compute-0 sudo[286679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:36 compute-0 sudo[286679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:36 compute-0 sudo[286679]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:36 compute-0 sudo[286704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:36 compute-0 sudo[286704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:36 compute-0 sudo[286704]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:36 compute-0 sudo[286729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:36 compute-0 sudo[286729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:36 compute-0 sudo[286729]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:36 compute-0 ceph-mon[75050]: pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 5.2 KiB/s wr, 178 op/s
Nov 29 08:00:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:00:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:00:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:00:36 compute-0 sudo[286754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:00:36 compute-0 sudo[286754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 4.8 KiB/s wr, 160 op/s
Nov 29 08:00:36 compute-0 podman[286819]: 2025-11-29 08:00:36.84957755 +0000 UTC m=+0.024046523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:00:36 compute-0 podman[286819]: 2025-11-29 08:00:36.994711934 +0000 UTC m=+0.169180897 container create 273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 29 08:00:37 compute-0 systemd[1]: Started libpod-conmon-273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313.scope.
Nov 29 08:00:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:00:37 compute-0 podman[286819]: 2025-11-29 08:00:37.252055054 +0000 UTC m=+0.426524037 container init 273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 08:00:37 compute-0 podman[286819]: 2025-11-29 08:00:37.261864966 +0000 UTC m=+0.436333929 container start 273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 08:00:37 compute-0 podman[286819]: 2025-11-29 08:00:37.265270557 +0000 UTC m=+0.439739570 container attach 273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:00:37 compute-0 systemd[1]: libpod-273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313.scope: Deactivated successfully.
Nov 29 08:00:37 compute-0 nostalgic_brown[286836]: 167 167
Nov 29 08:00:37 compute-0 conmon[286836]: conmon 273a990efb91f5874444 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313.scope/container/memory.events
Nov 29 08:00:37 compute-0 podman[286819]: 2025-11-29 08:00:37.272351605 +0000 UTC m=+0.446820568 container died 273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a56f862682827164d9d2ac068e56bc3391c81c0d5e8fbfd0773cb3dbed03197-merged.mount: Deactivated successfully.
Nov 29 08:00:37 compute-0 podman[286819]: 2025-11-29 08:00:37.310539545 +0000 UTC m=+0.485008508 container remove 273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:00:37 compute-0 systemd[1]: libpod-conmon-273a990efb91f5874444b4fcc87df280e32c8804a8ea0612917ce2819d89c313.scope: Deactivated successfully.
Nov 29 08:00:37 compute-0 nova_compute[256729]: 2025-11-29 08:00:37.327 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:37 compute-0 podman[286858]: 2025-11-29 08:00:37.487087208 +0000 UTC m=+0.040622995 container create b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lalande, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:00:37 compute-0 systemd[1]: Started libpod-conmon-b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d.scope.
Nov 29 08:00:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:00:37 compute-0 podman[286858]: 2025-11-29 08:00:37.469750826 +0000 UTC m=+0.023286603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023e6d205f8fc52d888b1150e9c9446986c567f573b058d14ec3a3ecc366e69c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023e6d205f8fc52d888b1150e9c9446986c567f573b058d14ec3a3ecc366e69c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023e6d205f8fc52d888b1150e9c9446986c567f573b058d14ec3a3ecc366e69c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023e6d205f8fc52d888b1150e9c9446986c567f573b058d14ec3a3ecc366e69c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/023e6d205f8fc52d888b1150e9c9446986c567f573b058d14ec3a3ecc366e69c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:37 compute-0 podman[286858]: 2025-11-29 08:00:37.589679667 +0000 UTC m=+0.143215504 container init b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lalande, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:00:37 compute-0 podman[286858]: 2025-11-29 08:00:37.604238156 +0000 UTC m=+0.157773913 container start b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:00:37 compute-0 podman[286858]: 2025-11-29 08:00:37.608623423 +0000 UTC m=+0.162159210 container attach b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:00:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 29 08:00:38 compute-0 ceph-mon[75050]: pgmap v1751: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 4.8 KiB/s wr, 160 op/s
Nov 29 08:00:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 08:00:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 29 08:00:38 compute-0 focused_lalande[286874]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:00:38 compute-0 focused_lalande[286874]: --> relative data size: 1.0
Nov 29 08:00:38 compute-0 focused_lalande[286874]: --> All data devices are unavailable
Nov 29 08:00:38 compute-0 systemd[1]: libpod-b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d.scope: Deactivated successfully.
Nov 29 08:00:38 compute-0 podman[286858]: 2025-11-29 08:00:38.779223762 +0000 UTC m=+1.332759499 container died b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:00:38 compute-0 systemd[1]: libpod-b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d.scope: Consumed 1.112s CPU time.
Nov 29 08:00:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-023e6d205f8fc52d888b1150e9c9446986c567f573b058d14ec3a3ecc366e69c-merged.mount: Deactivated successfully.
Nov 29 08:00:38 compute-0 podman[286858]: 2025-11-29 08:00:38.841028011 +0000 UTC m=+1.394563758 container remove b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:00:38 compute-0 systemd[1]: libpod-conmon-b2e353ba4cabc81f6a561d60efeb3c2d018b8983391eeba1c7de49d0dae0082d.scope: Deactivated successfully.
Nov 29 08:00:38 compute-0 sudo[286754]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.3 KiB/s wr, 140 op/s
Nov 29 08:00:38 compute-0 sudo[286914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:38 compute-0 sudo[286914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:38 compute-0 sudo[286914]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:39 compute-0 sudo[286939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:39 compute-0 sudo[286939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:39 compute-0 sudo[286939]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:39 compute-0 sudo[286964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:39 compute-0 sudo[286964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:39 compute-0 sudo[286964]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:39 compute-0 sudo[286989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:00:39 compute-0 sudo[286989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:39 compute-0 ceph-mon[75050]: osdmap e306: 3 total, 3 up, 3 in
Nov 29 08:00:39 compute-0 podman[287054]: 2025-11-29 08:00:39.69806463 +0000 UTC m=+0.050342145 container create 6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:00:39 compute-0 systemd[1]: Started libpod-conmon-6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354.scope.
Nov 29 08:00:39 compute-0 podman[287054]: 2025-11-29 08:00:39.674031889 +0000 UTC m=+0.026309464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:00:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:00:39 compute-0 podman[287054]: 2025-11-29 08:00:39.785407242 +0000 UTC m=+0.137684787 container init 6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:00:39 compute-0 podman[287054]: 2025-11-29 08:00:39.792405989 +0000 UTC m=+0.144683514 container start 6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_faraday, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:00:39 compute-0 podman[287054]: 2025-11-29 08:00:39.795125121 +0000 UTC m=+0.147402676 container attach 6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 08:00:39 compute-0 systemd[1]: libpod-6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354.scope: Deactivated successfully.
Nov 29 08:00:39 compute-0 wonderful_faraday[287071]: 167 167
Nov 29 08:00:39 compute-0 conmon[287071]: conmon 6e7b48d01f80cbb99ac7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354.scope/container/memory.events
Nov 29 08:00:39 compute-0 podman[287054]: 2025-11-29 08:00:39.799110377 +0000 UTC m=+0.151387892 container died 6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_faraday, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b126235577fbc707836b0061454882c59725475710a7856717affce7ab19f8ea-merged.mount: Deactivated successfully.
Nov 29 08:00:39 compute-0 podman[287054]: 2025-11-29 08:00:39.842329872 +0000 UTC m=+0.194607397 container remove 6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 08:00:39 compute-0 systemd[1]: libpod-conmon-6e7b48d01f80cbb99ac79835b8381753d08c5564f676ab4d5f2700744265e354.scope: Deactivated successfully.
Nov 29 08:00:40 compute-0 podman[287094]: 2025-11-29 08:00:40.025336557 +0000 UTC m=+0.047213972 container create 45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:00:40 compute-0 podman[287094]: 2025-11-29 08:00:40.00524159 +0000 UTC m=+0.027119005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:00:40 compute-0 sshd-session[286678]: Connection closed by authenticating user root 143.14.121.41 port 38380 [preauth]
Nov 29 08:00:40 compute-0 nova_compute[256729]: 2025-11-29 08:00:40.907 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.3 KiB/s wr, 140 op/s
Nov 29 08:00:41 compute-0 systemd[1]: Started libpod-conmon-45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e.scope.
Nov 29 08:00:41 compute-0 ceph-mon[75050]: pgmap v1753: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.3 KiB/s wr, 140 op/s
Nov 29 08:00:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf0ecf3e18d512ad15013077a0c269cca73038d4ccb247e8b26a246d0ba7344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf0ecf3e18d512ad15013077a0c269cca73038d4ccb247e8b26a246d0ba7344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf0ecf3e18d512ad15013077a0c269cca73038d4ccb247e8b26a246d0ba7344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf0ecf3e18d512ad15013077a0c269cca73038d4ccb247e8b26a246d0ba7344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:41 compute-0 podman[287094]: 2025-11-29 08:00:41.196746388 +0000 UTC m=+1.218623873 container init 45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:00:41 compute-0 podman[287094]: 2025-11-29 08:00:41.206497148 +0000 UTC m=+1.228374533 container start 45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:00:41 compute-0 podman[287094]: 2025-11-29 08:00:41.217175474 +0000 UTC m=+1.239052899 container attach 45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 08:00:41 compute-0 infallible_buck[287111]: {
Nov 29 08:00:41 compute-0 infallible_buck[287111]:     "0": [
Nov 29 08:00:41 compute-0 infallible_buck[287111]:         {
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "devices": [
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "/dev/loop3"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             ],
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_name": "ceph_lv0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_size": "21470642176",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "name": "ceph_lv0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "tags": {
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cluster_name": "ceph",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.crush_device_class": "",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.encrypted": "0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osd_id": "0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.type": "block",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.vdo": "0"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             },
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "type": "block",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "vg_name": "ceph_vg0"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:         }
Nov 29 08:00:41 compute-0 infallible_buck[287111]:     ],
Nov 29 08:00:41 compute-0 infallible_buck[287111]:     "1": [
Nov 29 08:00:41 compute-0 infallible_buck[287111]:         {
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "devices": [
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "/dev/loop4"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             ],
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_name": "ceph_lv1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_size": "21470642176",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "name": "ceph_lv1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "tags": {
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cluster_name": "ceph",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.crush_device_class": "",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.encrypted": "0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osd_id": "1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.type": "block",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.vdo": "0"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             },
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "type": "block",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "vg_name": "ceph_vg1"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:         }
Nov 29 08:00:41 compute-0 infallible_buck[287111]:     ],
Nov 29 08:00:41 compute-0 infallible_buck[287111]:     "2": [
Nov 29 08:00:41 compute-0 infallible_buck[287111]:         {
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "devices": [
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "/dev/loop5"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             ],
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_name": "ceph_lv2",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_size": "21470642176",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "name": "ceph_lv2",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "tags": {
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.cluster_name": "ceph",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.crush_device_class": "",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.encrypted": "0",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osd_id": "2",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.type": "block",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:                 "ceph.vdo": "0"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             },
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "type": "block",
Nov 29 08:00:41 compute-0 infallible_buck[287111]:             "vg_name": "ceph_vg2"
Nov 29 08:00:41 compute-0 infallible_buck[287111]:         }
Nov 29 08:00:41 compute-0 infallible_buck[287111]:     ]
Nov 29 08:00:41 compute-0 infallible_buck[287111]: }
Nov 29 08:00:42 compute-0 systemd[1]: libpod-45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e.scope: Deactivated successfully.
Nov 29 08:00:42 compute-0 podman[287094]: 2025-11-29 08:00:42.00969005 +0000 UTC m=+2.031567465 container died 45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 08:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf0ecf3e18d512ad15013077a0c269cca73038d4ccb247e8b26a246d0ba7344-merged.mount: Deactivated successfully.
Nov 29 08:00:42 compute-0 podman[287094]: 2025-11-29 08:00:42.080023027 +0000 UTC m=+2.101900422 container remove 45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:00:42 compute-0 systemd[1]: libpod-conmon-45a3172a893ea2db2cf69187ccb0b2c6aad2a7fe40bafb9dd360f76addd6868e.scope: Deactivated successfully.
Nov 29 08:00:42 compute-0 sudo[286989]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:42 compute-0 sudo[287135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:42 compute-0 sudo[287135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:42 compute-0 sudo[287135]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:42 compute-0 ceph-mon[75050]: pgmap v1754: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 3.3 KiB/s wr, 140 op/s
Nov 29 08:00:42 compute-0 sudo[287160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:00:42 compute-0 sudo[287160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:42 compute-0 sudo[287160]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:42 compute-0 nova_compute[256729]: 2025-11-29 08:00:42.330 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:42 compute-0 sudo[287185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:42 compute-0 sudo[287185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:42 compute-0 sudo[287185]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:42 compute-0 sudo[287210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:00:42 compute-0 sudo[287210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:42 compute-0 podman[287275]: 2025-11-29 08:00:42.873852438 +0000 UTC m=+0.065203821 container create b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cartwright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:00:42 compute-0 systemd[1]: Started libpod-conmon-b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402.scope.
Nov 29 08:00:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:00:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 2.2 KiB/s wr, 95 op/s
Nov 29 08:00:42 compute-0 podman[287275]: 2025-11-29 08:00:42.846462327 +0000 UTC m=+0.037813790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:00:42 compute-0 podman[287275]: 2025-11-29 08:00:42.964661192 +0000 UTC m=+0.156012665 container init b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cartwright, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:00:42 compute-0 podman[287275]: 2025-11-29 08:00:42.976872699 +0000 UTC m=+0.168224112 container start b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:00:42 compute-0 silly_cartwright[287291]: 167 167
Nov 29 08:00:42 compute-0 systemd[1]: libpod-b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402.scope: Deactivated successfully.
Nov 29 08:00:43 compute-0 podman[287275]: 2025-11-29 08:00:43.752605107 +0000 UTC m=+0.943956570 container attach b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:00:43 compute-0 podman[287275]: 2025-11-29 08:00:43.753236793 +0000 UTC m=+0.944588236 container died b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:00:44 compute-0 ceph-mon[75050]: pgmap v1755: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 2.2 KiB/s wr, 95 op/s
Nov 29 08:00:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Nov 29 08:00:45 compute-0 sshd-session[287108]: Connection closed by authenticating user root 143.14.121.41 port 38384 [preauth]
Nov 29 08:00:45 compute-0 nova_compute[256729]: 2025-11-29 08:00:45.910 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e27de5aafae1da7d0332f65dc3b8402913178c640b496b271c28e87f3d7ac2c3-merged.mount: Deactivated successfully.
Nov 29 08:00:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.9 KiB/s wr, 43 op/s
Nov 29 08:00:47 compute-0 nova_compute[256729]: 2025-11-29 08:00:47.332 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Nov 29 08:00:49 compute-0 ceph-mon[75050]: pgmap v1756: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.7 KiB/s wr, 59 op/s
Nov 29 08:00:49 compute-0 ceph-mon[75050]: pgmap v1757: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.9 KiB/s wr, 43 op/s
Nov 29 08:00:49 compute-0 sshd-session[287310]: Connection closed by authenticating user root 143.14.121.41 port 33818 [preauth]
Nov 29 08:00:50 compute-0 podman[287275]: 2025-11-29 08:00:50.156275682 +0000 UTC m=+7.347627105 container remove b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cartwright, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:00:50 compute-0 systemd[1]: libpod-conmon-b2b8584623c1b9906343fc12831f6d0f61d346310b3db8f0a9b8e927ec723402.scope: Deactivated successfully.
Nov 29 08:00:50 compute-0 podman[287320]: 2025-11-29 08:00:50.355034838 +0000 UTC m=+0.032696844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:00:50 compute-0 podman[287320]: 2025-11-29 08:00:50.91740304 +0000 UTC m=+0.595064946 container create f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:00:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 35 op/s
Nov 29 08:00:50 compute-0 nova_compute[256729]: 2025-11-29 08:00:50.958 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:51 compute-0 ceph-mon[75050]: pgmap v1758: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Nov 29 08:00:51 compute-0 systemd[1]: Started libpod-conmon-f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91.scope.
Nov 29 08:00:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/355d5cc38049ddc65e5e82338990e951682ca16b9a688f04cb96dc7e07251e6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/355d5cc38049ddc65e5e82338990e951682ca16b9a688f04cb96dc7e07251e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/355d5cc38049ddc65e5e82338990e951682ca16b9a688f04cb96dc7e07251e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/355d5cc38049ddc65e5e82338990e951682ca16b9a688f04cb96dc7e07251e6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:00:51 compute-0 podman[287320]: 2025-11-29 08:00:51.983607842 +0000 UTC m=+1.661269838 container init f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 08:00:51 compute-0 podman[287320]: 2025-11-29 08:00:51.99590271 +0000 UTC m=+1.673564616 container start f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:00:52 compute-0 nova_compute[256729]: 2025-11-29 08:00:52.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:52 compute-0 nova_compute[256729]: 2025-11-29 08:00:52.333 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:52 compute-0 podman[287320]: 2025-11-29 08:00:52.504394425 +0000 UTC m=+2.182056441 container attach f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:00:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 35 op/s
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]: {
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "osd_id": 2,
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "type": "bluestore"
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:     },
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "osd_id": 1,
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "type": "bluestore"
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:     },
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "osd_id": 0,
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:         "type": "bluestore"
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]:     }
Nov 29 08:00:53 compute-0 dazzling_hellman[287338]: }
Nov 29 08:00:53 compute-0 systemd[1]: libpod-f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91.scope: Deactivated successfully.
Nov 29 08:00:53 compute-0 podman[287320]: 2025-11-29 08:00:53.086117324 +0000 UTC m=+2.763779240 container died f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 08:00:53 compute-0 systemd[1]: libpod-f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91.scope: Consumed 1.097s CPU time.
Nov 29 08:00:54 compute-0 sshd-session[287312]: Connection closed by authenticating user root 143.14.121.41 port 33820 [preauth]
Nov 29 08:00:54 compute-0 ceph-mon[75050]: pgmap v1759: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 35 op/s
Nov 29 08:00:54 compute-0 nova_compute[256729]: 2025-11-29 08:00:54.181 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-355d5cc38049ddc65e5e82338990e951682ca16b9a688f04cb96dc7e07251e6c-merged.mount: Deactivated successfully.
Nov 29 08:00:54 compute-0 podman[287320]: 2025-11-29 08:00:54.583461255 +0000 UTC m=+4.261123161 container remove f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:00:54 compute-0 sudo[287210]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:00:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:00:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:00:54 compute-0 systemd[1]: libpod-conmon-f8b94b76a68a528fcbf728ae0b5ffdcc087cca1b619e08af8a3da76a10888b91.scope: Deactivated successfully.
Nov 29 08:00:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:00:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a1c2079a-e621-4eb5-ae96-2ebd902d3ffc does not exist
Nov 29 08:00:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ce91e015-208e-4f7d-945e-db839d05dbbd does not exist
Nov 29 08:00:54 compute-0 sudo[287386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:00:54 compute-0 sudo[287386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:54 compute-0 sudo[287386]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:54 compute-0 sudo[287411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:00:54 compute-0 sudo[287411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:00:54 compute-0 sudo[287411]: pam_unix(sudo:session): session closed for user root
Nov 29 08:00:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 34 op/s
Nov 29 08:00:55 compute-0 ceph-mon[75050]: pgmap v1760: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 35 op/s
Nov 29 08:00:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:00:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:00:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:00:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554901999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:00:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/554901999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:55 compute-0 nova_compute[256729]: 2025-11-29 08:00:55.960 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:56 compute-0 nova_compute[256729]: 2025-11-29 08:00:56.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:56 compute-0 nova_compute[256729]: 2025-11-29 08:00:56.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 340 B/s wr, 14 op/s
Nov 29 08:00:57 compute-0 nova_compute[256729]: 2025-11-29 08:00:57.335 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:00:57 compute-0 sshd-session[287384]: Connection closed by authenticating user root 143.14.121.41 port 44616 [preauth]
Nov 29 08:00:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:00:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/554901999' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/554901999' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:00:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 425 B/s wr, 19 op/s
Nov 29 08:00:59 compute-0 nova_compute[256729]: 2025-11-29 08:00:59.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:00:59 compute-0 podman[287441]: 2025-11-29 08:00:59.73588946 +0000 UTC m=+0.086972293 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 08:00:59 compute-0 podman[287440]: 2025-11-29 08:00:59.745274471 +0000 UTC m=+0.105452787 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:00:59 compute-0 podman[287439]: 2025-11-29 08:00:59.777372267 +0000 UTC m=+0.137447060 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:00:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:59.780 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:00:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:59.780 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:00:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:00:59.781 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:00:59 compute-0 ceph-mon[75050]: pgmap v1761: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 34 op/s
Nov 29 08:00:59 compute-0 ceph-mon[75050]: pgmap v1762: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 340 B/s wr, 14 op/s
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.282 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.283 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.283 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.283 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.284 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 425 B/s wr, 12 op/s
Nov 29 08:01:00 compute-0 nova_compute[256729]: 2025-11-29 08:01:00.963 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 29 08:01:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469700239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.343 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.548 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.549 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4518MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.550 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.550 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:01 compute-0 CROND[287529]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 08:01:01 compute-0 run-parts[287532]: (/etc/cron.hourly) starting 0anacron
Nov 29 08:01:01 compute-0 run-parts[287538]: (/etc/cron.hourly) finished 0anacron
Nov 29 08:01:01 compute-0 CROND[287528]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 08:01:01 compute-0 sshd-session[287437]: Connection closed by authenticating user root 143.14.121.41 port 44624 [preauth]
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.646 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.646 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:01:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 08:01:01 compute-0 ceph-mon[75050]: pgmap v1763: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 425 B/s wr, 19 op/s
Nov 29 08:01:01 compute-0 nova_compute[256729]: 2025-11-29 08:01:01.666 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 29 08:01:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764539679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:02 compute-0 nova_compute[256729]: 2025-11-29 08:01:02.172 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:02 compute-0 nova_compute[256729]: 2025-11-29 08:01:02.178 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:02 compute-0 nova_compute[256729]: 2025-11-29 08:01:02.230 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:02 compute-0 nova_compute[256729]: 2025-11-29 08:01:02.262 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:01:02 compute-0 nova_compute[256729]: 2025-11-29 08:01:02.262 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:02 compute-0 nova_compute[256729]: 2025-11-29 08:01:02.339 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:02 compute-0 ceph-mon[75050]: pgmap v1764: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 425 B/s wr, 12 op/s
Nov 29 08:01:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/469700239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:02 compute-0 ceph-mon[75050]: osdmap e307: 3 total, 3 up, 3 in
Nov 29 08:01:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2764539679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 614 B/s wr, 15 op/s
Nov 29 08:01:03 compute-0 nova_compute[256729]: 2025-11-29 08:01:03.263 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:03 compute-0 nova_compute[256729]: 2025-11-29 08:01:03.263 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:01:03 compute-0 nova_compute[256729]: 2025-11-29 08:01:03.263 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:01:03 compute-0 nova_compute[256729]: 2025-11-29 08:01:03.292 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:01:03 compute-0 nova_compute[256729]: 2025-11-29 08:01:03.292 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:03 compute-0 nova_compute[256729]: 2025-11-29 08:01:03.292 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:01:03 compute-0 ceph-mon[75050]: pgmap v1766: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 614 B/s wr, 15 op/s
Nov 29 08:01:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:01:05
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'vms']
Nov 29 08:01:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:01:05 compute-0 nova_compute[256729]: 2025-11-29 08:01:05.965 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:06 compute-0 nova_compute[256729]: 2025-11-29 08:01:06.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:06 compute-0 sshd-session[287559]: Connection closed by authenticating user root 143.14.121.41 port 34372 [preauth]
Nov 29 08:01:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Nov 29 08:01:06 compute-0 ceph-mon[75050]: pgmap v1767: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Nov 29 08:01:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 08:01:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Nov 29 08:01:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 KiB/s wr, 36 op/s
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:01:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:01:07 compute-0 nova_compute[256729]: 2025-11-29 08:01:07.341 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385341000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385341000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:08 compute-0 ceph-mon[75050]: osdmap e308: 3 total, 3 up, 3 in
Nov 29 08:01:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2385341000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2385341000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3755886474' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3755886474' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.6 KiB/s wr, 62 op/s
Nov 29 08:01:09 compute-0 ceph-mon[75050]: pgmap v1769: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 KiB/s wr, 36 op/s
Nov 29 08:01:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3755886474' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3755886474' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505457668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505457668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:10 compute-0 ceph-mon[75050]: pgmap v1770: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.6 KiB/s wr, 62 op/s
Nov 29 08:01:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1505457668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1505457668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:10 compute-0 sshd-session[287565]: Connection closed by authenticating user root 143.14.121.41 port 34374 [preauth]
Nov 29 08:01:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.9 KiB/s wr, 52 op/s
Nov 29 08:01:10 compute-0 nova_compute[256729]: 2025-11-29 08:01:10.967 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:12 compute-0 ceph-mon[75050]: pgmap v1771: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.9 KiB/s wr, 52 op/s
Nov 29 08:01:12 compute-0 nova_compute[256729]: 2025-11-29 08:01:12.343 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Nov 29 08:01:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 08:01:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Nov 29 08:01:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 38 KiB/s wr, 73 op/s
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.529 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.529 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.560 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.695 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.695 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.713 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.714 256736 INFO nova.compute.claims [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:01:13 compute-0 nova_compute[256729]: 2025-11-29 08:01:13.833 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:14 compute-0 ceph-mon[75050]: osdmap e309: 3 total, 3 up, 3 in
Nov 29 08:01:14 compute-0 ceph-mon[75050]: pgmap v1773: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 38 KiB/s wr, 73 op/s
Nov 29 08:01:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323701037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.502 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.514 256736 DEBUG nova.compute.provider_tree [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.549 256736 DEBUG nova.scheduler.client.report [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.581 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.583 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.668 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.669 256736 DEBUG nova.network.neutron [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.700 256736 INFO nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.724 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.768 256736 INFO nova.virt.block_device [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Booting with volume f9b1b503-df94-43de-8a56-2afc8e227c45 at /dev/vda
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.905 256736 DEBUG nova.policy [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9664e420085d412aae898a6ec021b24f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb6854e99614af5b8df420841fde0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.938 256736 DEBUG os_brick.utils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.940 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.960 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.960 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[1060fb4a-5ca6-4cf1-bba0-0ac911196054]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.962 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 36 KiB/s wr, 118 op/s
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.975 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.976 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[8a837a2b-07d9-4d0b-9906-423ce352d89c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.979 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.994 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.995 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[8d302443-938e-41a2-aa6a-7e3ab91bea0d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.996 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[f7dc57a5-b7eb-4fed-af82-8a7bc3d7165f]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:14 compute-0 nova_compute[256729]: 2025-11-29 08:01:14.997 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:15 compute-0 nova_compute[256729]: 2025-11-29 08:01:15.033 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:15 compute-0 nova_compute[256729]: 2025-11-29 08:01:15.038 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:01:15 compute-0 nova_compute[256729]: 2025-11-29 08:01:15.039 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:01:15 compute-0 nova_compute[256729]: 2025-11-29 08:01:15.040 256736 DEBUG os_brick.initiator.connectors.lightos [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:01:15 compute-0 nova_compute[256729]: 2025-11-29 08:01:15.040 256736 DEBUG os_brick.utils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] <== get_connector_properties: return (102ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:01:15 compute-0 nova_compute[256729]: 2025-11-29 08:01:15.041 256736 DEBUG nova.virt.block_device [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Updating existing volume attachment record: 4e8bfeab-81c8-4085-8209-3baf97af09e2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:01:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2323701037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000351020707169369 of space, bias 1.0, pg target 0.10530621215081071 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:01:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:01:15 compute-0 nova_compute[256729]: 2025-11-29 08:01:15.969 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:16 compute-0 sshd-session[287567]: Connection closed by authenticating user root 143.14.121.41 port 34388 [preauth]
Nov 29 08:01:16 compute-0 ovn_controller[153383]: 2025-11-29T08:01:16Z|00155|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 08:01:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:01:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4218697793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:16 compute-0 ceph-mon[75050]: pgmap v1774: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 36 KiB/s wr, 118 op/s
Nov 29 08:01:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4218697793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:16 compute-0 nova_compute[256729]: 2025-11-29 08:01:16.437 256736 DEBUG nova.network.neutron [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Successfully created port: 2e35c440-53b6-4135-ad88-d06069087778 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:01:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950677272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950677272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 31 KiB/s wr, 107 op/s
Nov 29 08:01:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:01:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 6698 writes, 30K keys, 6698 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
                                           Cumulative WAL: 6698 writes, 6698 syncs, 1.00 writes per sync, written: 0.04 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1689 writes, 8053 keys, 1689 commit groups, 1.0 writes per commit group, ingest: 10.44 MB, 0.02 MB/s
                                           Interval WAL: 1690 writes, 1690 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      7.3      4.92              0.16        16    0.308       0      0       0.0       0.0
                                             L6      1/0    9.57 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     29.2     23.9      4.93              0.53        15    0.329     74K   8450       0.0       0.0
                                            Sum      1/0    9.57 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.3     14.6     15.6      9.85              0.68        31    0.318     74K   8450       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7     48.1     49.6      0.91              0.21         8    0.114     24K   2635       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     29.2     23.9      4.93              0.53        15    0.329     74K   8450       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      7.3      4.92              0.16        15    0.328       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.035, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.05 MB/s write, 0.14 GB read, 0.05 MB/s read, 9.9 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdb5ecb1f0#2 capacity: 304.00 MB usage: 15.95 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000212 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1060,15.34 MB,5.04456%) FilterBlock(32,217.36 KB,0.069824%) IndexBlock(32,407.45 KB,0.130889%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.316 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.319 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.320 256736 INFO nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Creating image(s)
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.320 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.321 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Ensure instance console log exists: /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.321 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.322 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.322 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.345 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/950677272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/950677272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:17 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:17.399 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.400 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:17 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:17.400 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.687 256736 DEBUG nova.network.neutron [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Successfully updated port: 2e35c440-53b6-4135-ad88-d06069087778 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.703 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "refresh_cache-f24807bf-3456-4b8f-b20e-c823d78b9e63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.703 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquired lock "refresh_cache-f24807bf-3456-4b8f-b20e-c823d78b9e63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.703 256736 DEBUG nova.network.neutron [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.872 256736 DEBUG nova.compute.manager [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received event network-changed-2e35c440-53b6-4135-ad88-d06069087778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.873 256736 DEBUG nova.compute.manager [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Refreshing instance network info cache due to event network-changed-2e35c440-53b6-4135-ad88-d06069087778. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:01:17 compute-0 nova_compute[256729]: 2025-11-29 08:01:17.873 256736 DEBUG oslo_concurrency.lockutils [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-f24807bf-3456-4b8f-b20e-c823d78b9e63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.087 256736 DEBUG nova.network.neutron [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:01:18 compute-0 ceph-mon[75050]: pgmap v1775: 305 pgs: 305 active+clean; 88 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 31 KiB/s wr, 107 op/s
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.780 256736 DEBUG nova.network.neutron [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Updating instance_info_cache with network_info: [{"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
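[annotation] The network_info blob above is plain JSON once the surrounding log text is stripped; the fields Nova later feeds into the guest XML (MAC, fixed IP, MTU, OVS interface id) can be pulled out directly. A small illustrative parser, assuming the blob has been saved to `network_info.json`:

```python
import json

with open('network_info.json') as f:
    vifs = json.load(f)

for vif in vifs:
    fixed_ips = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips'] if ip['type'] == 'fixed']
    print(vif['id'], vif['address'], vif['network']['meta']['mtu'], fixed_ips)
    # -> 2e35c440-... fa:16:3e:6b:1a:67 1442 ['10.100.0.14']
```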
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.803 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Releasing lock "refresh_cache-f24807bf-3456-4b8f-b20e-c823d78b9e63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.804 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Instance network_info: |[{"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.804 256736 DEBUG oslo_concurrency.lockutils [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-f24807bf-3456-4b8f-b20e-c823d78b9e63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.805 256736 DEBUG nova.network.neutron [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Refreshing network info cache for port 2e35c440-53b6-4135-ad88-d06069087778 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.811 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Start _get_guest_xml network_info=[{"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f9b1b503-df94-43de-8a56-2afc8e227c45', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f9b1b503-df94-43de-8a56-2afc8e227c45', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'f24807bf-3456-4b8f-b20e-c823d78b9e63', 'attached_at': '', 'detached_at': '', 'volume_id': 'f9b1b503-df94-43de-8a56-2afc8e227c45', 'serial': 'f9b1b503-df94-43de-8a56-2afc8e227c45'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '4e8bfeab-81c8-4085-8209-3baf97af09e2', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.821 256736 WARNING nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.827 256736 DEBUG nova.virt.libvirt.host [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.828 256736 DEBUG nova.virt.libvirt.host [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.840 256736 DEBUG nova.virt.libvirt.host [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.841 256736 DEBUG nova.virt.libvirt.host [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
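[annotation] The two "Searching host ... for CPU controller" probes above amount to checking whether the cpu controller is exposed by cgroups v1 or v2. A simplified stand-in for the v2 check (the real helper lives in nova/virt/libvirt/host.py; this only captures the core idea):

```python
from pathlib import Path

def has_cgroupsv2_cpu_controller() -> bool:
    # On a cgroups-v2 host the unified hierarchy lists its controllers in
    # a single file; "cpu" appearing there corresponds to the
    # "CPU controller found on host" result logged above.
    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    return controllers.is_file() and 'cpu' in controllers.read_text().split()
```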
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.842 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.842 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.843 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.844 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.844 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.844 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.845 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.845 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.845 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.846 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.846 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.846 256736 DEBUG nova.virt.hardware [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
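[annotation] The topology lines above walk a simple search: with no flavor or image constraints (all "0:0:0"), every factorization of the vCPU count into sockets x cores x threads under the 65536 limits is a candidate, and for 1 vCPU the only one is 1:1:1. A toy version of that enumeration (not Nova's exact algorithm, which also applies preferences and sorting):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Yield every (sockets, cores, threads) whose product equals the vCPU
    # count, mirroring "Build topologies for 1 vcpu(s) 1:1:1" above.
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)]
```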
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.882 256736 DEBUG nova.storage.rbd_utils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image f24807bf-3456-4b8f-b20e-c823d78b9e63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:18 compute-0 nova_compute[256729]: 2025-11-29 08:01:18.886 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 30 KiB/s wr, 120 op/s
Nov 29 08:01:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Nov 29 08:01:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 08:01:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Nov 29 08:01:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:01:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147486863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.387 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
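[annotation] The "Running cmd (subprocess)" / "returned: 0 in 0.500s" pair is oslo.concurrency's processutils wrapper around the ceph CLI. A minimal sketch of the same call (cluster details as in the log; error handling elided):

```python
import json
from oslo_concurrency import processutils

# execute() raises ProcessExecutionError on a non-zero exit status and
# returns (stdout, stderr) on success; the two DEBUG lines bracketing the
# 0.5 s ceph call above are emitted around this boundary.
out, err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mon_map = json.loads(out)
print([m['name'] for m in mon_map['mons']])
```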
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.701 256736 DEBUG os_brick.encryptors [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Using volume encryption metadata '{'encryption_key_id': 'e2a40d37-6c64-48c7-8748-3a7a97752214', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f9b1b503-df94-43de-8a56-2afc8e227c45', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f9b1b503-df94-43de-8a56-2afc8e227c45', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'f24807bf-3456-4b8f-b20e-c823d78b9e63', 'attached_at': '', 'detached_at': '', 'volume_id': 'f9b1b503-df94-43de-8a56-2afc8e227c45', 'serial': 'f9b1b503-df94-43de-8a56-2afc8e227c45'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.703 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.721 256736 DEBUG barbicanclient.v1.secrets [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/e2a40d37-6c64-48c7-8748-3a7a97752214 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.721 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.756 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.757 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.783 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.784 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.815 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.816 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.852 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.853 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.882 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.883 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.906 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.906 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.930 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.931 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.958 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.959 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.990 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:19 compute-0 nova_compute[256729]: 2025-11-29 08:01:19.991 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.021 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.022 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:20 compute-0 sshd-session[287598]: Connection closed by authenticating user root 143.14.121.41 port 48820 [preauth]
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.049 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.050 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.071 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.072 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.092 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.093 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.124 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.125 256736 INFO barbicanclient.base [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Calculated Secrets uuid ref: secrets/e2a40d37-6c64-48c7-8748-3a7a97752214
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.153 256736 DEBUG barbicanclient.client [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
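[annotation] The repeated "Response status 200" / "Calculated Secrets uuid ref" pairs above are barbicanclient fetching the LUKS secret referenced by encryption_key_id e2a40d37-6c64-48c7-8748-3a7a97752214. A bare-bones sketch of the same retrieval; the auth_url and credentials here are hypothetical placeholders, only the Barbican endpoint and secret ref come from the log:

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session as ks_session
from barbicanclient import client as barbican_client

# Hypothetical Keystone credentials for illustration only.
auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',
                   username='nova', password='secret',
                   project_name='service',
                   user_domain_name='Default', project_domain_name='Default')
sess = ks_session.Session(auth=auth)

barbican = barbican_client.Client(session=sess)
secret = barbican.secrets.get(
    'https://barbican-internal.openstack.svc:9311/secrets/'
    'e2a40d37-6c64-48c7-8748-3a7a97752214')
passphrase = secret.payload  # lazily GETs the payload, hence the 200s above
```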
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.154 256736 DEBUG nova.virt.libvirt.host [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <volume>f9b1b503-df94-43de-8a56-2afc8e227c45</volume>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:01:20 compute-0 nova_compute[256729]: </secret>
Nov 29 08:01:20 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.191 256736 DEBUG nova.virt.libvirt.vif [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2107928797',display_name='tempest-TestVolumeBootPattern-server-2107928797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2107928797',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-vjjt7v2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:14Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=f24807bf-3456-4b8f-b20e-c823d78b9e63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.193 256736 DEBUG nova.network.os_vif_util [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.195 256736 DEBUG nova.network.os_vif_util [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1a:67,bridge_name='br-int',has_traffic_filtering=True,id=2e35c440-53b6-4135-ad88-d06069087778,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e35c440-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.198 256736 DEBUG nova.objects.instance [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'pci_devices' on Instance uuid f24807bf-3456-4b8f-b20e-c823d78b9e63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.217 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <uuid>f24807bf-3456-4b8f-b20e-c823d78b9e63</uuid>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <name>instance-0000000f</name>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBootPattern-server-2107928797</nova:name>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:01:18</nova:creationTime>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:user uuid="9664e420085d412aae898a6ec021b24f">tempest-TestVolumeBootPattern-776329285-project-member</nova:user>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:project uuid="dfb6854e99614af5b8df420841fde0db">tempest-TestVolumeBootPattern-776329285</nova:project>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <nova:port uuid="2e35c440-53b6-4135-ad88-d06069087778">
Nov 29 08:01:20 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <system>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <entry name="serial">f24807bf-3456-4b8f-b20e-c823d78b9e63</entry>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <entry name="uuid">f24807bf-3456-4b8f-b20e-c823d78b9e63</entry>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </system>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <os>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </os>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <features>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </features>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/f24807bf-3456-4b8f-b20e-c823d78b9e63_disk.config">
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </source>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-f9b1b503-df94-43de-8a56-2afc8e227c45">
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </source>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <serial>f9b1b503-df94-43de-8a56-2afc8e227c45</serial>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <encryption format="luks">
Nov 29 08:01:20 compute-0 nova_compute[256729]:         <secret type="passphrase" uuid="254c1cf8-d6c0-49b9-b512-364a127915ec"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       </encryption>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:6b:1a:67"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <target dev="tap2e35c440-53"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/console.log" append="off"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <video>
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </video>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:01:20 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:01:20 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:01:20 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:01:20 compute-0 nova_compute[256729]: </domain>
Nov 29 08:01:20 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.218 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Preparing to wait for external event network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.218 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.219 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.219 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.220 256736 DEBUG nova.virt.libvirt.vif [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2107928797',display_name='tempest-TestVolumeBootPattern-server-2107928797',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2107928797',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-vjjt7v2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:14Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=f24807bf-3456-4b8f-b20e-c823d78b9e63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.220 256736 DEBUG nova.network.os_vif_util [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.221 256736 DEBUG nova.network.os_vif_util [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1a:67,bridge_name='br-int',has_traffic_filtering=True,id=2e35c440-53b6-4135-ad88-d06069087778,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e35c440-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.221 256736 DEBUG os_vif [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1a:67,bridge_name='br-int',has_traffic_filtering=True,id=2e35c440-53b6-4135-ad88-d06069087778,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e35c440-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.222 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.223 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.224 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.228 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.229 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e35c440-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.229 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e35c440-53, col_values=(('external_ids', {'iface-id': '2e35c440-53b6-4135-ad88-d06069087778', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:1a:67', 'vm-uuid': 'f24807bf-3456-4b8f-b20e-c823d78b9e63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.284 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:20 compute-0 NetworkManager[48962]: <info>  [1764403280.2857] manager: (tap2e35c440-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.287 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.292 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.293 256736 INFO os_vif [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1a:67,bridge_name='br-int',has_traffic_filtering=True,id=2e35c440-53b6-4135-ad88-d06069087778,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e35c440-53')
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.351 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.353 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.353 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No VIF found with MAC fa:16:3e:6b:1a:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.354 256736 INFO nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Using config drive
Nov 29 08:01:20 compute-0 ceph-mon[75050]: pgmap v1776: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 30 KiB/s wr, 120 op/s
Nov 29 08:01:20 compute-0 ceph-mon[75050]: osdmap e310: 3 total, 3 up, 3 in
Nov 29 08:01:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2147486863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.374 256736 DEBUG nova.storage.rbd_utils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image f24807bf-3456-4b8f-b20e-c823d78b9e63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.638 256736 DEBUG nova.network.neutron [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Updated VIF entry in instance network info cache for port 2e35c440-53b6-4135-ad88-d06069087778. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.638 256736 DEBUG nova.network.neutron [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Updating instance_info_cache with network_info: [{"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.667 256736 DEBUG oslo_concurrency.lockutils [req-fcf8b5b9-4495-493d-a1a6-72635f8578aa req-ee645987-7cfc-4d96-b2de-e71ac29d8886 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-f24807bf-3456-4b8f-b20e-c823d78b9e63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.829 256736 INFO nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Creating config drive at /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/disk.config
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.834 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp92ou6ml8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 3.7 KiB/s wr, 126 op/s
Nov 29 08:01:20 compute-0 nova_compute[256729]: 2025-11-29 08:01:20.973 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp92ou6ml8" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.005 256736 DEBUG nova.storage.rbd_utils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image f24807bf-3456-4b8f-b20e-c823d78b9e63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.010 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/disk.config f24807bf-3456-4b8f-b20e-c823d78b9e63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.480 256736 DEBUG oslo_concurrency.processutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/disk.config f24807bf-3456-4b8f-b20e-c823d78b9e63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.482 256736 INFO nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Deleting local config drive /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63/disk.config because it was imported into RBD.
Nov 29 08:01:21 compute-0 kernel: tap2e35c440-53: entered promiscuous mode
Nov 29 08:01:21 compute-0 ovn_controller[153383]: 2025-11-29T08:01:21Z|00156|binding|INFO|Claiming lport 2e35c440-53b6-4135-ad88-d06069087778 for this chassis.
Nov 29 08:01:21 compute-0 ovn_controller[153383]: 2025-11-29T08:01:21Z|00157|binding|INFO|2e35c440-53b6-4135-ad88-d06069087778: Claiming fa:16:3e:6b:1a:67 10.100.0.14
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.561 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:21 compute-0 NetworkManager[48962]: <info>  [1764403281.5622] manager: (tap2e35c440-53): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.571 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.589 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:1a:67 10.100.0.14'], port_security=['fa:16:3e:6b:1a:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f24807bf-3456-4b8f-b20e-c823d78b9e63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9afb6ce-053d-473f-aaad-13f25a9ecb58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=2e35c440-53b6-4135-ad88-d06069087778) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.591 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 2e35c440-53b6-4135-ad88-d06069087778 in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 bound to our chassis
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.592 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:01:21 compute-0 systemd-udevd[287715]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:01:21 compute-0 systemd-machined[217781]: New machine qemu-15-instance-0000000f.
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.612 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ceeff151-acde-487e-bd65-c2106fd6c234]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.613 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d9c390c-31 in ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:01:21 compute-0 NetworkManager[48962]: <info>  [1764403281.6175] device (tap2e35c440-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:01:21 compute-0 NetworkManager[48962]: <info>  [1764403281.6182] device (tap2e35c440-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.617 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d9c390c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.617 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[420aa5f2-3a2f-4ac7-82e4-5b25fb997ce3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.620 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f883f2-773b-46a3-a99a-18828c847378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.646 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[ca06cce5-d312-4848-b144-79beaf411b66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.665 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:21 compute-0 ovn_controller[153383]: 2025-11-29T08:01:21Z|00158|binding|INFO|Setting lport 2e35c440-53b6-4135-ad88-d06069087778 ovn-installed in OVS
Nov 29 08:01:21 compute-0 ovn_controller[153383]: 2025-11-29T08:01:21Z|00159|binding|INFO|Setting lport 2e35c440-53b6-4135-ad88-d06069087778 up in Southbound
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.672 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.672 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc43a76-5732-474b-a9ed-76c9c67c32f2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.713 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[c791cd1c-eeef-4cab-8dd2-1cf8aa1c0746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 NetworkManager[48962]: <info>  [1764403281.7212] manager: (tap2d9c390c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.720 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[55f712fd-1174-46c3-bfc1-4276f135dc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.754 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[3efe09df-0ab4-42b9-b0c8-ac9bd4408c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.756 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[0e69b28f-81d5-43c3-9297-d542c4bc0be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 NetworkManager[48962]: <info>  [1764403281.7774] device (tap2d9c390c-30): carrier: link connected
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.787 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[16d52bae-a309-4499-803b-569f1e4b8c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.814 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d0bbc22d-ae0f-4609-a750-d0e4c8c319ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559684, 'reachable_time': 34583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287748, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.837 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b139f9a6-92fb-42c0-bcb7-b3ce1cbf9d97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:2407'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559684, 'tstamp': 559684}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287749, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.863 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[690ac217-1d9c-42d0-a101-a09f8a4b13a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559684, 'reachable_time': 34583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287750, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.910 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1c092c-3e82-4b3b-80f8-d1e9789236ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.992 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9cdbe3-d89a-4b52-9319-4a6a8ed4e6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.993 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.994 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:01:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:21.994 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:21 compute-0 nova_compute[256729]: 2025-11-29 08:01:21.996 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:21 compute-0 NetworkManager[48962]: <info>  [1764403281.9972] manager: (tap2d9c390c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 08:01:21 compute-0 kernel: tap2d9c390c-30: entered promiscuous mode
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.000 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:22.001 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.003 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:22 compute-0 ovn_controller[153383]: 2025-11-29T08:01:22Z|00160|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.022 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:22.023 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:22.024 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2f238ae1-c1e3-4fbe-8d39-613a9ae70f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:22.026 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:01:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:22.027 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'env', 'PROCESS_TAG=haproxy-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d9c390c-362a-41a5-93b0-23344eb99ae5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.076 256736 DEBUG nova.compute.manager [req-1dca2767-40a9-437e-b31c-d8f2a2ff9998 req-4d8f7e50-057d-4b6e-b918-cac07d47eba9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received event network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.077 256736 DEBUG oslo_concurrency.lockutils [req-1dca2767-40a9-437e-b31c-d8f2a2ff9998 req-4d8f7e50-057d-4b6e-b918-cac07d47eba9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.078 256736 DEBUG oslo_concurrency.lockutils [req-1dca2767-40a9-437e-b31c-d8f2a2ff9998 req-4d8f7e50-057d-4b6e-b918-cac07d47eba9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.078 256736 DEBUG oslo_concurrency.lockutils [req-1dca2767-40a9-437e-b31c-d8f2a2ff9998 req-4d8f7e50-057d-4b6e-b918-cac07d47eba9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.079 256736 DEBUG nova.compute.manager [req-1dca2767-40a9-437e-b31c-d8f2a2ff9998 req-4d8f7e50-057d-4b6e-b918-cac07d47eba9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Processing event network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:01:22 compute-0 nova_compute[256729]: 2025-11-29 08:01:22.346 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:22 compute-0 podman[287818]: 2025-11-29 08:01:22.384263367 +0000 UTC m=+0.053079308 container create cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 08:01:22 compute-0 ceph-mon[75050]: pgmap v1778: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 3.7 KiB/s wr, 126 op/s
Nov 29 08:01:22 compute-0 systemd[1]: Started libpod-conmon-cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4.scope.
Nov 29 08:01:22 compute-0 podman[287818]: 2025-11-29 08:01:22.356943737 +0000 UTC m=+0.025759708 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:01:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98e5df34cdcd0fa2d56fdd190ebf2e7fce50e14d7f719e073ee58a839c98f2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:22 compute-0 podman[287818]: 2025-11-29 08:01:22.486358762 +0000 UTC m=+0.155174793 container init cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 08:01:22 compute-0 podman[287818]: 2025-11-29 08:01:22.492479786 +0000 UTC m=+0.161295767 container start cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:01:22 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [NOTICE]   (287837) : New worker (287839) forked
Nov 29 08:01:22 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [NOTICE]   (287837) : Loading success.
Nov 29 08:01:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 4.4 KiB/s wr, 113 op/s
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.058 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403284.0580857, f24807bf-3456-4b8f-b20e-c823d78b9e63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.059 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] VM Started (Lifecycle Event)
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.063 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.069 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.074 256736 INFO nova.virt.libvirt.driver [-] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Instance spawned successfully.
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.074 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.091 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.099 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.106 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.107 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.108 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.109 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.110 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.110 256736 DEBUG nova.virt.libvirt.driver [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.120 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.121 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403284.0629773, f24807bf-3456-4b8f-b20e-c823d78b9e63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.121 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] VM Paused (Lifecycle Event)
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.152 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.158 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403284.0683212, f24807bf-3456-4b8f-b20e-c823d78b9e63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.158 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] VM Resumed (Lifecycle Event)
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.187 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.192 256736 DEBUG nova.compute.manager [req-82da46ec-18f2-413b-bca1-d18d55d259cc req-f38dbec4-87be-4221-984d-18412badd861 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received event network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.193 256736 DEBUG oslo_concurrency.lockutils [req-82da46ec-18f2-413b-bca1-d18d55d259cc req-f38dbec4-87be-4221-984d-18412badd861 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.194 256736 DEBUG oslo_concurrency.lockutils [req-82da46ec-18f2-413b-bca1-d18d55d259cc req-f38dbec4-87be-4221-984d-18412badd861 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.194 256736 DEBUG oslo_concurrency.lockutils [req-82da46ec-18f2-413b-bca1-d18d55d259cc req-f38dbec4-87be-4221-984d-18412badd861 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.195 256736 DEBUG nova.compute.manager [req-82da46ec-18f2-413b-bca1-d18d55d259cc req-f38dbec4-87be-4221-984d-18412badd861 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] No waiting events found dispatching network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.195 256736 WARNING nova.compute.manager [req-82da46ec-18f2-413b-bca1-d18d55d259cc req-f38dbec4-87be-4221-984d-18412badd861 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received unexpected event network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 for instance with vm_state building and task_state spawning.
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.200 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.207 256736 INFO nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Took 6.89 seconds to spawn the instance on the hypervisor.
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.207 256736 DEBUG nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.223 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.303 256736 INFO nova.compute.manager [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Took 10.66 seconds to build instance.
Nov 29 08:01:24 compute-0 nova_compute[256729]: 2025-11-29 08:01:24.352 256736 DEBUG oslo_concurrency.lockutils [None req-d8c31c97-db3f-40f5-9ff5-f32570bf1c42 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:24.403 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:24 compute-0 ceph-mon[75050]: pgmap v1779: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 4.4 KiB/s wr, 113 op/s
Nov 29 08:01:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769970031' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1769970031' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:24 compute-0 sshd-session[287642]: Connection closed by authenticating user root 143.14.121.41 port 48836 [preauth]
Nov 29 08:01:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 19 KiB/s wr, 91 op/s
Nov 29 08:01:25 compute-0 nova_compute[256729]: 2025-11-29 08:01:25.286 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1769970031' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1769970031' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:26 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 88 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 19 KiB/s wr, 93 op/s
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.029 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.029 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.030 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.030 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.030 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.031 256736 INFO nova.compute.manager [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Terminating instance
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.032 256736 DEBUG nova.compute.manager [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:01:27 compute-0 kernel: tap2e35c440-53 (unregistering): left promiscuous mode
Nov 29 08:01:27 compute-0 NetworkManager[48962]: <info>  [1764403287.1204] device (tap2e35c440-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:01:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Nov 29 08:01:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.131 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 ovn_controller[153383]: 2025-11-29T08:01:27Z|00161|binding|INFO|Releasing lport 2e35c440-53b6-4135-ad88-d06069087778 from this chassis (sb_readonly=0)
Nov 29 08:01:27 compute-0 ovn_controller[153383]: 2025-11-29T08:01:27Z|00162|binding|INFO|Setting lport 2e35c440-53b6-4135-ad88-d06069087778 down in Southbound
Nov 29 08:01:27 compute-0 ovn_controller[153383]: 2025-11-29T08:01:27Z|00163|binding|INFO|Removing iface tap2e35c440-53 ovn-installed in OVS
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.134 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.139 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:1a:67 10.100.0.14'], port_security=['fa:16:3e:6b:1a:67 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f24807bf-3456-4b8f-b20e-c823d78b9e63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9afb6ce-053d-473f-aaad-13f25a9ecb58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=2e35c440-53b6-4135-ad88-d06069087778) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.140 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 2e35c440-53b6-4135-ad88-d06069087778 in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.142 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d9c390c-362a-41a5-93b0-23344eb99ae5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:01:27 compute-0 ceph-mon[75050]: pgmap v1780: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 19 KiB/s wr, 91 op/s
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.145 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7de7e086-37c2-4324-bbbb-86ce0bc17d11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.145 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace which is not needed anymore
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.171 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 29 08:01:27 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 2.910s CPU time.
Nov 29 08:01:27 compute-0 systemd-machined[217781]: Machine qemu-15-instance-0000000f terminated.
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.268 256736 INFO nova.virt.libvirt.driver [-] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Instance destroyed successfully.
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.268 256736 DEBUG nova.objects.instance [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'resources' on Instance uuid f24807bf-3456-4b8f-b20e-c823d78b9e63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.284 256736 DEBUG nova.virt.libvirt.vif [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2107928797',display_name='tempest-TestVolumeBootPattern-server-2107928797',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2107928797',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-vjjt7v2x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:24Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=f24807bf-3456-4b8f-b20e-c823d78b9e63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.284 256736 DEBUG nova.network.os_vif_util [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "2e35c440-53b6-4135-ad88-d06069087778", "address": "fa:16:3e:6b:1a:67", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e35c440-53", "ovs_interfaceid": "2e35c440-53b6-4135-ad88-d06069087778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.285 256736 DEBUG nova.network.os_vif_util [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1a:67,bridge_name='br-int',has_traffic_filtering=True,id=2e35c440-53b6-4135-ad88-d06069087778,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e35c440-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.285 256736 DEBUG os_vif [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1a:67,bridge_name='br-int',has_traffic_filtering=True,id=2e35c440-53b6-4135-ad88-d06069087778,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e35c440-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.288 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.288 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e35c440-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.290 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.292 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:01:27 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [NOTICE]   (287837) : haproxy version is 2.8.14-c23fe91
Nov 29 08:01:27 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [NOTICE]   (287837) : path to executable is /usr/sbin/haproxy
Nov 29 08:01:27 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [WARNING]  (287837) : Exiting Master process...
Nov 29 08:01:27 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [WARNING]  (287837) : Exiting Master process...
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.294 256736 INFO os_vif [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1a:67,bridge_name='br-int',has_traffic_filtering=True,id=2e35c440-53b6-4135-ad88-d06069087778,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e35c440-53')
Nov 29 08:01:27 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [ALERT]    (287837) : Current worker (287839) exited with code 143 (Terminated)
Nov 29 08:01:27 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[287833]: [WARNING]  (287837) : All workers exited. Exiting... (0)
Nov 29 08:01:27 compute-0 systemd[1]: libpod-cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4.scope: Deactivated successfully.
Nov 29 08:01:27 compute-0 podman[287880]: 2025-11-29 08:01:27.307730308 +0000 UTC m=+0.062217502 container died cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4-userdata-shm.mount: Deactivated successfully.
Nov 29 08:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d98e5df34cdcd0fa2d56fdd190ebf2e7fce50e14d7f719e073ee58a839c98f2a-merged.mount: Deactivated successfully.
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.347 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 podman[287880]: 2025-11-29 08:01:27.353937972 +0000 UTC m=+0.108425176 container cleanup cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:01:27 compute-0 systemd[1]: libpod-conmon-cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4.scope: Deactivated successfully.
Nov 29 08:01:27 compute-0 podman[287935]: 2025-11-29 08:01:27.422584954 +0000 UTC m=+0.041958581 container remove cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.432 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f040855f-d024-419c-9036-7ab47e6f4856]: (4, ('Sat Nov 29 08:01:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4)\ncafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4\nSat Nov 29 08:01:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (cafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4)\ncafe3bf00510796f16f163f7dba27131dd9bce8f532fd81c841c2ee3f9c0dba4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.434 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[be0e6259-d95c-4517-933e-e5b5880a91be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.435 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:27 compute-0 kernel: tap2d9c390c-30: left promiscuous mode
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.437 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.451 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.453 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0610603d-2b43-477c-ac5e-f19cc9f7613e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.472 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[95f87fb3-28a6-4abf-8e7f-c819a236ebe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.473 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[df251115-5563-481f-bbc7-710778152e87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.479 256736 INFO nova.virt.libvirt.driver [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Deleting instance files /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63_del
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.480 256736 INFO nova.virt.libvirt.driver [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Deletion of /var/lib/nova/instances/f24807bf-3456-4b8f-b20e-c823d78b9e63_del complete
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.487 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[592a14f7-8823-431a-930a-ba2326cd59cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559677, 'reachable_time': 25520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287951, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d9c390c\x2d362a\x2d41a5\x2d93b0\x2d23344eb99ae5.mount: Deactivated successfully.
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.491 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:01:27 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:27.491 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[cd53a048-1da1-401b-96f2-5a9e9ecd3d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.530 256736 DEBUG nova.compute.manager [req-82946041-cbba-443d-bb69-56774ac39562 req-b8eb234d-41e8-49c6-97ef-fa7fd7f37c5d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received event network-vif-unplugged-2e35c440-53b6-4135-ad88-d06069087778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.531 256736 DEBUG oslo_concurrency.lockutils [req-82946041-cbba-443d-bb69-56774ac39562 req-b8eb234d-41e8-49c6-97ef-fa7fd7f37c5d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.531 256736 DEBUG oslo_concurrency.lockutils [req-82946041-cbba-443d-bb69-56774ac39562 req-b8eb234d-41e8-49c6-97ef-fa7fd7f37c5d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.531 256736 DEBUG oslo_concurrency.lockutils [req-82946041-cbba-443d-bb69-56774ac39562 req-b8eb234d-41e8-49c6-97ef-fa7fd7f37c5d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.532 256736 DEBUG nova.compute.manager [req-82946041-cbba-443d-bb69-56774ac39562 req-b8eb234d-41e8-49c6-97ef-fa7fd7f37c5d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] No waiting events found dispatching network-vif-unplugged-2e35c440-53b6-4135-ad88-d06069087778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.533 256736 DEBUG nova.compute.manager [req-82946041-cbba-443d-bb69-56774ac39562 req-b8eb234d-41e8-49c6-97ef-fa7fd7f37c5d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received event network-vif-unplugged-2e35c440-53b6-4135-ad88-d06069087778 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.538 256736 INFO nova.compute.manager [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Took 0.51 seconds to destroy the instance on the hypervisor.
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.538 256736 DEBUG oslo.service.loopingcall [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.539 256736 DEBUG nova.compute.manager [-] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:01:27 compute-0 nova_compute[256729]: 2025-11-29 08:01:27.539 256736 DEBUG nova.network.neutron [-] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:01:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:28 compute-0 nova_compute[256729]: 2025-11-29 08:01:28.135 256736 DEBUG nova.network.neutron [-] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:28 compute-0 ceph-mon[75050]: pgmap v1781: 305 pgs: 305 active+clean; 88 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 19 KiB/s wr, 93 op/s
Nov 29 08:01:28 compute-0 ceph-mon[75050]: osdmap e311: 3 total, 3 up, 3 in
Nov 29 08:01:28 compute-0 nova_compute[256729]: 2025-11-29 08:01:28.167 256736 INFO nova.compute.manager [-] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Took 0.63 seconds to deallocate network for instance.
Nov 29 08:01:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1027134720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1027134720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:28 compute-0 nova_compute[256729]: 2025-11-29 08:01:28.370 256736 INFO nova.compute.manager [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 08:01:28 compute-0 nova_compute[256729]: 2025-11-29 08:01:28.439 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:28 compute-0 nova_compute[256729]: 2025-11-29 08:01:28.440 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:28 compute-0 nova_compute[256729]: 2025-11-29 08:01:28.516 256736 DEBUG oslo_concurrency.processutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:28 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 21 KiB/s wr, 98 op/s
Nov 29 08:01:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760644191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:28 compute-0 nova_compute[256729]: 2025-11-29 08:01:28.999 256736 DEBUG oslo_concurrency.processutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.006 256736 DEBUG nova.compute.provider_tree [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.027 256736 DEBUG nova.scheduler.client.report [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.050 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.079 256736 INFO nova.scheduler.client.report [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Deleted allocations for instance f24807bf-3456-4b8f-b20e-c823d78b9e63
Nov 29 08:01:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1027134720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1027134720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1760644191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.187 256736 DEBUG oslo_concurrency.lockutils [None req-31653517-e414-457e-b8cb-b0714de9ce4b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.627 256736 DEBUG nova.compute.manager [req-949a9db1-c12b-4483-bbc8-9b49ebacf4b6 req-dd20c60c-a4ef-46cc-a32a-859b7028a1cb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received event network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.627 256736 DEBUG oslo_concurrency.lockutils [req-949a9db1-c12b-4483-bbc8-9b49ebacf4b6 req-dd20c60c-a4ef-46cc-a32a-859b7028a1cb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.628 256736 DEBUG oslo_concurrency.lockutils [req-949a9db1-c12b-4483-bbc8-9b49ebacf4b6 req-dd20c60c-a4ef-46cc-a32a-859b7028a1cb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.628 256736 DEBUG oslo_concurrency.lockutils [req-949a9db1-c12b-4483-bbc8-9b49ebacf4b6 req-dd20c60c-a4ef-46cc-a32a-859b7028a1cb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f24807bf-3456-4b8f-b20e-c823d78b9e63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.628 256736 DEBUG nova.compute.manager [req-949a9db1-c12b-4483-bbc8-9b49ebacf4b6 req-dd20c60c-a4ef-46cc-a32a-859b7028a1cb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] No waiting events found dispatching network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.628 256736 WARNING nova.compute.manager [req-949a9db1-c12b-4483-bbc8-9b49ebacf4b6 req-dd20c60c-a4ef-46cc-a32a-859b7028a1cb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received unexpected event network-vif-plugged-2e35c440-53b6-4135-ad88-d06069087778 for instance with vm_state deleted and task_state None.
Nov 29 08:01:29 compute-0 nova_compute[256729]: 2025-11-29 08:01:29.628 256736 DEBUG nova.compute.manager [req-949a9db1-c12b-4483-bbc8-9b49ebacf4b6 req-dd20c60c-a4ef-46cc-a32a-859b7028a1cb ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Received event network-vif-deleted-2e35c440-53b6-4135-ad88-d06069087778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:30 compute-0 ceph-mon[75050]: pgmap v1783: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 21 KiB/s wr, 98 op/s
Nov 29 08:01:30 compute-0 podman[287977]: 2025-11-29 08:01:30.72231384 +0000 UTC m=+0.075150567 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:01:30 compute-0 podman[287976]: 2025-11-29 08:01:30.737119286 +0000 UTC m=+0.089516610 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:01:30 compute-0 podman[287975]: 2025-11-29 08:01:30.768812982 +0000 UTC m=+0.125128532 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:01:30 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 20 KiB/s wr, 94 op/s
Nov 29 08:01:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120009392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120009392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4120009392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4120009392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:31 compute-0 sshd-session[287871]: Connection closed by authenticating user root 143.14.121.41 port 56640 [preauth]
Nov 29 08:01:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525397828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2525397828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:32 compute-0 nova_compute[256729]: 2025-11-29 08:01:32.327 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:32 compute-0 nova_compute[256729]: 2025-11-29 08:01:32.349 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:32 compute-0 ceph-mon[75050]: pgmap v1784: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 20 KiB/s wr, 94 op/s
Nov 29 08:01:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2525397828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2525397828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:32 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 21 KiB/s wr, 150 op/s
Nov 29 08:01:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Nov 29 08:01:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 08:01:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Nov 29 08:01:34 compute-0 ceph-mon[75050]: pgmap v1785: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 21 KiB/s wr, 150 op/s
Nov 29 08:01:34 compute-0 ceph-mon[75050]: osdmap e312: 3 total, 3 up, 3 in
Nov 29 08:01:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:34 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/902291735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:34 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/902291735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:34 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 6.4 KiB/s wr, 187 op/s
Nov 29 08:01:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:01:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:01:36 compute-0 sshd-session[288040]: Connection closed by authenticating user root 143.14.121.41 port 56652 [preauth]
Nov 29 08:01:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/902291735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/902291735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:36 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 964 KiB/s rd, 6.0 KiB/s wr, 173 op/s
Nov 29 08:01:37 compute-0 nova_compute[256729]: 2025-11-29 08:01:37.331 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:37 compute-0 nova_compute[256729]: 2025-11-29 08:01:37.351 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:37 compute-0 ceph-mon[75050]: pgmap v1787: 305 pgs: 305 active+clean; 88 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 6.4 KiB/s wr, 187 op/s
Nov 29 08:01:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220442081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220442081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:38 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.1 KiB/s wr, 170 op/s
Nov 29 08:01:39 compute-0 ceph-mon[75050]: pgmap v1788: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 964 KiB/s rd, 6.0 KiB/s wr, 173 op/s
Nov 29 08:01:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3220442081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3220442081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:40 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.1 KiB/s wr, 170 op/s
Nov 29 08:01:41 compute-0 ceph-mon[75050]: pgmap v1789: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.1 KiB/s wr, 170 op/s
Nov 29 08:01:42 compute-0 nova_compute[256729]: 2025-11-29 08:01:42.266 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403287.2650409, f24807bf-3456-4b8f-b20e-c823d78b9e63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:01:42 compute-0 nova_compute[256729]: 2025-11-29 08:01:42.267 256736 INFO nova.compute.manager [-] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] VM Stopped (Lifecycle Event)
Nov 29 08:01:42 compute-0 nova_compute[256729]: 2025-11-29 08:01:42.302 256736 DEBUG nova.compute.manager [None req-e31df6a2-4bb7-49ef-a9e9-529a085f7189 - - - - - -] [instance: f24807bf-3456-4b8f-b20e-c823d78b9e63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:01:42 compute-0 nova_compute[256729]: 2025-11-29 08:01:42.334 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:42 compute-0 nova_compute[256729]: 2025-11-29 08:01:42.352 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Nov 29 08:01:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 08:01:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Nov 29 08:01:42 compute-0 sshd-session[288042]: Connection closed by authenticating user root 143.14.121.41 port 38358 [preauth]
Nov 29 08:01:42 compute-0 ceph-mon[75050]: pgmap v1790: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.1 KiB/s wr, 170 op/s
Nov 29 08:01:42 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 KiB/s wr, 65 op/s
Nov 29 08:01:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Nov 29 08:01:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 08:01:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.053282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403303053343, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2458, "num_deletes": 526, "total_data_size": 3250827, "memory_usage": 3316336, "flush_reason": "Manual Compaction"}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403303073931, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 2198094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29119, "largest_seqno": 31576, "table_properties": {"data_size": 2188788, "index_size": 5096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25931, "raw_average_key_size": 20, "raw_value_size": 2166858, "raw_average_value_size": 1754, "num_data_blocks": 225, "num_entries": 1235, "num_filter_entries": 1235, "num_deletions": 526, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403128, "oldest_key_time": 1764403128, "file_creation_time": 1764403303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 20766 microseconds, and 7901 cpu microseconds.
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.074048) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 2198094 bytes OK
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.074074) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.077010) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.077031) EVENT_LOG_v1 {"time_micros": 1764403303077025, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.077056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3239059, prev total WAL file size 3239059, number of live WAL files 2.
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.078215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(2146KB)], [62(9800KB)]
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403303078270, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12233641, "oldest_snapshot_seqno": -1}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6162 keys, 9792702 bytes, temperature: kUnknown
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403303159363, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9792702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9748270, "index_size": 27938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 155849, "raw_average_key_size": 25, "raw_value_size": 9634061, "raw_average_value_size": 1563, "num_data_blocks": 1136, "num_entries": 6162, "num_filter_entries": 6162, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.159847) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9792702 bytes
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.161900) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.5 rd, 120.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(10.0) write-amplify(4.5) OK, records in: 7143, records dropped: 981 output_compression: NoCompression
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.161932) EVENT_LOG_v1 {"time_micros": 1764403303161918, "job": 34, "event": "compaction_finished", "compaction_time_micros": 81275, "compaction_time_cpu_micros": 28645, "output_level": 6, "num_output_files": 1, "total_output_size": 9792702, "num_input_records": 7143, "num_output_records": 6162, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403303162658, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403303165158, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.078118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.165238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.165245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.165247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.165248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:43 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:01:43.165250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:01:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2408313736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2408313736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:43 compute-0 ceph-mon[75050]: osdmap e313: 3 total, 3 up, 3 in
Nov 29 08:01:43 compute-0 ceph-mon[75050]: pgmap v1792: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 KiB/s wr, 65 op/s
Nov 29 08:01:43 compute-0 ceph-mon[75050]: osdmap e314: 3 total, 3 up, 3 in
Nov 29 08:01:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2408313736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2408313736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:44 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 KiB/s wr, 81 op/s
Nov 29 08:01:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:01:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3262022153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:01:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3262022153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:46 compute-0 ceph-mon[75050]: pgmap v1794: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 KiB/s wr, 81 op/s
Nov 29 08:01:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3262022153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:01:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3262022153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:01:46 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 125 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 29 08:01:47 compute-0 nova_compute[256729]: 2025-11-29 08:01:47.337 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:47 compute-0 nova_compute[256729]: 2025-11-29 08:01:47.354 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:47 compute-0 sshd-session[288044]: Connection closed by authenticating user root 143.14.121.41 port 40232 [preauth]
Nov 29 08:01:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Nov 29 08:01:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 08:01:47 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Nov 29 08:01:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Nov 29 08:01:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 08:01:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Nov 29 08:01:48 compute-0 ceph-mon[75050]: pgmap v1795: 305 pgs: 305 active+clean; 125 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Nov 29 08:01:48 compute-0 ceph-mon[75050]: osdmap e315: 3 total, 3 up, 3 in
Nov 29 08:01:48 compute-0 ceph-mon[75050]: osdmap e316: 3 total, 3 up, 3 in
Nov 29 08:01:48 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 3.5 MiB/s wr, 164 op/s
Nov 29 08:01:49 compute-0 ceph-mon[75050]: pgmap v1798: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 3.5 MiB/s wr, 164 op/s
Nov 29 08:01:50 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 2.7 MiB/s wr, 124 op/s
Nov 29 08:01:52 compute-0 ceph-mon[75050]: pgmap v1799: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 2.7 MiB/s wr, 124 op/s
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.341 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.357 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.444 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.444 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.464 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.562 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.563 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.573 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.573 256736 INFO nova.compute.claims [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:01:52 compute-0 sshd-session[288046]: Connection closed by authenticating user root 143.14.121.41 port 40248 [preauth]
Nov 29 08:01:52 compute-0 nova_compute[256729]: 2025-11-29 08:01:52.699 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:52 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Nov 29 08:01:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:01:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74705622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.360 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.368 256736 DEBUG nova.compute.provider_tree [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.393 256736 DEBUG nova.scheduler.client.report [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.423 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.424 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.482 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.483 256736 DEBUG nova.network.neutron [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.508 256736 INFO nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.584 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:01:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/74705622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.651 256736 INFO nova.virt.block_device [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Booting with volume snapshot b3a1f7da-916c-46b7-81b2-5cfb9138bb53 at /dev/vda
Nov 29 08:01:53 compute-0 nova_compute[256729]: 2025-11-29 08:01:53.993 256736 DEBUG nova.policy [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9664e420085d412aae898a6ec021b24f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb6854e99614af5b8df420841fde0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:01:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Nov 29 08:01:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 08:01:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Nov 29 08:01:54 compute-0 ceph-mon[75050]: pgmap v1800: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Nov 29 08:01:54 compute-0 nova_compute[256729]: 2025-11-29 08:01:54.745 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:54.745 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:01:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:54.747 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:01:54 compute-0 nova_compute[256729]: 2025-11-29 08:01:54.895 256736 DEBUG nova.network.neutron [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Successfully created port: b5aaef17-6224-4c2c-9b27-678fa0931cc8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:01:54 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 972 KiB/s wr, 63 op/s
Nov 29 08:01:54 compute-0 sudo[288073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:54 compute-0 sudo[288073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:54 compute-0 sudo[288073]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:55 compute-0 sudo[288098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:01:55 compute-0 sudo[288098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:55 compute-0 sudo[288098]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:55 compute-0 nova_compute[256729]: 2025-11-29 08:01:55.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:55 compute-0 sudo[288123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:55 compute-0 sudo[288123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:55 compute-0 sudo[288123]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:55 compute-0 sudo[288148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:01:55 compute-0 sudo[288148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:55 compute-0 ceph-mon[75050]: osdmap e317: 3 total, 3 up, 3 in
Nov 29 08:01:55 compute-0 sudo[288148]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:01:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:01:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:01:55 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:01:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:01:55 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:01:55 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev faa29297-b521-474a-b5b1-133b96399971 does not exist
Nov 29 08:01:55 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a5954372-9b3e-491e-9e27-50c85ea775d1 does not exist
Nov 29 08:01:55 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9a3e6a59-34d9-4929-9aed-b239f243e101 does not exist
Nov 29 08:01:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:01:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:01:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:01:55 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:01:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:01:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:01:55 compute-0 sudo[288204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:55 compute-0 sudo[288204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:55 compute-0 sudo[288204]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:55 compute-0 sudo[288229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:01:55 compute-0 sudo[288229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:55 compute-0 sudo[288229]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:55 compute-0 sudo[288254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:55 compute-0 sudo[288254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:55 compute-0 sudo[288254]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:56 compute-0 sudo[288279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:01:56 compute-0 sudo[288279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:56 compute-0 podman[288344]: 2025-11-29 08:01:56.409628891 +0000 UTC m=+0.043163884 container create a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_darwin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:01:56 compute-0 systemd[1]: Started libpod-conmon-a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7.scope.
Nov 29 08:01:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.475 256736 DEBUG nova.network.neutron [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Successfully updated port: b5aaef17-6224-4c2c-9b27-678fa0931cc8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:01:56 compute-0 podman[288344]: 2025-11-29 08:01:56.388806825 +0000 UTC m=+0.022341878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:56 compute-0 podman[288344]: 2025-11-29 08:01:56.490239923 +0000 UTC m=+0.123774966 container init a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.496 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "refresh_cache-a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.497 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquired lock "refresh_cache-a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.497 256736 DEBUG nova.network.neutron [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:01:56 compute-0 podman[288344]: 2025-11-29 08:01:56.501041251 +0000 UTC m=+0.134576294 container start a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:01:56 compute-0 podman[288344]: 2025-11-29 08:01:56.504433382 +0000 UTC m=+0.137968405 container attach a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:01:56 compute-0 systemd[1]: libpod-a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7.scope: Deactivated successfully.
Nov 29 08:01:56 compute-0 elegant_darwin[288360]: 167 167
Nov 29 08:01:56 compute-0 conmon[288360]: conmon a06bc585125cbc634c10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7.scope/container/memory.events
Nov 29 08:01:56 compute-0 podman[288365]: 2025-11-29 08:01:56.540924246 +0000 UTC m=+0.023956021 container died a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_darwin, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 08:01:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1916f0a43bd89cc9dbbb504982491fda2df844d7eb820f09802fb51da541f5a-merged.mount: Deactivated successfully.
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.578 256736 DEBUG nova.compute.manager [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received event network-changed-b5aaef17-6224-4c2c-9b27-678fa0931cc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.580 256736 DEBUG nova.compute.manager [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Refreshing instance network info cache due to event network-changed-b5aaef17-6224-4c2c-9b27-678fa0931cc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:01:56 compute-0 nova_compute[256729]: 2025-11-29 08:01:56.580 256736 DEBUG oslo_concurrency.lockutils [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:01:56 compute-0 podman[288365]: 2025-11-29 08:01:56.582211438 +0000 UTC m=+0.065243223 container remove a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:01:56 compute-0 systemd[1]: libpod-conmon-a06bc585125cbc634c1020ee6169115062c1ba1328efa5974f39373b2f30c3d7.scope: Deactivated successfully.
Nov 29 08:01:56 compute-0 ceph-mon[75050]: pgmap v1802: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 972 KiB/s wr, 63 op/s
Nov 29 08:01:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:01:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:01:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:01:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:01:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:01:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:01:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:56.749 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:01:56 compute-0 podman[288385]: 2025-11-29 08:01:56.795006528 +0000 UTC m=+0.076292707 container create 01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:01:56 compute-0 systemd[1]: Started libpod-conmon-01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056.scope.
Nov 29 08:01:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:56 compute-0 podman[288385]: 2025-11-29 08:01:56.768877381 +0000 UTC m=+0.050163630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b3547efed394d557d1a54d6e041f8f1e6444407b1d1d8887f9a108fdc1115e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b3547efed394d557d1a54d6e041f8f1e6444407b1d1d8887f9a108fdc1115e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b3547efed394d557d1a54d6e041f8f1e6444407b1d1d8887f9a108fdc1115e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b3547efed394d557d1a54d6e041f8f1e6444407b1d1d8887f9a108fdc1115e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b3547efed394d557d1a54d6e041f8f1e6444407b1d1d8887f9a108fdc1115e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:01:56 compute-0 podman[288385]: 2025-11-29 08:01:56.88644615 +0000 UTC m=+0.167732399 container init 01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:01:56 compute-0 podman[288385]: 2025-11-29 08:01:56.899917509 +0000 UTC m=+0.181203718 container start 01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:01:56 compute-0 podman[288385]: 2025-11-29 08:01:56.903798453 +0000 UTC m=+0.185084632 container attach 01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:01:56 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 787 KiB/s wr, 59 op/s
Nov 29 08:01:57 compute-0 sshd-session[288068]: Connection closed by authenticating user root 143.14.121.41 port 35262 [preauth]
Nov 29 08:01:57 compute-0 nova_compute[256729]: 2025-11-29 08:01:57.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:57 compute-0 nova_compute[256729]: 2025-11-29 08:01:57.344 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:57 compute-0 nova_compute[256729]: 2025-11-29 08:01:57.360 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:01:57 compute-0 nova_compute[256729]: 2025-11-29 08:01:57.381 256736 DEBUG nova.network.neutron [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:01:57 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:01:58 compute-0 lucid_benz[288402]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:01:58 compute-0 lucid_benz[288402]: --> relative data size: 1.0
Nov 29 08:01:58 compute-0 lucid_benz[288402]: --> All data devices are unavailable
Nov 29 08:01:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:01:58 compute-0 systemd[1]: libpod-01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056.scope: Deactivated successfully.
Nov 29 08:01:58 compute-0 systemd[1]: libpod-01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056.scope: Consumed 1.106s CPU time.
Nov 29 08:01:58 compute-0 podman[288385]: 2025-11-29 08:01:58.073783406 +0000 UTC m=+1.355069615 container died 01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 08:01:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-98b3547efed394d557d1a54d6e041f8f1e6444407b1d1d8887f9a108fdc1115e-merged.mount: Deactivated successfully.
Nov 29 08:01:58 compute-0 podman[288385]: 2025-11-29 08:01:58.153649427 +0000 UTC m=+1.434935596 container remove 01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 08:01:58 compute-0 systemd[1]: libpod-conmon-01a615abfc11f3351ebd5ecc600b309219e6b4d282472d5ab1eb949e070a0056.scope: Deactivated successfully.
Nov 29 08:01:58 compute-0 sudo[288279]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:58 compute-0 sudo[288444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:58 compute-0 sudo[288444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:58 compute-0 sudo[288444]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:58 compute-0 sudo[288469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:01:58 compute-0 sudo[288469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:58 compute-0 sudo[288469]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:58 compute-0 sudo[288494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:01:58 compute-0 sudo[288494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:58 compute-0 sudo[288494]: pam_unix(sudo:session): session closed for user root
Nov 29 08:01:58 compute-0 sudo[288519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:01:58 compute-0 sudo[288519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.486 256736 DEBUG nova.network.neutron [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Updating instance_info_cache with network_info: [{"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.553 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Releasing lock "refresh_cache-a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.553 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Instance network_info: |[{"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.553 256736 DEBUG oslo_concurrency.lockutils [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.554 256736 DEBUG nova.network.neutron [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Refreshing network info cache for port b5aaef17-6224-4c2c-9b27-678fa0931cc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.680 256736 DEBUG os_brick.utils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.682 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.702 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.702 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb21262-5ce9-49ea-a2ef-f2fc269d6a38]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.704 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:58 compute-0 ceph-mon[75050]: pgmap v1803: 305 pgs: 305 active+clean; 134 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 787 KiB/s wr, 59 op/s
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.717 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.717 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[548074c0-020c-4542-8dee-4da659d4f23c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.719 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.733 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.733 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[2731c8f3-24a2-422b-9a5e-96f739cfcf67]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.735 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4a321b-6e69-4545-93b6-afe0e6d8b4cf]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.736 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.769 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.774 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.774 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.774 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.775 256736 DEBUG os_brick.utils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] <== get_connector_properties: return (93ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:01:58 compute-0 nova_compute[256729]: 2025-11-29 08:01:58.776 256736 DEBUG nova.virt.block_device [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Updating existing volume attachment record: f9577627-b5da-4705-ab4e-6378fc6bf125 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:01:58 compute-0 podman[288589]: 2025-11-29 08:01:58.906741091 +0000 UTC m=+0.074856789 container create 111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:01:58 compute-0 systemd[1]: Started libpod-conmon-111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7.scope.
Nov 29 08:01:58 compute-0 podman[288589]: 2025-11-29 08:01:58.87709266 +0000 UTC m=+0.045208398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:01:58 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.4 KiB/s wr, 28 op/s
Nov 29 08:01:59 compute-0 podman[288589]: 2025-11-29 08:01:59.005857148 +0000 UTC m=+0.173972816 container init 111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 08:01:59 compute-0 podman[288589]: 2025-11-29 08:01:59.020012956 +0000 UTC m=+0.188128654 container start 111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 08:01:59 compute-0 podman[288589]: 2025-11-29 08:01:59.024193307 +0000 UTC m=+0.192308975 container attach 111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:01:59 compute-0 elegant_golick[288605]: 167 167
Nov 29 08:01:59 compute-0 systemd[1]: libpod-111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7.scope: Deactivated successfully.
Nov 29 08:01:59 compute-0 podman[288589]: 2025-11-29 08:01:59.028216425 +0000 UTC m=+0.196332093 container died 111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 08:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9cca755168b759c792135036c5b8e7a4e05d69ba212187f82709ae1706ff80e-merged.mount: Deactivated successfully.
Nov 29 08:01:59 compute-0 podman[288589]: 2025-11-29 08:01:59.07751682 +0000 UTC m=+0.245632478 container remove 111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 08:01:59 compute-0 systemd[1]: libpod-conmon-111bcad9bba4532fb7ea715ec1bf9c66b23105e41b29066f44e94bee8bf2cbf7.scope: Deactivated successfully.
Nov 29 08:01:59 compute-0 nova_compute[256729]: 2025-11-29 08:01:59.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:01:59 compute-0 podman[288630]: 2025-11-29 08:01:59.310624503 +0000 UTC m=+0.046532733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:01:59 compute-0 podman[288630]: 2025-11-29 08:01:59.404186961 +0000 UTC m=+0.140095111 container create 7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:01:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:01:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2283327629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:01:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Nov 29 08:01:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:59.781 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:01:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:59.781 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:01:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:01:59.782 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.023 256736 DEBUG nova.network.neutron [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Updated VIF entry in instance network info cache for port b5aaef17-6224-4c2c-9b27-678fa0931cc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.024 256736 DEBUG nova.network.neutron [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Updating instance_info_cache with network_info: [{"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.092 256736 DEBUG oslo_concurrency.lockutils [req-b137ec8c-f344-454c-abc3-86ee520d2431 req-5d160289-eb9b-40fb-9e44-e7d7ce631b36 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:00 compute-0 systemd[1]: Started libpod-conmon-7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb.scope.
Nov 29 08:02:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559134b3797d989a86a321ef789cb982ac663e319c808f2316c3528e9329efea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559134b3797d989a86a321ef789cb982ac663e319c808f2316c3528e9329efea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559134b3797d989a86a321ef789cb982ac663e319c808f2316c3528e9329efea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559134b3797d989a86a321ef789cb982ac663e319c808f2316c3528e9329efea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 08:02:00 compute-0 podman[288630]: 2025-11-29 08:02:00.229457971 +0000 UTC m=+0.965366201 container init 7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:02:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Nov 29 08:02:00 compute-0 podman[288630]: 2025-11-29 08:02:00.247266117 +0000 UTC m=+0.983174297 container start 7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.248660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403320248727, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 451, "num_deletes": 251, "total_data_size": 341184, "memory_usage": 350704, "flush_reason": "Manual Compaction"}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 29 08:02:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2283327629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403320254934, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 337763, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31577, "largest_seqno": 32027, "table_properties": {"data_size": 335089, "index_size": 710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6648, "raw_average_key_size": 19, "raw_value_size": 329637, "raw_average_value_size": 958, "num_data_blocks": 31, "num_entries": 344, "num_filter_entries": 344, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403303, "oldest_key_time": 1764403303, "file_creation_time": 1764403320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 6365 microseconds, and 1939 cpu microseconds.
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:02:00 compute-0 podman[288630]: 2025-11-29 08:02:00.256792421 +0000 UTC m=+0.992700631 container attach 7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.255027) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 337763 bytes OK
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.255053) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.257287) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.257300) EVENT_LOG_v1 {"time_micros": 1764403320257296, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.257310) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 338400, prev total WAL file size 338400, number of live WAL files 2.
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.257702) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(329KB)], [65(9563KB)]
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403320257746, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10130465, "oldest_snapshot_seqno": -1}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5990 keys, 8428552 bytes, temperature: kUnknown
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403320332714, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8428552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8386766, "index_size": 25740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 153047, "raw_average_key_size": 25, "raw_value_size": 8277014, "raw_average_value_size": 1381, "num_data_blocks": 1033, "num_entries": 5990, "num_filter_entries": 5990, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.338192) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8428552 bytes
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.339628) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.4 rd, 105.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(54.9) write-amplify(25.0) OK, records in: 6506, records dropped: 516 output_compression: NoCompression
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.339658) EVENT_LOG_v1 {"time_micros": 1764403320339645, "job": 36, "event": "compaction_finished", "compaction_time_micros": 80174, "compaction_time_cpu_micros": 24697, "output_level": 6, "num_output_files": 1, "total_output_size": 8428552, "num_input_records": 6506, "num_output_records": 5990, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403320339900, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403320343035, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.257616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.343105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.343112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.343115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.343118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:02:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:02:00.343121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.461 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.463 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.464 256736 INFO nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Creating image(s)
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.464 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.464 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Ensure instance console log exists: /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.465 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.465 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.465 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.469 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Start _get_guest_xml network_info=[{"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-15dc2077-785b-42ed-a481-8ec5b0a20a91', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '15dc2077-785b-42ed-a481-8ec5b0a20a91', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51', 'attached_at': '', 'detached_at': '', 'volume_id': '15dc2077-785b-42ed-a481-8ec5b0a20a91', 'serial': '15dc2077-785b-42ed-a481-8ec5b0a20a91'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': 'f9577627-b5da-4705-ab4e-6378fc6bf125', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.476 256736 WARNING nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.482 256736 DEBUG nova.virt.libvirt.host [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.484 256736 DEBUG nova.virt.libvirt.host [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.487 256736 DEBUG nova.virt.libvirt.host [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.488 256736 DEBUG nova.virt.libvirt.host [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.489 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.489 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.490 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.490 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.490 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.490 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.491 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.491 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.491 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.491 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.492 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.492 256736 DEBUG nova.virt.hardware [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
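The hardware.py lines above are Nova expanding the (empty) flavor and image topology constraints into candidate guest CPU layouts for the 1-vCPU m1.nano flavor. A minimal standalone sketch of that enumeration, assuming the simple rule that sockets * cores * threads must equal the vCPU count within the logged 65536 limits — an illustrative approximation, not Nova's actual implementation:

    # Sketch of the topology enumeration logged above: for N vCPUs, yield
    # every (sockets, cores, threads) whose product is N, within limits.
    # Illustrative approximation of nova/virt/hardware.py, not its code.
    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield VirtCPUTopology(sockets, cores, threads)

    print(list(possible_topologies(1)))
    # -> [VirtCPUTopology(sockets=1, cores=1, threads=1)], matching the
    #    single possible topology reported in the log.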
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.521 256736 DEBUG nova.storage.rbd_utils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.525 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570698125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:00 compute-0 nova_compute[256729]: 2025-11-29 08:02:00.946 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
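Nova shells out to the ceph CLI through oslo_concurrency.processutils here (0.421 s round trip, answered by the local mon per the audit line above). The same call reduced to the standard library, with the --id and --conf values copied from the log line:

    # Standalone equivalent of the `ceph mon dump` subprocess call above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    mon_map = json.loads(out)
    # The monmap JSON carries an epoch, the cluster fsid, and a 'mons' list.
    print(mon_map["epoch"], mon_map["fsid"])
    for mon in mon_map.get("mons", []):
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))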
Nov 29 08:02:00 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.9 KiB/s wr, 35 op/s
Nov 29 08:02:01 compute-0 gallant_kalam[288647]: {
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:     "0": [
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:         {
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "devices": [
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "/dev/loop3"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             ],
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_name": "ceph_lv0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_size": "21470642176",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "name": "ceph_lv0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "tags": {
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cluster_name": "ceph",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.crush_device_class": "",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.encrypted": "0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osd_id": "0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.type": "block",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.vdo": "0"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             },
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "type": "block",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "vg_name": "ceph_vg0"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:         }
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:     ],
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:     "1": [
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:         {
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "devices": [
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "/dev/loop4"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             ],
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_name": "ceph_lv1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_size": "21470642176",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "name": "ceph_lv1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "tags": {
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cluster_name": "ceph",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.crush_device_class": "",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.encrypted": "0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osd_id": "1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.type": "block",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.vdo": "0"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             },
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "type": "block",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "vg_name": "ceph_vg1"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:         }
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:     ],
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:     "2": [
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:         {
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "devices": [
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "/dev/loop5"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             ],
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_name": "ceph_lv2",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_size": "21470642176",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "name": "ceph_lv2",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "tags": {
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.cluster_name": "ceph",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.crush_device_class": "",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.encrypted": "0",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osd_id": "2",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.type": "block",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:                 "ceph.vdo": "0"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             },
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "type": "block",
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:             "vg_name": "ceph_vg2"
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:         }
Nov 29 08:02:01 compute-0 gallant_kalam[288647]:     ]
Nov 29 08:02:01 compute-0 gallant_kalam[288647]: }
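The JSON block above is ceph-volume's LVM inventory, printed by the one-shot gallant_kalam container and keyed by OSD id (0, 1, 2, each a loop-device-backed LV in this deployment). A short sketch that reduces it to an OSD-to-device summary using the fields shown; the filename ceph_volume.json is hypothetical and assumes the blob was captured to a file:

    # Sketch: map each OSD id in the ceph-volume JSON above to its backing
    # device, LV path, and OSD fsid. The input filename is hypothetical.
    import json

    with open("ceph_volume.json") as f:
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: devices={lv['devices']} lv={lv['lv_path']} "
                  f"fsid={tags['ceph.osd_fsid']}")
    # osd.0: devices=['/dev/loop3'] lv=/dev/ceph_vg0/ceph_lv0 fsid=8cd0a453-4c8d-429b-b547-2404357db43c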
Nov 29 08:02:01 compute-0 systemd[1]: libpod-7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb.scope: Deactivated successfully.
Nov 29 08:02:01 compute-0 podman[288630]: 2025-11-29 08:02:01.069224659 +0000 UTC m=+1.805132819 container died 7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.101 256736 DEBUG nova.virt.libvirt.vif [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1088579147',display_name='tempest-TestVolumeBootPattern-server-1088579147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1088579147',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-fz6zcm46',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:53Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.102 256736 DEBUG nova.network.os_vif_util [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.103 256736 DEBUG nova.network.os_vif_util [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2b:8d,bridge_name='br-int',has_traffic_filtering=True,id=b5aaef17-6224-4c2c-9b27-678fa0931cc8,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5aaef17-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.104 256736 DEBUG nova.objects.instance [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'pci_devices' on Instance uuid a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-559134b3797d989a86a321ef789cb982ac663e319c808f2316c3528e9329efea-merged.mount: Deactivated successfully.
Nov 29 08:02:01 compute-0 podman[288630]: 2025-11-29 08:02:01.134638435 +0000 UTC m=+1.870546625 container remove 7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:01 compute-0 systemd[1]: libpod-conmon-7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb.scope: Deactivated successfully.
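The systemd and podman lines above (scope deactivated, container died, overlay unmounted, container removed, conmon scope gone) are the normal teardown of that one-shot cephadm container. The same lifecycle can be replayed after the fact with podman events; the container id is the one from the log, and the flags are per podman-events(1):

    # Sketch: replay the lifecycle events of the container torn down above.
    # --stream=false makes podman print the known events and exit.
    import subprocess

    cid = "7b8dd924dcee386ba3427a1814c3821637bb316b1813c8e4363cf80f65b6d1bb"
    subprocess.run(
        ["podman", "events", "--since", "10m", "--stream=false",
         "--filter", f"container={cid}"],
        check=True,
    )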
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.153 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <uuid>a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51</uuid>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <name>instance-00000010</name>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBootPattern-server-1088579147</nova:name>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:02:00</nova:creationTime>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:user uuid="9664e420085d412aae898a6ec021b24f">tempest-TestVolumeBootPattern-776329285-project-member</nova:user>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:project uuid="dfb6854e99614af5b8df420841fde0db">tempest-TestVolumeBootPattern-776329285</nova:project>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <nova:port uuid="b5aaef17-6224-4c2c-9b27-678fa0931cc8">
Nov 29 08:02:01 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <system>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <entry name="serial">a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51</entry>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <entry name="uuid">a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51</entry>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </system>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <os>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   </os>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <features>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   </features>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config">
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       </source>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-15dc2077-785b-42ed-a481-8ec5b0a20a91">
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       </source>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:02:01 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <serial>15dc2077-785b-42ed-a481-8ec5b0a20a91</serial>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:62:2b:8d"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <target dev="tapb5aaef17-62"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/console.log" append="off"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <video>
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </video>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:02:01 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:02:01 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:02:01 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:02:01 compute-0 nova_compute[256729]: </domain>
Nov 29 08:02:01 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
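That closes the guest XML Nova generated for instance a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51: a q35 machine with a host-model CPU, an RBD-backed SATA cdrom reserved for the config drive, and the RBD boot volume on virtio. A quick sketch for pulling the disk topology back out of a capture like this, assuming the <domain> element was saved to a (hypothetical) domain.xml:

    # Sketch: list the disks of the domain XML above. The filename is
    # hypothetical; the element paths match the XML as logged.
    import xml.etree.ElementTree as ET

    dom = ET.parse("domain.xml").getroot()
    for disk in dom.findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        print(disk.get("device"), target.get("dev"), target.get("bus"),
              source.get("protocol"), source.get("name"))
    # Prints the SATA config-drive cdrom (vms/..._disk.config) and the
    # virtio boot volume (volumes/volume-...) shown above.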
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.154 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Preparing to wait for external event network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.154 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.154 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.155 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.155 256736 DEBUG nova.virt.libvirt.vif [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1088579147',display_name='tempest-TestVolumeBootPattern-server-1088579147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1088579147',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-fz6zcm46',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:53Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.156 256736 DEBUG nova.network.os_vif_util [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.156 256736 DEBUG nova.network.os_vif_util [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2b:8d,bridge_name='br-int',has_traffic_filtering=True,id=b5aaef17-6224-4c2c-9b27-678fa0931cc8,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5aaef17-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.157 256736 DEBUG os_vif [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2b:8d,bridge_name='br-int',has_traffic_filtering=True,id=b5aaef17-6224-4c2c-9b27-678fa0931cc8,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5aaef17-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.158 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.158 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.159 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.163 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.163 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5aaef17-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.164 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5aaef17-62, col_values=(('external_ids', {'iface-id': 'b5aaef17-6224-4c2c-9b27-678fa0931cc8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:2b:8d', 'vm-uuid': 'a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.166 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:01 compute-0 NetworkManager[48962]: <info>  [1764403321.1676] manager: (tapb5aaef17-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.169 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:01 compute-0 sudo[288519]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.179 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.180 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.180 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.180 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:02:01 compute-0 podman[288698]: 2025-11-29 08:02:01.181011123 +0000 UTC m=+0.083160741 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.180 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:01 compute-0 podman[288700]: 2025-11-29 08:02:01.195357406 +0000 UTC m=+0.086473550 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.203 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.205 256736 INFO os_vif [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2b:8d,bridge_name='br-int',has_traffic_filtering=True,id=b5aaef17-6224-4c2c-9b27-678fa0931cc8,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5aaef17-62')
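The plug succeeded: the two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) added tapb5aaef17-62 to br-int and stamped it with the Neutron port's external_ids, which is what OVN later matches on. For reference, the same result expressed with the ovs-vsctl CLI, values copied from the log; os-vif itself speaks OVSDB directly rather than shelling out:

    # Sketch: ovs-vsctl equivalent of the AddPortCommand + DbSetCommand
    # transactions above. Values are the ones from this log.
    import subprocess

    port = "tapb5aaef17-62"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=b5aaef17-6224-4c2c-9b27-678fa0931cc8",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:62:2b:8d",
         "external_ids:vm-uuid=a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51"],
        check=True,
    )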
Nov 29 08:02:01 compute-0 podman[288697]: 2025-11-29 08:02:01.207411967 +0000 UTC m=+0.112397081 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125)
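The health_status=healthy entries for multipathd, ovn_metadata_agent, and ovn_controller come from podman's healthcheck timers running each container's configured test command (the mounted /openstack/healthcheck script visible in the config_data). The same check can be triggered by hand; exit status 0 means healthy:

    # Sketch: trigger the same healthchecks podman runs on its timers for
    # the three containers reporting healthy above.
    import subprocess

    for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")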
Nov 29 08:02:01 compute-0 sudo[288767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:01 compute-0 sudo[288767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:01 compute-0 sudo[288767]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:01 compute-0 ceph-mon[75050]: pgmap v1804: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.4 KiB/s wr, 28 op/s
Nov 29 08:02:01 compute-0 ceph-mon[75050]: osdmap e318: 3 total, 3 up, 3 in
Nov 29 08:02:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2570698125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:01 compute-0 sudo[288797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:02:01 compute-0 sudo[288797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:01 compute-0 sudo[288797]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:01 compute-0 sudo[288841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:01 compute-0 sudo[288841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:01 compute-0 sudo[288841]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.403 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.403 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.404 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No VIF found with MAC fa:16:3e:62:2b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.404 256736 INFO nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Using config drive
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.431 256736 DEBUG nova.storage.rbd_utils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
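This is the second "does not exist" probe for the _disk.config image: Nova checks the vms pool for a stale config-drive image before writing a fresh one. A standalone sketch of that probe using the python rados/rbd bindings, with the pool name ('vms') and client name ('openstack') taken from the log:

    # Sketch of the existence probe behind the "rbd image ... does not
    # exist" lines: open the vms pool as client.openstack, try the image.
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        name = "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config"
        try:
            with rbd.Image(ioctx, name):
                print("exists")
        except rbd.ImageNotFound:
            print("does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()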
Nov 29 08:02:01 compute-0 sudo[288866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:02:01 compute-0 sudo[288866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1792981457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.598 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
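Meanwhile the periodic resource audit (req-27de958c) measures Ceph capacity with ceph df, which took 0.417 s here. A sketch that runs the same command and reports the cluster-wide totals the tracker cares about; the 'stats' keys are standard ceph df JSON:

    # Sketch: run the same `ceph df` as the resource audit above and
    # report cluster-wide capacity from the standard 'stats' section.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total={stats['total_bytes'] / gib:.1f} GiB "
          f"avail={stats['total_avail_bytes'] / gib:.1f} GiB")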
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.729 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.729 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:02:01 compute-0 podman[288952]: 2025-11-29 08:02:01.861470938 +0000 UTC m=+0.043083401 container create a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.882 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.883 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4469MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.883 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:01 compute-0 nova_compute[256729]: 2025-11-29 08:02:01.883 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:01 compute-0 systemd[1]: Started libpod-conmon-a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1.scope.
Nov 29 08:02:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:01 compute-0 podman[288952]: 2025-11-29 08:02:01.843055726 +0000 UTC m=+0.024668209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:01 compute-0 podman[288952]: 2025-11-29 08:02:01.942225544 +0000 UTC m=+0.123838037 container init a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:02:01 compute-0 podman[288952]: 2025-11-29 08:02:01.953287339 +0000 UTC m=+0.134899812 container start a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:02:01 compute-0 podman[288952]: 2025-11-29 08:02:01.957734997 +0000 UTC m=+0.139347490 container attach a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 08:02:01 compute-0 affectionate_ganguly[288968]: 167 167
Nov 29 08:02:01 compute-0 systemd[1]: libpod-a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1.scope: Deactivated successfully.
Nov 29 08:02:01 compute-0 conmon[288968]: conmon a9ad442a007e1f76bf15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1.scope/container/memory.events
Nov 29 08:02:01 compute-0 podman[288952]: 2025-11-29 08:02:01.959886335 +0000 UTC m=+0.141498798 container died a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cb471a08ff9e43b004e71e074ff313bf13c4e69e392a8e95aa1a5052e373946-merged.mount: Deactivated successfully.
Nov 29 08:02:01 compute-0 podman[288952]: 2025-11-29 08:02:01.998013563 +0000 UTC m=+0.179626026 container remove a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:02:02 compute-0 systemd[1]: libpod-conmon-a9ad442a007e1f76bf15030fa1acb071c49a94a3a13fffde2952c78b1a5a10f1.scope: Deactivated successfully.
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.049 256736 INFO nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Creating config drive at /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/disk.config
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.054 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgzxh29h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.074 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.074 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.075 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.096 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.119 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.119 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
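The inventory payload above is what bounds scheduling: Placement treats the usable amount of each resource class as (total - reserved) * allocation_ratio. Worked out for the numbers in this update:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2

So the 8 physical vcpus advertise 32 schedulable VCPUs, while the reserved 512 MB and 1 GB come off the top of memory and disk before their ratios apply.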
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.144 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.168 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.180 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgzxh29h" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.208 256736 DEBUG nova.storage.rbd_utils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.211 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/disk.config a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:02 compute-0 podman[288996]: 2025-11-29 08:02:02.157341757 +0000 UTC m=+0.023417317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.267 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:02 compute-0 nova_compute[256729]: 2025-11-29 08:02:02.362 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:02 compute-0 sshd-session[288407]: Connection closed by authenticating user root 143.14.121.41 port 35272 [preauth]
Nov 29 08:02:02 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Nov 29 08:02:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:03 compute-0 podman[288996]: 2025-11-29 08:02:03.507439928 +0000 UTC m=+1.373515498 container create 33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:02:03 compute-0 ceph-mon[75050]: pgmap v1806: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.9 KiB/s wr, 35 op/s
Nov 29 08:02:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1792981457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:03 compute-0 systemd[1]: Started libpod-conmon-33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59.scope.
Nov 29 08:02:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5619ac538d64333fc34bd3d16dbc0ee333ad4301e7a8ae7d518577bf47c606d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5619ac538d64333fc34bd3d16dbc0ee333ad4301e7a8ae7d518577bf47c606d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5619ac538d64333fc34bd3d16dbc0ee333ad4301e7a8ae7d518577bf47c606d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5619ac538d64333fc34bd3d16dbc0ee333ad4301e7a8ae7d518577bf47c606d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:03 compute-0 podman[288996]: 2025-11-29 08:02:03.620365772 +0000 UTC m=+1.486441402 container init 33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:02:03 compute-0 podman[288996]: 2025-11-29 08:02:03.637296614 +0000 UTC m=+1.503372184 container start 33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:02:03 compute-0 podman[288996]: 2025-11-29 08:02:03.675104383 +0000 UTC m=+1.541179963 container attach 33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:02:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397564223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:03 compute-0 nova_compute[256729]: 2025-11-29 08:02:03.913 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:03 compute-0 nova_compute[256729]: 2025-11-29 08:02:03.923 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:03 compute-0 nova_compute[256729]: 2025-11-29 08:02:03.946 256736 DEBUG oslo_concurrency.processutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/disk.config a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:03 compute-0 nova_compute[256729]: 2025-11-29 08:02:03.947 256736 INFO nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Deleting local config drive /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51/disk.config because it was imported into RBD.
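The three steps just logged (mkisofs the config drive, rbd import it, delete the local copy) are the whole config-drive path for an RBD-backed instance. A condensed sketch of the import-and-cleanup tail using the exact arguments from the log:

    import os
    import subprocess

    base = "/var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51"
    iso = os.path.join(base, "disk.config")

    # Copy the locally built ISO into the 'vms' pool as a format-2 image,
    # then drop the local file, mirroring the driver's cleanup above.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", iso,
         "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    os.unlink(iso)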
Nov 29 08:02:04 compute-0 kernel: tapb5aaef17-62: entered promiscuous mode
Nov 29 08:02:04 compute-0 NetworkManager[48962]: <info>  [1764403324.0160] manager: (tapb5aaef17-62): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 08:02:04 compute-0 ovn_controller[153383]: 2025-11-29T08:02:04Z|00164|binding|INFO|Claiming lport b5aaef17-6224-4c2c-9b27-678fa0931cc8 for this chassis.
Nov 29 08:02:04 compute-0 ovn_controller[153383]: 2025-11-29T08:02:04Z|00165|binding|INFO|b5aaef17-6224-4c2c-9b27-678fa0931cc8: Claiming fa:16:3e:62:2b:8d 10.100.0.10
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.018 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 ovn_controller[153383]: 2025-11-29T08:02:04Z|00166|binding|INFO|Setting lport b5aaef17-6224-4c2c-9b27-678fa0931cc8 ovn-installed in OVS
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.056 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.060 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 systemd-machined[217781]: New machine qemu-16-instance-00000010.
Nov 29 08:02:04 compute-0 systemd-udevd[289093]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:02:04 compute-0 NetworkManager[48962]: <info>  [1764403324.0826] device (tapb5aaef17-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:02:04 compute-0 NetworkManager[48962]: <info>  [1764403324.0836] device (tapb5aaef17-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:02:04 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Nov 29 08:02:04 compute-0 ovn_controller[153383]: 2025-11-29T08:02:04Z|00167|binding|INFO|Setting lport b5aaef17-6224-4c2c-9b27-678fa0931cc8 up in Southbound
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.088 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2b:8d 10.100.0.10'], port_security=['fa:16:3e:62:2b:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9afb6ce-053d-473f-aaad-13f25a9ecb58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=b5aaef17-6224-4c2c-9b27-678fa0931cc8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.090 163655 INFO neutron.agent.ovn.metadata.agent [-] Port b5aaef17-6224-4c2c-9b27-678fa0931cc8 in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 bound to our chassis
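A quick way to confirm the claim the controller just logged is to read the Port_Binding row back from the OVN Southbound database; a sketch, assuming ovn-sbctl on this host can reach the SB DB:

    import subprocess

    # After the claim, 'chassis' and 'up' in this row should point at
    # compute-0's chassis record.
    print(subprocess.check_output(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=b5aaef17-6224-4c2c-9b27-678fa0931cc8"]).decode())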
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.090 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.093 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.108 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[99ff185b-52ab-4b01-861c-84d41d5824c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.109 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d9c390c-31 in ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.111 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d9c390c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.111 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff94494-329e-45ea-bb82-3a7b0cc090e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.112 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[72c27cf9-f271-4e23-9124-f1c2460fd4c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.126 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec65b8b-a756-468e-9599-fd3f73d7e8a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.146 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3536bfae-75fb-49d2-b26a-41ef959f6571]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.176 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[85a7f6f6-e4a0-4ec4-8eb8-57ac750e4460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.182 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e74adc7d-9ade-4fc9-ba0c-1b752a36937a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 NetworkManager[48962]: <info>  [1764403324.1835] manager: (tap2d9c390c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.199 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.200 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.213 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7bedf7-1f03-4bdb-9eef-670d3743b765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.217 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f28504-1b1a-4340-83c1-30b2a8a4f73e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 NetworkManager[48962]: <info>  [1764403324.2385] device (tap2d9c390c-30): carrier: link connected
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.242 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[74ccfa71-37dc-4458-a6b3-0cac8729a1ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.256 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[78ae3fd6-9359-4af9-898c-fd1028b093fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563930, 'reachable_time': 21289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289126, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.269 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9fbd75-9ad9-422d-93d3-77230f1e6609]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:2407'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563930, 'tstamp': 563930}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289127, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.281 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc4e558-b4d7-4598-81e9-a3db1efb0bfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563930, 'reachable_time': 21289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289128, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.310 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e81b2726-d63a-4650-aa50-b6c57f3be0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.377 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab58a26-143b-4837-927b-6efa9206db9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.380 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.380 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.381 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 kernel: tap2d9c390c-30: entered promiscuous mode
Nov 29 08:02:04 compute-0 NetworkManager[48962]: <info>  [1764403324.3840] manager: (tap2d9c390c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.383 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.386 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:04 compute-0 ovn_controller[153383]: 2025-11-29T08:02:04Z|00168|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.389 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.390 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5ede6a09-e634-4757-862c-34ee9a0dae82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.391 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:02:04 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:04.391 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'env', 'PROCESS_TAG=haproxy-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d9c390c-362a-41a5-93b0-23344eb99ae5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
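Once that haproxy is running, it listens on the link-local metadata address inside the ovnmeta namespace, relays to the UNIX socket named in the 'server metadata' line, and stamps the X-OVN-Network-ID header from the config above. A smoke-test sketch from the host (needs root for the netns exec and curl installed; the upstream metadata service may still reject a request lacking instance-identifying headers, but any HTTP response shows the proxy is up):

    import subprocess

    ns = "ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5"
    # 169.254.169.254:80 is the bind address from the generated config.
    print(subprocess.check_output(
        ["ip", "netns", "exec", ns, "curl", "-s", "-i",
         "http://169.254.169.254/openstack/latest/meta_data.json"]).decode())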
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.404 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]: {
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "osd_id": 2,
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "type": "bluestore"
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:     },
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "osd_id": 1,
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "type": "bluestore"
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:     },
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "osd_id": 0,
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:         "type": "bluestore"
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]:     }
Nov 29 08:02:04 compute-0 romantic_dhawan[289069]: }
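The JSON block romantic_dhawan just printed is ceph-volume's raw-device inventory, keyed by OSD uuid. A sketch that re-runs the same query through cephadm (same fsid as the sudo invocation earlier; assumes a cephadm entry point on PATH, whereas the log drove a copy under /var/lib/ceph via sudo) and flattens it to osd_id, device, and type:

    import json
    import subprocess

    out = subprocess.check_output(
        ["cephadm", "ceph-volume",
         "--fsid", "14ff1f30-5059-58f1-9a23-69871bb275a1",
         "--", "raw", "list", "--format", "json"])
    # Values carry the backing LV, OSD id, and store type, matching the
    # three bluestore OSDs listed above.
    for osd in sorted(json.loads(out).values(), key=lambda o: o["osd_id"]):
        print(osd["osd_id"], osd["device"], osd["type"])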
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.663 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403324.6626256, a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.663 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] VM Started (Lifecycle Event)
Nov 29 08:02:04 compute-0 systemd[1]: libpod-33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59.scope: Deactivated successfully.
Nov 29 08:02:04 compute-0 systemd[1]: libpod-33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59.scope: Consumed 1.027s CPU time.
Nov 29 08:02:04 compute-0 conmon[289069]: conmon 33607c06b488b5907cd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59.scope/container/memory.events
Nov 29 08:02:04 compute-0 podman[288996]: 2025-11-29 08:02:04.680293756 +0000 UTC m=+2.546369306 container died 33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:02:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Nov 29 08:02:04 compute-0 ceph-mon[75050]: pgmap v1807: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Nov 29 08:02:04 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/397564223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.706 256736 DEBUG nova.compute.manager [req-b0b9b50f-218b-4085-ac4d-a4102ee2e798 req-a54d5143-8cec-426e-94dd-ac88fb5f5325 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received event network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.706 256736 DEBUG oslo_concurrency.lockutils [req-b0b9b50f-218b-4085-ac4d-a4102ee2e798 req-a54d5143-8cec-426e-94dd-ac88fb5f5325 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.707 256736 DEBUG oslo_concurrency.lockutils [req-b0b9b50f-218b-4085-ac4d-a4102ee2e798 req-a54d5143-8cec-426e-94dd-ac88fb5f5325 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.707 256736 DEBUG oslo_concurrency.lockutils [req-b0b9b50f-218b-4085-ac4d-a4102ee2e798 req-a54d5143-8cec-426e-94dd-ac88fb5f5325 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.707 256736 DEBUG nova.compute.manager [req-b0b9b50f-218b-4085-ac4d-a4102ee2e798 req-a54d5143-8cec-426e-94dd-ac88fb5f5325 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Processing event network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.708 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.720 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.723 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.728 256736 INFO nova.virt.libvirt.driver [-] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Instance spawned successfully.
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.728 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.731 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 08:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5619ac538d64333fc34bd3d16dbc0ee333ad4301e7a8ae7d518577bf47c606d6-merged.mount: Deactivated successfully.
Nov 29 08:02:04 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.880 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.880 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403324.6636145, a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.881 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] VM Paused (Lifecycle Event)
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.885 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.885 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.885 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.886 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.886 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.886 256736 DEBUG nova.virt.libvirt.driver [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.927 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.931 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403324.7192895, a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.931 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] VM Resumed (Lifecycle Event)
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.965 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.968 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:04 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Nov 29 08:02:04 compute-0 nova_compute[256729]: 2025-11-29 08:02:04.994 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.004 256736 INFO nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Took 4.54 seconds to spawn the instance on the hypervisor.
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.004 256736 DEBUG nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:05 compute-0 podman[288996]: 2025-11-29 08:02:05.009089924 +0000 UTC m=+2.875165464 container remove 33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:02:05 compute-0 systemd[1]: libpod-conmon-33607c06b488b5907cd4facffec862c71c330d029ac36d65a52586a8ff397e59.scope: Deactivated successfully.
Nov 29 08:02:05 compute-0 sudo[288866]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:02:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:02:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:02:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev fafe26d8-984e-41f2-b0ce-56b4467622b2 does not exist
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 807a15a3-f9a5-4766-a704-965d43f0ef3e does not exist
Nov 29 08:02:05 compute-0 sudo[289249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:02:05 compute-0 sudo[289249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:05 compute-0 sudo[289249]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:05 compute-0 podman[289238]: 2025-11-29 08:02:05.148749962 +0000 UTC m=+0.389982432 container create 4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:02:05 compute-0 systemd[1]: Started libpod-conmon-4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb.scope.
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.201 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.202 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.202 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:02:05 compute-0 podman[289238]: 2025-11-29 08:02:05.115205056 +0000 UTC m=+0.356437546 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:02:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7c9f3713dd9af6c7aa4268273aec31a762626f9399e4a047cd2db3fabbdb061/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:05 compute-0 sudo[289278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:02:05 compute-0 sudo[289278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:02:05 compute-0 podman[289238]: 2025-11-29 08:02:05.229478277 +0000 UTC m=+0.470710747 container init 4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:02:05 compute-0 sudo[289278]: pam_unix(sudo:session): session closed for user root
Nov 29 08:02:05 compute-0 podman[289238]: 2025-11-29 08:02:05.241765966 +0000 UTC m=+0.482998436 container start 4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 08:02:05 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289301]: [NOTICE]   (289309) : New worker (289311) forked
Nov 29 08:02:05 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289301]: [NOTICE]   (289309) : Loading success.
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.277 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.278 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.278 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.411 256736 INFO nova.compute.manager [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Took 12.88 seconds to build instance.
Nov 29 08:02:05 compute-0 nova_compute[256729]: 2025-11-29 08:02:05.521 256736 DEBUG oslo_concurrency.lockutils [None req-8f2a7539-a8c9-423e-8063-80c905eae550 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:02:05
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups']
Nov 29 08:02:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:02:05 compute-0 ceph-mon[75050]: osdmap e319: 3 total, 3 up, 3 in
Nov 29 08:02:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:02:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.167 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298576843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298576843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:06 compute-0 ceph-mon[75050]: pgmap v1809: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Nov 29 08:02:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2298576843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2298576843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.870 256736 DEBUG nova.compute.manager [req-236ef710-cdbc-4375-8ce8-128fe29e4457 req-781aa0b8-9fb5-4bac-9f42-473d36473e26 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received event network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.871 256736 DEBUG oslo_concurrency.lockutils [req-236ef710-cdbc-4375-8ce8-128fe29e4457 req-781aa0b8-9fb5-4bac-9f42-473d36473e26 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.871 256736 DEBUG oslo_concurrency.lockutils [req-236ef710-cdbc-4375-8ce8-128fe29e4457 req-781aa0b8-9fb5-4bac-9f42-473d36473e26 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.871 256736 DEBUG oslo_concurrency.lockutils [req-236ef710-cdbc-4375-8ce8-128fe29e4457 req-781aa0b8-9fb5-4bac-9f42-473d36473e26 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.871 256736 DEBUG nova.compute.manager [req-236ef710-cdbc-4375-8ce8-128fe29e4457 req-781aa0b8-9fb5-4bac-9f42-473d36473e26 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] No waiting events found dispatching network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:06 compute-0 nova_compute[256729]: 2025-11-29 08:02:06.871 256736 WARNING nova.compute.manager [req-236ef710-cdbc-4375-8ce8-128fe29e4457 req-781aa0b8-9fb5-4bac-9f42-473d36473e26 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received unexpected event network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 for instance with vm_state active and task_state None.
Nov 29 08:02:06 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 120 op/s
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:02:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:02:07 compute-0 nova_compute[256729]: 2025-11-29 08:02:07.363 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:07 compute-0 sshd-session[289048]: Connection closed by authenticating user root 143.14.121.41 port 36220 [preauth]
Nov 29 08:02:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Nov 29 08:02:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 08:02:08 compute-0 ceph-mon[75050]: pgmap v1810: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 120 op/s
Nov 29 08:02:08 compute-0 nova_compute[256729]: 2025-11-29 08:02:08.875 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:08 compute-0 nova_compute[256729]: 2025-11-29 08:02:08.875 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:08 compute-0 nova_compute[256729]: 2025-11-29 08:02:08.875 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:08 compute-0 nova_compute[256729]: 2025-11-29 08:02:08.876 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:08 compute-0 nova_compute[256729]: 2025-11-29 08:02:08.876 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:08 compute-0 nova_compute[256729]: 2025-11-29 08:02:08.877 256736 INFO nova.compute.manager [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Terminating instance
Nov 29 08:02:08 compute-0 nova_compute[256729]: 2025-11-29 08:02:08.878 256736 DEBUG nova.compute.manager [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:02:08 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 176 op/s
Nov 29 08:02:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Nov 29 08:02:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2923379644' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2923379644' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:09 compute-0 ceph-mon[75050]: osdmap e320: 3 total, 3 up, 3 in
Nov 29 08:02:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2923379644' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2923379644' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:09 compute-0 kernel: tapb5aaef17-62 (unregistering): left promiscuous mode
Nov 29 08:02:09 compute-0 NetworkManager[48962]: <info>  [1764403329.8852] device (tapb5aaef17-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:02:09 compute-0 nova_compute[256729]: 2025-11-29 08:02:09.899 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:09 compute-0 ovn_controller[153383]: 2025-11-29T08:02:09Z|00169|binding|INFO|Releasing lport b5aaef17-6224-4c2c-9b27-678fa0931cc8 from this chassis (sb_readonly=0)
Nov 29 08:02:09 compute-0 ovn_controller[153383]: 2025-11-29T08:02:09Z|00170|binding|INFO|Setting lport b5aaef17-6224-4c2c-9b27-678fa0931cc8 down in Southbound
Nov 29 08:02:09 compute-0 ovn_controller[153383]: 2025-11-29T08:02:09Z|00171|binding|INFO|Removing iface tapb5aaef17-62 ovn-installed in OVS
Nov 29 08:02:09 compute-0 nova_compute[256729]: 2025-11-29 08:02:09.902 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:09.912 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:2b:8d 10.100.0.10'], port_security=['fa:16:3e:62:2b:8d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9afb6ce-053d-473f-aaad-13f25a9ecb58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=b5aaef17-6224-4c2c-9b27-678fa0931cc8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:09.915 163655 INFO neutron.agent.ovn.metadata.agent [-] Port b5aaef17-6224-4c2c-9b27-678fa0931cc8 in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
Nov 29 08:02:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:09.917 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d9c390c-362a-41a5-93b0-23344eb99ae5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:02:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:09.918 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[32424615-82ff-4825-b0cb-42a0b98d4c7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:09.919 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace which is not needed anymore
Nov 29 08:02:09 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 29 08:02:09 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 4.702s CPU time.
Nov 29 08:02:09 compute-0 nova_compute[256729]: 2025-11-29 08:02:09.943 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:09 compute-0 systemd-machined[217781]: Machine qemu-16-instance-00000010 terminated.
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.104 256736 DEBUG nova.compute.manager [req-59514231-8967-47a5-8ae0-b8e4ee2f8a8e req-5000afb8-2495-45a6-b058-0ad31e0dfea8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received event network-vif-unplugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.105 256736 DEBUG oslo_concurrency.lockutils [req-59514231-8967-47a5-8ae0-b8e4ee2f8a8e req-5000afb8-2495-45a6-b058-0ad31e0dfea8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.106 256736 DEBUG oslo_concurrency.lockutils [req-59514231-8967-47a5-8ae0-b8e4ee2f8a8e req-5000afb8-2495-45a6-b058-0ad31e0dfea8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.106 256736 DEBUG oslo_concurrency.lockutils [req-59514231-8967-47a5-8ae0-b8e4ee2f8a8e req-5000afb8-2495-45a6-b058-0ad31e0dfea8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.106 256736 DEBUG nova.compute.manager [req-59514231-8967-47a5-8ae0-b8e4ee2f8a8e req-5000afb8-2495-45a6-b058-0ad31e0dfea8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] No waiting events found dispatching network-vif-unplugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.107 256736 DEBUG nova.compute.manager [req-59514231-8967-47a5-8ae0-b8e4ee2f8a8e req-5000afb8-2495-45a6-b058-0ad31e0dfea8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received event network-vif-unplugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.125 256736 INFO nova.virt.libvirt.driver [-] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Instance destroyed successfully.
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.125 256736 DEBUG nova.objects.instance [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'resources' on Instance uuid a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:10 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289301]: [NOTICE]   (289309) : haproxy version is 2.8.14-c23fe91
Nov 29 08:02:10 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289301]: [NOTICE]   (289309) : path to executable is /usr/sbin/haproxy
Nov 29 08:02:10 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289301]: [WARNING]  (289309) : Exiting Master process...
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.149 256736 DEBUG nova.virt.libvirt.vif [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1088579147',display_name='tempest-TestVolumeBootPattern-server-1088579147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1088579147',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-fz6zcm46',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:05Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.150 256736 DEBUG nova.network.os_vif_util [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "address": "fa:16:3e:62:2b:8d", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5aaef17-62", "ovs_interfaceid": "b5aaef17-6224-4c2c-9b27-678fa0931cc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.151 256736 DEBUG nova.network.os_vif_util [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:2b:8d,bridge_name='br-int',has_traffic_filtering=True,id=b5aaef17-6224-4c2c-9b27-678fa0931cc8,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5aaef17-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.151 256736 DEBUG os_vif [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2b:8d,bridge_name='br-int',has_traffic_filtering=True,id=b5aaef17-6224-4c2c-9b27-678fa0931cc8,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5aaef17-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:02:10 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289301]: [ALERT]    (289309) : Current worker (289311) exited with code 143 (Terminated)
Nov 29 08:02:10 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289301]: [WARNING]  (289309) : All workers exited. Exiting... (0)
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.153 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.153 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5aaef17-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.155 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:10 compute-0 systemd[1]: libpod-4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb.scope: Deactivated successfully.
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.157 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.157 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:10 compute-0 podman[289347]: 2025-11-29 08:02:10.159801873 +0000 UTC m=+0.132047347 container died 4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.159 256736 INFO os_vif [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:2b:8d,bridge_name='br-int',has_traffic_filtering=True,id=b5aaef17-6224-4c2c-9b27-678fa0931cc8,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5aaef17-62')
Nov 29 08:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb-userdata-shm.mount: Deactivated successfully.
Nov 29 08:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7c9f3713dd9af6c7aa4268273aec31a762626f9399e4a047cd2db3fabbdb061-merged.mount: Deactivated successfully.
Nov 29 08:02:10 compute-0 podman[289347]: 2025-11-29 08:02:10.230754007 +0000 UTC m=+0.202999451 container cleanup 4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:02:10 compute-0 systemd[1]: libpod-conmon-4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb.scope: Deactivated successfully.
Nov 29 08:02:10 compute-0 podman[289404]: 2025-11-29 08:02:10.390479561 +0000 UTC m=+0.139266719 container remove 4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.402 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[234b6486-43b0-4441-8ebe-dca3effaa714]: (4, ('Sat Nov 29 08:02:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb)\n4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb\nSat Nov 29 08:02:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb)\n4e73dba5d138ec4df0a492c10bfecd565c331aa3e0a9f87424f9e1ece66a82cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.404 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4468a82a-7421-45aa-9066-d9aef93f71b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.405 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.406 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:10 compute-0 kernel: tap2d9c390c-30: left promiscuous mode
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.420 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.424 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[03726444-96c2-4cc1-a075-b861c28d4821]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.444 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[51a155d4-d2d9-482e-b301-0ae423f19ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.445 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f7205a70-5499-4ca4-9044-d7704e5948b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.462 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[007395b0-54ad-4935-a92b-1025c51aaaa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563923, 'reachable_time': 43293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289419, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.465 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:02:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:10.465 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[1c47692a-376e-46e0-aeda-7eefc9b776d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d9c390c\x2d362a\x2d41a5\x2d93b0\x2d23344eb99ae5.mount: Deactivated successfully.
Nov 29 08:02:10 compute-0 sshd-session[289320]: Connection closed by authenticating user root 143.14.121.41 port 36224 [preauth]
Nov 29 08:02:10 compute-0 ceph-mon[75050]: pgmap v1812: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 176 op/s
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.884 256736 INFO nova.virt.libvirt.driver [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Deleting instance files /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_del
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.885 256736 INFO nova.virt.libvirt.driver [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Deletion of /var/lib/nova/instances/a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51_del complete
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.959 256736 INFO nova.compute.manager [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Took 2.08 seconds to destroy the instance on the hypervisor.
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.960 256736 DEBUG oslo.service.loopingcall [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.960 256736 DEBUG nova.compute.manager [-] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:02:10 compute-0 nova_compute[256729]: 2025-11-29 08:02:10.960 256736 DEBUG nova.network.neutron [-] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:02:10 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 168 op/s
Nov 29 08:02:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532645576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532645576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:11 compute-0 nova_compute[256729]: 2025-11-29 08:02:11.811 256736 DEBUG nova.network.neutron [-] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:11 compute-0 nova_compute[256729]: 2025-11-29 08:02:11.840 256736 INFO nova.compute.manager [-] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Took 0.88 seconds to deallocate network for instance.
Nov 29 08:02:12 compute-0 ceph-mon[75050]: pgmap v1813: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 168 op/s
Nov 29 08:02:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2532645576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2532645576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.297 256736 INFO nova.compute.manager [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Took 0.46 seconds to detach 1 volumes for instance.
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.300 256736 DEBUG nova.compute.manager [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Deleting volume: 15dc2077-785b-42ed-a481-8ec5b0a20a91 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.337 256736 DEBUG nova.compute.manager [req-415efb9e-36a2-4a53-8fc0-a6aa0857a749 req-616eda16-5b8f-4670-a887-48ac88126ac7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received event network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.337 256736 DEBUG oslo_concurrency.lockutils [req-415efb9e-36a2-4a53-8fc0-a6aa0857a749 req-616eda16-5b8f-4670-a887-48ac88126ac7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.338 256736 DEBUG oslo_concurrency.lockutils [req-415efb9e-36a2-4a53-8fc0-a6aa0857a749 req-616eda16-5b8f-4670-a887-48ac88126ac7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.338 256736 DEBUG oslo_concurrency.lockutils [req-415efb9e-36a2-4a53-8fc0-a6aa0857a749 req-616eda16-5b8f-4670-a887-48ac88126ac7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.338 256736 DEBUG nova.compute.manager [req-415efb9e-36a2-4a53-8fc0-a6aa0857a749 req-616eda16-5b8f-4670-a887-48ac88126ac7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] No waiting events found dispatching network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.339 256736 WARNING nova.compute.manager [req-415efb9e-36a2-4a53-8fc0-a6aa0857a749 req-616eda16-5b8f-4670-a887-48ac88126ac7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received unexpected event network-vif-plugged-b5aaef17-6224-4c2c-9b27-678fa0931cc8 for instance with vm_state active and task_state deleting.
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.339 256736 DEBUG nova.compute.manager [req-415efb9e-36a2-4a53-8fc0-a6aa0857a749 req-616eda16-5b8f-4670-a887-48ac88126ac7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Received event network-vif-deleted-b5aaef17-6224-4c2c-9b27-678fa0931cc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.365 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.541 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.541 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:12 compute-0 nova_compute[256729]: 2025-11-29 08:02:12.612 256736 DEBUG oslo_concurrency.processutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:12 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 183 op/s
Nov 29 08:02:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2899141163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:13 compute-0 nova_compute[256729]: 2025-11-29 08:02:13.027 256736 DEBUG oslo_concurrency.processutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:13 compute-0 nova_compute[256729]: 2025-11-29 08:02:13.035 256736 DEBUG nova.compute.provider_tree [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:13 compute-0 nova_compute[256729]: 2025-11-29 08:02:13.065 256736 DEBUG nova.scheduler.client.report [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:13 compute-0 nova_compute[256729]: 2025-11-29 08:02:13.097 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:13 compute-0 nova_compute[256729]: 2025-11-29 08:02:13.151 256736 INFO nova.scheduler.client.report [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Deleted allocations for instance a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51
Nov 29 08:02:13 compute-0 nova_compute[256729]: 2025-11-29 08:02:13.263 256736 DEBUG oslo_concurrency.lockutils [None req-00a1f0f7-0a53-4fc6-a4d3-6d77ba37648d 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Nov 29 08:02:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2899141163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 08:02:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Nov 29 08:02:14 compute-0 ceph-mon[75050]: pgmap v1814: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 183 op/s
Nov 29 08:02:14 compute-0 ceph-mon[75050]: osdmap e321: 3 total, 3 up, 3 in
Nov 29 08:02:14 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 20 KiB/s wr, 103 op/s
Nov 29 08:02:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2542712650' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2542712650' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:15 compute-0 nova_compute[256729]: 2025-11-29 08:02:15.157 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006929479431204011 of space, bias 1.0, pg target 0.20788438293612033 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:02:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:02:15 compute-0 sshd-session[289421]: Connection closed by authenticating user root 143.14.121.41 port 36226 [preauth]
Nov 29 08:02:16 compute-0 ceph-mon[75050]: pgmap v1816: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 20 KiB/s wr, 103 op/s
Nov 29 08:02:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2542712650' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2542712650' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:16 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 KiB/s wr, 62 op/s
Nov 29 08:02:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Nov 29 08:02:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 08:02:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Nov 29 08:02:17 compute-0 nova_compute[256729]: 2025-11-29 08:02:17.367 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:17 compute-0 sshd-session[289447]: Connection closed by 220.250.59.155 port 34050
Nov 29 08:02:18 compute-0 ceph-mon[75050]: pgmap v1817: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 KiB/s wr, 62 op/s
Nov 29 08:02:18 compute-0 ceph-mon[75050]: osdmap e322: 3 total, 3 up, 3 in
Nov 29 08:02:18 compute-0 sshd-session[289445]: Connection closed by authenticating user root 143.14.121.41 port 46202 [preauth]
Nov 29 08:02:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Nov 29 08:02:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 08:02:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Nov 29 08:02:18 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.3 KiB/s wr, 84 op/s
Nov 29 08:02:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Nov 29 08:02:20 compute-0 nova_compute[256729]: 2025-11-29 08:02:20.160 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 08:02:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Nov 29 08:02:20 compute-0 ceph-mon[75050]: osdmap e323: 3 total, 3 up, 3 in
Nov 29 08:02:20 compute-0 ceph-mon[75050]: pgmap v1820: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.3 KiB/s wr, 84 op/s
Nov 29 08:02:20 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Nov 29 08:02:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254064358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254064358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:21 compute-0 ceph-mon[75050]: osdmap e324: 3 total, 3 up, 3 in
Nov 29 08:02:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4254064358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4254064358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:22 compute-0 sshd-session[289448]: Connection closed by authenticating user root 143.14.121.41 port 46212 [preauth]
Nov 29 08:02:22 compute-0 nova_compute[256729]: 2025-11-29 08:02:22.370 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:22 compute-0 ceph-mon[75050]: pgmap v1822: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Nov 29 08:02:22 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 97 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.5 KiB/s wr, 58 op/s
Nov 29 08:02:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Nov 29 08:02:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 08:02:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Nov 29 08:02:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Nov 29 08:02:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 08:02:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Nov 29 08:02:24 compute-0 ceph-mon[75050]: pgmap v1823: 305 pgs: 305 active+clean; 97 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.5 KiB/s wr, 58 op/s
Nov 29 08:02:24 compute-0 ceph-mon[75050]: osdmap e325: 3 total, 3 up, 3 in
Nov 29 08:02:24 compute-0 ceph-mon[75050]: osdmap e326: 3 total, 3 up, 3 in
Nov 29 08:02:24 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.5 KiB/s wr, 57 op/s
Nov 29 08:02:25 compute-0 nova_compute[256729]: 2025-11-29 08:02:25.124 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403330.122053, a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:25 compute-0 nova_compute[256729]: 2025-11-29 08:02:25.124 256736 INFO nova.compute.manager [-] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] VM Stopped (Lifecycle Event)
Nov 29 08:02:25 compute-0 nova_compute[256729]: 2025-11-29 08:02:25.146 256736 DEBUG nova.compute.manager [None req-6b93176e-4dc4-451c-a6b1-a9c222df30f7 - - - - - -] [instance: a7defb5a-37b1-4bd4-bd8d-9fe7eb15bf51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:25 compute-0 nova_compute[256729]: 2025-11-29 08:02:25.163 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439951519' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439951519' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:25 compute-0 sshd-session[289450]: Connection closed by authenticating user root 143.14.121.41 port 55292 [preauth]
Nov 29 08:02:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1439951519' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1439951519' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:26 compute-0 ceph-mon[75050]: pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.5 KiB/s wr, 57 op/s
Nov 29 08:02:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/913931617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/913931617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.2 KiB/s wr, 106 op/s
Nov 29 08:02:27 compute-0 nova_compute[256729]: 2025-11-29 08:02:27.372 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/913931617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/913931617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:28 compute-0 ceph-mon[75050]: pgmap v1827: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.2 KiB/s wr, 106 op/s
Nov 29 08:02:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.6 KiB/s wr, 107 op/s
Nov 29 08:02:29 compute-0 sshd-session[289452]: Connection closed by authenticating user root 143.14.121.41 port 55296 [preauth]
Nov 29 08:02:29 compute-0 ceph-mon[75050]: pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.6 KiB/s wr, 107 op/s
Nov 29 08:02:30 compute-0 nova_compute[256729]: 2025-11-29 08:02:30.167 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.5 KiB/s wr, 84 op/s
Nov 29 08:02:31 compute-0 podman[289457]: 2025-11-29 08:02:31.722705443 +0000 UTC m=+0.089064188 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:02:31 compute-0 podman[289458]: 2025-11-29 08:02:31.723149126 +0000 UTC m=+0.081436166 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 08:02:31 compute-0 podman[289456]: 2025-11-29 08:02:31.755799937 +0000 UTC m=+0.119854951 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 08:02:32 compute-0 ceph-mon[75050]: pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.5 KiB/s wr, 84 op/s
Nov 29 08:02:32 compute-0 nova_compute[256729]: 2025-11-29 08:02:32.374 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 122 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 83 op/s
Nov 29 08:02:33 compute-0 sshd-session[289454]: Connection closed by authenticating user root 143.14.121.41 port 55308 [preauth]
Nov 29 08:02:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Nov 29 08:02:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 08:02:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Nov 29 08:02:34 compute-0 ceph-mon[75050]: pgmap v1830: 305 pgs: 305 active+clean; 122 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 83 op/s
Nov 29 08:02:34 compute-0 ceph-mon[75050]: osdmap e327: 3 total, 3 up, 3 in
Nov 29 08:02:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 29 08:02:35 compute-0 nova_compute[256729]: 2025-11-29 08:02:35.170 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:02:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:02:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:35 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3721573148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Nov 29 08:02:36 compute-0 ceph-mon[75050]: pgmap v1832: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 98 op/s
Nov 29 08:02:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3721573148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 08:02:36 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Nov 29 08:02:36 compute-0 sshd-session[289521]: Connection closed by authenticating user root 143.14.121.41 port 40798 [preauth]
Nov 29 08:02:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 62 op/s
Nov 29 08:02:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Nov 29 08:02:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 08:02:37 compute-0 ceph-mon[75050]: osdmap e328: 3 total, 3 up, 3 in
Nov 29 08:02:37 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Nov 29 08:02:37 compute-0 nova_compute[256729]: 2025-11-29 08:02:37.387 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:38 compute-0 ceph-mon[75050]: pgmap v1834: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 MiB/s wr, 62 op/s
Nov 29 08:02:38 compute-0 ceph-mon[75050]: osdmap e329: 3 total, 3 up, 3 in
Nov 29 08:02:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Nov 29 08:02:39 compute-0 sshd-session[289523]: Connection closed by authenticating user root 143.14.121.41 port 40804 [preauth]
Nov 29 08:02:40 compute-0 nova_compute[256729]: 2025-11-29 08:02:40.172 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:40 compute-0 sshd-session[289525]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Nov 29 08:02:40 compute-0 ceph-mon[75050]: pgmap v1836: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Nov 29 08:02:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Nov 29 08:02:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/392281449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/392281449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:41 compute-0 sshd-session[289527]: Connection closed by authenticating user root 143.14.121.41 port 40812 [preauth]
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.234 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.235 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Nov 29 08:02:42 compute-0 ceph-mon[75050]: pgmap v1837: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Nov 29 08:02:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 08:02:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.389 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.451 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.626 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.626 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.635 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:02:42 compute-0 nova_compute[256729]: 2025-11-29 08:02:42.636 256736 INFO nova.compute.claims [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:02:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.7 KiB/s wr, 29 op/s
Nov 29 08:02:43 compute-0 nova_compute[256729]: 2025-11-29 08:02:43.192 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Nov 29 08:02:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 08:02:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Nov 29 08:02:43 compute-0 ceph-mon[75050]: osdmap e330: 3 total, 3 up, 3 in
Nov 29 08:02:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3690811353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:43 compute-0 nova_compute[256729]: 2025-11-29 08:02:43.656 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:43 compute-0 nova_compute[256729]: 2025-11-29 08:02:43.664 256736 DEBUG nova.compute.provider_tree [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:43 compute-0 nova_compute[256729]: 2025-11-29 08:02:43.894 256736 DEBUG nova.scheduler.client.report [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:44 compute-0 ceph-mon[75050]: pgmap v1839: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.7 KiB/s wr, 29 op/s
Nov 29 08:02:44 compute-0 ceph-mon[75050]: osdmap e331: 3 total, 3 up, 3 in
Nov 29 08:02:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3690811353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:44 compute-0 nova_compute[256729]: 2025-11-29 08:02:44.516 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:44 compute-0 nova_compute[256729]: 2025-11-29 08:02:44.517 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:02:44 compute-0 nova_compute[256729]: 2025-11-29 08:02:44.847 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:02:44 compute-0 nova_compute[256729]: 2025-11-29 08:02:44.847 256736 DEBUG nova.network.neutron [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:02:44 compute-0 ovn_controller[153383]: 2025-11-29T08:02:44Z|00172|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 08:02:44 compute-0 nova_compute[256729]: 2025-11-29 08:02:44.982 256736 INFO nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:02:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 32 op/s
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.176 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.253 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:02:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.459 256736 DEBUG nova.policy [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9664e420085d412aae898a6ec021b24f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb6854e99614af5b8df420841fde0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:02:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 08:02:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.589 256736 INFO nova.virt.block_device [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Booting with volume f60c2fe3-0c52-4766-b57f-95edcd3ecac7 at /dev/vda
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.754 256736 DEBUG os_brick.utils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.755 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.774 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.774 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[af929ca9-230b-4e91-8988-7f4eeabf602c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.775 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.788 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.789 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ce1cc4-ad33-493a-932c-7c0961d39268]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.790 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.799 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.800 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[77780d82-efc6-4bdc-83a0-91e21644dd66]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.801 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[68612b8d-000a-4d2a-b5c5-e6a0a400bec9]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.801 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.827 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.832 256736 DEBUG os_brick.initiator.connectors.lightos [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.833 256736 DEBUG os_brick.initiator.connectors.lightos [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.833 256736 DEBUG os_brick.initiator.connectors.lightos [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.834 256736 DEBUG os_brick.utils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:02:45 compute-0 nova_compute[256729]: 2025-11-29 08:02:45.834 256736 DEBUG nova.virt.block_device [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updating existing volume attachment record: a2e53140-a95e-4f66-932e-f503249493f6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:02:46 compute-0 ceph-mon[75050]: pgmap v1841: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 32 op/s
Nov 29 08:02:46 compute-0 ceph-mon[75050]: osdmap e332: 3 total, 3 up, 3 in
Nov 29 08:02:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2081549562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 204 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 11 MiB/s wr, 151 op/s
Nov 29 08:02:47 compute-0 nova_compute[256729]: 2025-11-29 08:02:47.390 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:47 compute-0 sshd-session[289529]: Connection closed by authenticating user root 143.14.121.41 port 37006 [preauth]
Nov 29 08:02:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2081549562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:47 compute-0 nova_compute[256729]: 2025-11-29 08:02:47.712 256736 DEBUG nova.network.neutron [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Successfully created port: 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.151 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.154 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.154 256736 INFO nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Creating image(s)
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.155 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.155 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Ensure instance console log exists: /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.156 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.156 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:48 compute-0 nova_compute[256729]: 2025-11-29 08:02:48.157 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:48 compute-0 ceph-mon[75050]: pgmap v1843: 305 pgs: 305 active+clean; 204 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 11 MiB/s wr, 151 op/s
Nov 29 08:02:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3441111699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 17 MiB/s wr, 144 op/s
Nov 29 08:02:49 compute-0 nova_compute[256729]: 2025-11-29 08:02:49.773 256736 DEBUG nova.network.neutron [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Successfully updated port: 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:02:49 compute-0 sshd-session[289525]: Connection closed by authenticating user root 139.19.117.130 port 41120 [preauth]
Nov 29 08:02:49 compute-0 nova_compute[256729]: 2025-11-29 08:02:49.830 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:49 compute-0 nova_compute[256729]: 2025-11-29 08:02:49.831 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquired lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:49 compute-0 nova_compute[256729]: 2025-11-29 08:02:49.831 256736 DEBUG nova.network.neutron [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:02:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Nov 29 08:02:49 compute-0 nova_compute[256729]: 2025-11-29 08:02:49.977 256736 DEBUG nova.compute.manager [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-changed-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:49 compute-0 nova_compute[256729]: 2025-11-29 08:02:49.978 256736 DEBUG nova.compute.manager [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Refreshing instance network info cache due to event network-changed-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:02:49 compute-0 nova_compute[256729]: 2025-11-29 08:02:49.979 256736 DEBUG oslo_concurrency.lockutils [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:50 compute-0 nova_compute[256729]: 2025-11-29 08:02:50.102 256736 DEBUG nova.network.neutron [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:02:50 compute-0 nova_compute[256729]: 2025-11-29 08:02:50.180 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:50 compute-0 sshd-session[289561]: Connection closed by authenticating user root 143.14.121.41 port 37016 [preauth]
Nov 29 08:02:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 14 MiB/s wr, 120 op/s
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.185 256736 DEBUG nova.network.neutron [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updating instance_info_cache with network_info: [{"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.405 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Releasing lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.406 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Instance network_info: |[{"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.407 256736 DEBUG oslo_concurrency.lockutils [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.407 256736 DEBUG nova.network.neutron [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Refreshing network info cache for port 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.414 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Start _get_guest_xml network_info=[{"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f60c2fe3-0c52-4766-b57f-95edcd3ecac7', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f60c2fe3-0c52-4766-b57f-95edcd3ecac7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a3133710-8c54-433d-9263-c081a69bf339', 'attached_at': '', 'detached_at': '', 'volume_id': 'f60c2fe3-0c52-4766-b57f-95edcd3ecac7', 'serial': 'f60c2fe3-0c52-4766-b57f-95edcd3ecac7'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': 'a2e53140-a95e-4f66-932e-f503249493f6', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.425 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.425 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.438 256736 WARNING nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.450 256736 DEBUG nova.virt.libvirt.host [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.451 256736 DEBUG nova.virt.libvirt.host [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.455 256736 DEBUG nova.virt.libvirt.host [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.456 256736 DEBUG nova.virt.libvirt.host [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.457 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.457 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.458 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.459 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.459 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.459 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.460 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.461 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.461 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.462 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.462 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.462 256736 DEBUG nova.virt.hardware [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.507 256736 DEBUG nova.storage.rbd_utils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image a3133710-8c54-433d-9263-c081a69bf339_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.515 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.557 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:02:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 08:02:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3441111699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.843 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.843 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.851 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:02:51 compute-0 nova_compute[256729]: 2025-11-29 08:02:51.851 256736 INFO nova.compute.claims [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:02:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:52 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139376349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.067 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.393 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.497 256736 DEBUG nova.virt.libvirt.vif [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-210416915',display_name='tempest-TestVolumeBootPattern-volume-backed-server-210416915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-210416915',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZhEoyK4Ll6+AydDTrWJFv/RbMCQAtShDe5Niki6glH36XIILYUyVDKeQk/cn/o6Cwac5T8/p7rhVRDTi0GPGnurLUi2m9wKBB92zkcjtET1jLWN6TzhYb5yEgAutrqkA==',key_name='tempest-keypair-370509897',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-vw6lyc25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9664e420085d412aae898a6ec021b24f',uuid=a3133710-8c54-433d-9263-c081a69bf339,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.498 256736 DEBUG nova.network.os_vif_util [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.499 256736 DEBUG nova.network.os_vif_util [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:bb:d9,bridge_name='br-int',has_traffic_filtering=True,id=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b2234c-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.501 256736 DEBUG nova.objects.instance [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'pci_devices' on Instance uuid a3133710-8c54-433d-9263-c081a69bf339 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.584 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <uuid>a3133710-8c54-433d-9263-c081a69bf339</uuid>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <name>instance-00000011</name>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-210416915</nova:name>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:02:51</nova:creationTime>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:user uuid="9664e420085d412aae898a6ec021b24f">tempest-TestVolumeBootPattern-776329285-project-member</nova:user>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:project uuid="dfb6854e99614af5b8df420841fde0db">tempest-TestVolumeBootPattern-776329285</nova:project>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <nova:port uuid="73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7">
Nov 29 08:02:52 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <system>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <entry name="serial">a3133710-8c54-433d-9263-c081a69bf339</entry>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <entry name="uuid">a3133710-8c54-433d-9263-c081a69bf339</entry>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </system>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <os>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   </os>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <features>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   </features>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/a3133710-8c54-433d-9263-c081a69bf339_disk.config">
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       </source>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-f60c2fe3-0c52-4766-b57f-95edcd3ecac7">
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       </source>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:02:52 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <serial>f60c2fe3-0c52-4766-b57f-95edcd3ecac7</serial>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:91:bb:d9"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <target dev="tap73b2234c-5b"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/console.log" append="off"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <video>
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </video>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:02:52 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:02:52 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:02:52 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:02:52 compute-0 nova_compute[256729]: </domain>
Nov 29 08:02:52 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.586 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Preparing to wait for external event network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.587 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.587 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.587 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.588 256736 DEBUG nova.virt.libvirt.vif [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-210416915',display_name='tempest-TestVolumeBootPattern-volume-backed-server-210416915',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-210416915',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZhEoyK4Ll6+AydDTrWJFv/RbMCQAtShDe5Niki6glH36XIILYUyVDKeQk/cn/o6Cwac5T8/p7rhVRDTi0GPGnurLUi2m9wKBB92zkcjtET1jLWN6TzhYb5yEgAutrqkA==',key_name='tempest-keypair-370509897',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-vw6lyc25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9664e420085d412aae898a6ec021b24f',uuid=a3133710-8c54-433d-9263-c081a69bf339,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.589 256736 DEBUG nova.network.os_vif_util [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.590 256736 DEBUG nova.network.os_vif_util [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:bb:d9,bridge_name='br-int',has_traffic_filtering=True,id=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b2234c-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.590 256736 DEBUG os_vif [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:bb:d9,bridge_name='br-int',has_traffic_filtering=True,id=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b2234c-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.592 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.593 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.593 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.602 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.602 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73b2234c-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.603 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73b2234c-5b, col_values=(('external_ids', {'iface-id': '73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:bb:d9', 'vm-uuid': 'a3133710-8c54-433d-9263-c081a69bf339'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.607 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 NetworkManager[48962]: <info>  [1764403372.6075] manager: (tap73b2234c-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.611 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.613 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.615 256736 INFO os_vif [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:bb:d9,bridge_name='br-int',has_traffic_filtering=True,id=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b2234c-5b')
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.640 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:52 compute-0 ceph-mon[75050]: pgmap v1844: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 17 MiB/s wr, 144 op/s
Nov 29 08:02:52 compute-0 ceph-mon[75050]: pgmap v1845: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 14 MiB/s wr, 120 op/s
Nov 29 08:02:52 compute-0 ceph-mon[75050]: osdmap e333: 3 total, 3 up, 3 in
Nov 29 08:02:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/139376349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Nov 29 08:02:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 08:02:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.761 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.762 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.762 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No VIF found with MAC fa:16:3e:91:bb:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.763 256736 INFO nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Using config drive
Nov 29 08:02:52 compute-0 nova_compute[256729]: 2025-11-29 08:02:52.801 256736 DEBUG nova.storage.rbd_utils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image a3133710-8c54-433d-9263-c081a69bf339_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 15 MiB/s wr, 118 op/s
Nov 29 08:02:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:02:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/126198690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.096 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.101 256736 DEBUG nova.compute.provider_tree [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.331 256736 DEBUG nova.scheduler.client.report [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.362 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.363 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.427 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.428 256736 DEBUG nova.network.neutron [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.533 256736 INFO nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Creating config drive at /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/disk.config
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.540 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpteb96f99 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.593 256736 INFO nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.607 256736 DEBUG nova.policy [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb2de7fb67042f89a025f1a3e872530', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:02:53 compute-0 sshd-session[289563]: Invalid user oracle from 143.14.121.41 port 37028
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.671 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.685 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpteb96f99" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.716 256736 DEBUG nova.storage.rbd_utils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image a3133710-8c54-433d-9263-c081a69bf339_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.720 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/disk.config a3133710-8c54-433d-9263-c081a69bf339_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.766 256736 INFO nova.virt.block_device [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Booting with volume 1e0633b8-d2a6-4f22-aa22-9308e9b3acc4 at /dev/vda
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.946 256736 DEBUG os_brick.utils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.948 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.966 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.967 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[ab79571e-c129-42c9-a396-9b4db262c588]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:53 compute-0 ceph-mon[75050]: osdmap e334: 3 total, 3 up, 3 in
Nov 29 08:02:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/126198690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.969 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.985 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.985 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[7be83d16-1723-4e0c-8d88-157864ede532]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:53 compute-0 nova_compute[256729]: 2025-11-29 08:02:53.989 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:53 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.006 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.007 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[e29e3d04-71b2-46c1-a756-19473e2ecaa1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.009 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[3db947e6-b1c1-4e54-ae7b-74175f0e271e]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.010 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.041 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.044 256736 DEBUG os_brick.initiator.connectors.lightos [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.045 256736 DEBUG os_brick.initiator.connectors.lightos [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.045 256736 DEBUG os_brick.initiator.connectors.lightos [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.046 256736 DEBUG os_brick.utils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.046 256736 DEBUG nova.virt.block_device [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updating existing volume attachment record: c7cc696d-7ca2-46c7-8d22-03b477d6b75c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.125 256736 DEBUG oslo_concurrency.processutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/disk.config a3133710-8c54-433d-9263-c081a69bf339_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.126 256736 INFO nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Deleting local config drive /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339/disk.config because it was imported into RBD.
Nov 29 08:02:54 compute-0 kernel: tap73b2234c-5b: entered promiscuous mode
Nov 29 08:02:54 compute-0 NetworkManager[48962]: <info>  [1764403374.1992] manager: (tap73b2234c-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.238 256736 DEBUG nova.network.neutron [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updated VIF entry in instance network info cache for port 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.240 256736 DEBUG nova.network.neutron [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updating instance_info_cache with network_info: [{"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:54 compute-0 ovn_controller[153383]: 2025-11-29T08:02:54Z|00173|binding|INFO|Claiming lport 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 for this chassis.
Nov 29 08:02:54 compute-0 ovn_controller[153383]: 2025-11-29T08:02:54Z|00174|binding|INFO|73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7: Claiming fa:16:3e:91:bb:d9 10.100.0.6
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.244 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.258 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:bb:d9 10.100.0.6'], port_security=['fa:16:3e:91:bb:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a3133710-8c54-433d-9263-c081a69bf339', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a943ee2c-e86d-4c9d-b5a9-5767d5e198b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.260 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 bound to our chassis
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.263 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:02:54 compute-0 systemd-udevd[289709]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:02:54 compute-0 systemd-machined[217781]: New machine qemu-17-instance-00000011.
Nov 29 08:02:54 compute-0 ovn_controller[153383]: 2025-11-29T08:02:54Z|00175|binding|INFO|Setting lport 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 ovn-installed in OVS
Nov 29 08:02:54 compute-0 ovn_controller[153383]: 2025-11-29T08:02:54Z|00176|binding|INFO|Setting lport 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 up in Southbound
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.278 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8e093a-22a1-46e5-9ca5-8719091812f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.279 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d9c390c-31 in ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.278 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.280 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.283 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d9c390c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.284 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[048aacd0-3566-4481-89f2-2ed85b1471a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.286 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7bee1ab2-a2cd-4266-8cad-47296aa0b38d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 NetworkManager[48962]: <info>  [1764403374.2918] device (tap73b2234c-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:02:54 compute-0 NetworkManager[48962]: <info>  [1764403374.2927] device (tap73b2234c-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.294 256736 DEBUG oslo_concurrency.lockutils [req-3f74b1b2-6796-4c18-b5b2-fd9e57790d92 req-eeed5cbc-634b-4aa6-ab2d-f2b4feff1d3d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.303 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4814dc-6ba8-4703-9c65-95e5a44f5e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.333 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ed48e48d-4520-445b-a595-5a88914e762b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.369 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8c3388-6be4-47ea-b2a8-7ddc957bcc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.385 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[509cab90-5988-4622-b61d-29493ddd4c43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 NetworkManager[48962]: <info>  [1764403374.3868] manager: (tap2d9c390c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 08:02:54 compute-0 systemd-udevd[289713]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.439 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5e2010-af5f-4516-a9c2-71a035d98dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.446 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[270bc3fc-1ee5-4b54-8e55-b7407e68491c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 NetworkManager[48962]: <info>  [1764403374.4956] device (tap2d9c390c-30): carrier: link connected
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.505 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[f8568c67-e0f2-43b9-b870-c133767a92f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 sshd-session[289563]: Connection closed by invalid user oracle 143.14.121.41 port 37028 [preauth]
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.539 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc77d25-3804-4aa6-90c0-feb815b88139]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568955, 'reachable_time': 36277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289742, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.563 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e7faaf53-9e9a-4127-a159-0f0ce33e013b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:2407'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568955, 'tstamp': 568955}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289759, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.589 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0cff708b-db5e-4f0e-bcc0-78f363d153a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568955, 'reachable_time': 36277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289762, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.630 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[58043cb6-651f-4661-9e3b-70504b6bfbfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.659 256736 DEBUG nova.network.neutron [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Successfully created port: bf742235-ed01-4672-8d0f-37c829df931f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.696 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[da9d026f-4366-4e1a-b93a-ee106d2bc687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.697 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.697 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.698 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.699 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-0 kernel: tap2d9c390c-30: entered promiscuous mode
Nov 29 08:02:54 compute-0 NetworkManager[48962]: <info>  [1764403374.7007] manager: (tap2d9c390c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.704 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:54 compute-0 ovn_controller[153383]: 2025-11-29T08:02:54Z|00177|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.705 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.707 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.708 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4532da76-904d-43d5-8e80-7c9aedcd593e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.709 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:02:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:54.710 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'env', 'PROCESS_TAG=haproxy-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d9c390c-362a-41a5-93b0-23344eb99ae5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.721 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131574658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.865 256736 DEBUG nova.compute.manager [req-dfa9b2d7-f662-45c6-a372-66b3355dadb3 req-e2ea7558-07ea-49f6-a1c7-3e2e8c1a4b5b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.866 256736 DEBUG oslo_concurrency.lockutils [req-dfa9b2d7-f662-45c6-a372-66b3355dadb3 req-e2ea7558-07ea-49f6-a1c7-3e2e8c1a4b5b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.866 256736 DEBUG oslo_concurrency.lockutils [req-dfa9b2d7-f662-45c6-a372-66b3355dadb3 req-e2ea7558-07ea-49f6-a1c7-3e2e8c1a4b5b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.867 256736 DEBUG oslo_concurrency.lockutils [req-dfa9b2d7-f662-45c6-a372-66b3355dadb3 req-e2ea7558-07ea-49f6-a1c7-3e2e8c1a4b5b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:54 compute-0 nova_compute[256729]: 2025-11-29 08:02:54.867 256736 DEBUG nova.compute.manager [req-dfa9b2d7-f662-45c6-a372-66b3355dadb3 req-e2ea7558-07ea-49f6-a1c7-3e2e8c1a4b5b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Processing event network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:02:54 compute-0 ceph-mon[75050]: pgmap v1848: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 15 MiB/s wr, 118 op/s
Nov 29 08:02:54 compute-0 ceph-mon[75050]: osdmap e335: 3 total, 3 up, 3 in
Nov 29 08:02:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/131574658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Nov 29 08:02:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 08:02:55 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Nov 29 08:02:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.2 KiB/s wr, 40 op/s
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.091 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403375.0901666, a3133710-8c54-433d-9263-c081a69bf339 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.091 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] VM Started (Lifecycle Event)
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.095 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.099 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.106 256736 INFO nova.virt.libvirt.driver [-] [instance: a3133710-8c54-433d-9263-c081a69bf339] Instance spawned successfully.
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.106 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.185 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.198 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.203 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.203 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.204 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.204 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.204 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.205 256736 DEBUG nova.virt.libvirt.driver [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.237 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.237 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403375.090289, a3133710-8c54-433d-9263-c081a69bf339 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.237 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] VM Paused (Lifecycle Event)
Nov 29 08:02:55 compute-0 podman[289818]: 2025-11-29 08:02:55.176253567 +0000 UTC m=+0.029539080 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:02:55 compute-0 podman[289818]: 2025-11-29 08:02:55.279207185 +0000 UTC m=+0.132492708 container create 7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.292 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.298 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403375.0981615, a3133710-8c54-433d-9263-c081a69bf339 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.298 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] VM Resumed (Lifecycle Event)
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.336 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.338 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:02:55 compute-0 systemd[1]: Started libpod-conmon-7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778.scope.
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.343 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.344 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.345 256736 INFO nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Creating image(s)
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.345 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.345 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Ensure instance console log exists: /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.346 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.346 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.346 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.354 256736 INFO nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Took 7.20 seconds to spawn the instance on the hypervisor.
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.355 256736 DEBUG nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.368 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:02:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:02:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8470f44048ef3459d0b3c5d4e39829fe69df53ea8f94aece510a538145dc5c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:02:55 compute-0 podman[289818]: 2025-11-29 08:02:55.389856349 +0000 UTC m=+0.243141852 container init 7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 08:02:55 compute-0 podman[289818]: 2025-11-29 08:02:55.400537444 +0000 UTC m=+0.253822927 container start 7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:02:55 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289833]: [NOTICE]   (289838) : New worker (289840) forked
Nov 29 08:02:55 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289833]: [NOTICE]   (289838) : Loading success.
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.581 256736 INFO nova.compute.manager [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Took 12.99 seconds to build instance.
Nov 29 08:02:55 compute-0 nova_compute[256729]: 2025-11-29 08:02:55.609 256736 DEBUG oslo_concurrency.lockutils [None req-48e5c318-a7a6-4354-95c6-8b5c84635bf3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:02:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261484492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:02:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2261484492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:56 compute-0 ceph-mon[75050]: osdmap e336: 3 total, 3 up, 3 in
Nov 29 08:02:56 compute-0 ceph-mon[75050]: pgmap v1851: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.2 KiB/s wr, 40 op/s
Nov 29 08:02:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2261484492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:02:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2261484492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:02:56 compute-0 nova_compute[256729]: 2025-11-29 08:02:56.140 256736 DEBUG nova.network.neutron [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Successfully updated port: bf742235-ed01-4672-8d0f-37c829df931f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:02:56 compute-0 nova_compute[256729]: 2025-11-29 08:02:56.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:56 compute-0 nova_compute[256729]: 2025-11-29 08:02:56.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:56 compute-0 nova_compute[256729]: 2025-11-29 08:02:56.226 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:56 compute-0 nova_compute[256729]: 2025-11-29 08:02:56.227 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquired lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:56 compute-0 nova_compute[256729]: 2025-11-29 08:02:56.227 256736 DEBUG nova.network.neutron [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:02:56 compute-0 nova_compute[256729]: 2025-11-29 08:02:56.405 256736 DEBUG nova.network.neutron [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:02:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 KiB/s wr, 150 op/s
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.026 256736 DEBUG nova.compute.manager [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.027 256736 DEBUG oslo_concurrency.lockutils [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.027 256736 DEBUG oslo_concurrency.lockutils [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.028 256736 DEBUG oslo_concurrency.lockutils [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.028 256736 DEBUG nova.compute.manager [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] No waiting events found dispatching network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.029 256736 WARNING nova.compute.manager [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received unexpected event network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 for instance with vm_state active and task_state None.
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.029 256736 DEBUG nova.compute.manager [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-changed-bf742235-ed01-4672-8d0f-37c829df931f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.030 256736 DEBUG nova.compute.manager [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Refreshing instance network info cache due to event network-changed-bf742235-ed01-4672-8d0f-37c829df931f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.030 256736 DEBUG oslo_concurrency.lockutils [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.397 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 sshd-session[289708]: Invalid user nick from 143.14.121.41 port 40174
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.511 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 NetworkManager[48962]: <info>  [1764403377.5129] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 08:02:57 compute-0 NetworkManager[48962]: <info>  [1764403377.5138] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.607 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.676 256736 DEBUG nova.network.neutron [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updating instance_info_cache with network_info: [{"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:57 compute-0 sshd-session[289708]: Connection closed by invalid user nick 143.14.121.41 port 40174 [preauth]
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.766 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 ovn_controller[153383]: 2025-11-29T08:02:57Z|00178|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.790 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.884 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Releasing lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.885 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Instance network_info: |[{"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.885 256736 DEBUG oslo_concurrency.lockutils [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.885 256736 DEBUG nova.network.neutron [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Refreshing network info cache for port bf742235-ed01-4672-8d0f-37c829df931f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.889 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Start _get_guest_xml network_info=[{"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d', 'attached_at': '', 'detached_at': '', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'serial': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': 'c7cc696d-7ca2-46c7-8d22-03b477d6b75c', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.895 256736 WARNING nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.901 256736 DEBUG nova.virt.libvirt.host [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.901 256736 DEBUG nova.virt.libvirt.host [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.904 256736 DEBUG nova.virt.libvirt.host [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.905 256736 DEBUG nova.virt.libvirt.host [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.905 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.905 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.906 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.906 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.906 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.906 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.906 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.907 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.907 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.907 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.907 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.908 256736 DEBUG nova.virt.hardware [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.930 256736 DEBUG nova.storage.rbd_utils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.937 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:02:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:57.990 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:02:57 compute-0 nova_compute[256729]: 2025-11-29 08:02:57.990 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:57 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:57.992 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:02:58 compute-0 ceph-mon[75050]: pgmap v1852: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 KiB/s wr, 150 op/s
Nov 29 08:02:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1101954099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.376 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.517 256736 DEBUG os_brick.encryptors [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Using volume encryption metadata '{'encryption_key_id': '87b0c459-b18a-42ea-a5f2-9b90697c6081', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d', 'attached_at': '', 'detached_at': '', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'serial': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.521 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.548 256736 DEBUG barbicanclient.v1.secrets [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.549 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.585 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.586 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.615 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.616 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.647 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.648 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.676 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.677 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:02:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3240951371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.706 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.707 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.736 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.737 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.765 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.766 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.810 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.811 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.849 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.850 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.896 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.897 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.925 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.926 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.956 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.957 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.995 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:58 compute-0 nova_compute[256729]: 2025-11-29 08:02:58.996 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 238 op/s
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.048 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.049 256736 INFO barbicanclient.base [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/87b0c459-b18a-42ea-a5f2-9b90697c6081
Nov 29 08:02:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Nov 29 08:02:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1101954099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3240951371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:02:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 08:02:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.299 256736 DEBUG barbicanclient.client [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.301 256736 DEBUG nova.virt.libvirt.host [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <volume>1e0633b8-d2a6-4f22-aa22-9308e9b3acc4</volume>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:02:59 compute-0 nova_compute[256729]: </secret>
Nov 29 08:02:59 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.348 256736 DEBUG nova.virt.libvirt.vif [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2119805276',display_name='tempest-TransferEncryptedVolumeTest-server-2119805276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2119805276',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOK0uPKk5+iwu2ACxwLiXPfPFKjqAeuswoaNdzNGpYFdv9fCZRffGqJNvJmfqnbg+KUupmPFmswjEh+khO5A2TFlJ9LMuOBogxQ7cFR7kmTFduCVQRkpWi0Jux9/KRhlg==',key_name='tempest-TransferEncryptedVolumeTest-347364744',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-3vdrc1i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:53Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.349 256736 DEBUG nova.network.os_vif_util [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.351 256736 DEBUG nova.network.os_vif_util [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:20:11,bridge_name='br-int',has_traffic_filtering=True,id=bf742235-ed01-4672-8d0f-37c829df931f,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf742235-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.354 256736 DEBUG nova.objects.instance [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.383 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <uuid>0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d</uuid>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <name>instance-00000012</name>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-2119805276</nova:name>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:02:57</nova:creationTime>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:user uuid="2cb2de7fb67042f89a025f1a3e872530">tempest-TransferEncryptedVolumeTest-2049180676-project-member</nova:user>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:project uuid="00f4c1f7964a4e5fbe3db5be46b9676e">tempest-TransferEncryptedVolumeTest-2049180676</nova:project>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <nova:port uuid="bf742235-ed01-4672-8d0f-37c829df931f">
Nov 29 08:02:59 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <system>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <entry name="serial">0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d</entry>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <entry name="uuid">0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d</entry>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </system>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <os>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </os>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <features>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </features>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_disk.config">
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </source>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-1e0633b8-d2a6-4f22-aa22-9308e9b3acc4">
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </source>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <serial>1e0633b8-d2a6-4f22-aa22-9308e9b3acc4</serial>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <encryption format="luks">
Nov 29 08:02:59 compute-0 nova_compute[256729]:         <secret type="passphrase" uuid="2962cfb9-57e8-493d-8a98-e0f8ec9621f7"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       </encryption>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:43:20:11"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <target dev="tapbf742235-ed"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/console.log" append="off"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <video>
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </video>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:02:59 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:02:59 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:02:59 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:02:59 compute-0 nova_compute[256729]: </domain>
Nov 29 08:02:59 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.386 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Preparing to wait for external event network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.387 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.387 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.388 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.388 256736 DEBUG nova.virt.libvirt.vif [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:02:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2119805276',display_name='tempest-TransferEncryptedVolumeTest-server-2119805276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2119805276',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOK0uPKk5+iwu2ACxwLiXPfPFKjqAeuswoaNdzNGpYFdv9fCZRffGqJNvJmfqnbg+KUupmPFmswjEh+khO5A2TFlJ9LMuOBogxQ7cFR7kmTFduCVQRkpWi0Jux9/KRhlg==',key_name='tempest-TransferEncryptedVolumeTest-347364744',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-3vdrc1i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:02:53Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.389 256736 DEBUG nova.network.os_vif_util [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.389 256736 DEBUG nova.network.os_vif_util [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:20:11,bridge_name='br-int',has_traffic_filtering=True,id=bf742235-ed01-4672-8d0f-37c829df931f,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf742235-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.390 256736 DEBUG os_vif [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:20:11,bridge_name='br-int',has_traffic_filtering=True,id=bf742235-ed01-4672-8d0f-37c829df931f,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf742235-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.390 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.390 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.391 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.394 256736 DEBUG nova.compute.manager [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-changed-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.394 256736 DEBUG nova.compute.manager [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Refreshing instance network info cache due to event network-changed-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.394 256736 DEBUG oslo_concurrency.lockutils [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.394 256736 DEBUG oslo_concurrency.lockutils [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.395 256736 DEBUG nova.network.neutron [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Refreshing network info cache for port 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.396 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.396 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf742235-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.397 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf742235-ed, col_values=(('external_ids', {'iface-id': 'bf742235-ed01-4672-8d0f-37c829df931f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:20:11', 'vm-uuid': '0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.398 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:59 compute-0 NetworkManager[48962]: <info>  [1764403379.4001] manager: (tapbf742235-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.401 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.409 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.410 256736 INFO os_vif [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:20:11,bridge_name='br-int',has_traffic_filtering=True,id=bf742235-ed01-4672-8d0f-37c829df931f,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf742235-ed')
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.479 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.480 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.480 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No VIF found with MAC fa:16:3e:43:20:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.480 256736 INFO nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Using config drive
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.513 256736 DEBUG nova.storage.rbd_utils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.573 256736 DEBUG nova.network.neutron [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updated VIF entry in instance network info cache for port bf742235-ed01-4672-8d0f-37c829df931f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.574 256736 DEBUG nova.network.neutron [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updating instance_info_cache with network_info: [{"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.592 256736 DEBUG oslo_concurrency.lockutils [req-8ceaf12b-1178-4c32-937d-5c31893803eb req-cf2846be-149e-46be-bbdc-6f324d17fead ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:02:59 compute-0 sshd-session[289869]: Invalid user nexus from 143.14.121.41 port 40186
Nov 29 08:02:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:59.781 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:02:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:59.782 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:02:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:02:59.783 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.898 256736 INFO nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Creating config drive at /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/disk.config
Nov 29 08:02:59 compute-0 nova_compute[256729]: 2025-11-29 08:02:59.904 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq0d127ib execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.042 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq0d127ib" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.082 256736 DEBUG nova.storage.rbd_utils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.088 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/disk.config 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Nov 29 08:03:00 compute-0 ceph-mon[75050]: pgmap v1853: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 238 op/s
Nov 29 08:03:00 compute-0 ceph-mon[75050]: osdmap e337: 3 total, 3 up, 3 in
Nov 29 08:03:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 08:03:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Nov 29 08:03:00 compute-0 sshd-session[289869]: Connection closed by invalid user nexus 143.14.121.41 port 40186 [preauth]
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.281 256736 DEBUG oslo_concurrency.processutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/disk.config 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.283 256736 INFO nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Deleting local config drive /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d/disk.config because it was imported into RBD.
Nov 29 08:03:00 compute-0 virtqemud[256259]: End of file while reading data: Input/output error
Nov 29 08:03:00 compute-0 kernel: tapbf742235-ed: entered promiscuous mode
Nov 29 08:03:00 compute-0 NetworkManager[48962]: <info>  [1764403380.3488] manager: (tapbf742235-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 08:03:00 compute-0 ovn_controller[153383]: 2025-11-29T08:03:00Z|00179|binding|INFO|Claiming lport bf742235-ed01-4672-8d0f-37c829df931f for this chassis.
Nov 29 08:03:00 compute-0 ovn_controller[153383]: 2025-11-29T08:03:00Z|00180|binding|INFO|bf742235-ed01-4672-8d0f-37c829df931f: Claiming fa:16:3e:43:20:11 10.100.0.7
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.350 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.362 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:20:11 10.100.0.7'], port_security=['fa:16:3e:43:20:11 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc8975a3-8b30-4fd7-b465-76d299802b38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=bf742235-ed01-4672-8d0f-37c829df931f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.364 163655 INFO neutron.agent.ovn.metadata.agent [-] Port bf742235-ed01-4672-8d0f-37c829df931f in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c bound to our chassis
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.366 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.375 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 ovn_controller[153383]: 2025-11-29T08:03:00Z|00181|binding|INFO|Setting lport bf742235-ed01-4672-8d0f-37c829df931f ovn-installed in OVS
Nov 29 08:03:00 compute-0 ovn_controller[153383]: 2025-11-29T08:03:00Z|00182|binding|INFO|Setting lport bf742235-ed01-4672-8d0f-37c829df931f up in Southbound
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.400 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.402 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a0657740-94a1-46d2-a394-809a09a31d23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.403 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45f1bbc0-c1 in ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.404 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.408 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45f1bbc0-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.409 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[aa55723f-d98e-41d5-8874-fa4cf471e6c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.410 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f432835b-f02b-44f4-b3f3-64c47b078d73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 systemd-udevd[289963]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.426 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[87c2a5b2-c6b7-4564-a95b-457d482c48ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 systemd-machined[217781]: New machine qemu-18-instance-00000012.
Nov 29 08:03:00 compute-0 NetworkManager[48962]: <info>  [1764403380.4357] device (tapbf742235-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:03:00 compute-0 NetworkManager[48962]: <info>  [1764403380.4384] device (tapbf742235-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:03:00 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.450 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fa1df2-949d-415c-9df9-c9e3239ae9a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.495 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[823463ea-2eb1-4440-b376-686db38a4e11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 NetworkManager[48962]: <info>  [1764403380.5055] manager: (tap45f1bbc0-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.504 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6516f515-908d-4923-b1e5-2d8a4b1fc969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.544 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[17945e5c-c436-443e-baf8-be723c4436ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.547 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[e3fb4719-d5b7-41c0-8720-75df8e1e7fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 NetworkManager[48962]: <info>  [1764403380.5750] device (tap45f1bbc0-c0): carrier: link connected
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.581 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcd60a5-efb5-4e25-9dc5-2738010e600f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.606 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[aca803b9-d84f-48d0-a9b8-900156133870]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569563, 'reachable_time': 31443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289999, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.628 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3a4543-cd15-475c-ab25-93ac39df24cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:b9ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 569563, 'tstamp': 569563}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290000, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.666 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[806f22ee-1062-4b34-a916-3322fbbf8727]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569563, 'reachable_time': 31443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290008, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.684 256736 DEBUG nova.compute.manager [req-988f94bb-b7de-4b3d-9e78-9ca85a81c755 req-24e6eb72-3717-4d0a-a63e-dd7a4f7a1df5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.685 256736 DEBUG oslo_concurrency.lockutils [req-988f94bb-b7de-4b3d-9e78-9ca85a81c755 req-24e6eb72-3717-4d0a-a63e-dd7a4f7a1df5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.685 256736 DEBUG oslo_concurrency.lockutils [req-988f94bb-b7de-4b3d-9e78-9ca85a81c755 req-24e6eb72-3717-4d0a-a63e-dd7a4f7a1df5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.686 256736 DEBUG oslo_concurrency.lockutils [req-988f94bb-b7de-4b3d-9e78-9ca85a81c755 req-24e6eb72-3717-4d0a-a63e-dd7a4f7a1df5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.686 256736 DEBUG nova.compute.manager [req-988f94bb-b7de-4b3d-9e78-9ca85a81c755 req-24e6eb72-3717-4d0a-a63e-dd7a4f7a1df5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Processing event network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.709 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5f33eb00-a3bf-4192-b41a-076ebebbd97c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.808 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[84df60f7-bb5e-4415-a17f-0cec965901e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.809 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.810 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.810 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45f1bbc0-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.812 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 kernel: tap45f1bbc0-c0: entered promiscuous mode
Nov 29 08:03:00 compute-0 NetworkManager[48962]: <info>  [1764403380.8140] manager: (tap45f1bbc0-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.816 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.825 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45f1bbc0-c0, col_values=(('external_ids', {'iface-id': '1506b576-854d-4118-b808-0e5e32d85d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:00 compute-0 ovn_controller[153383]: 2025-11-29T08:03:00Z|00183|binding|INFO|Releasing lport 1506b576-854d-4118-b808-0e5e32d85d28 from this chassis (sb_readonly=0)
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.827 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.834 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.835 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a8fe6a-f1f2-44c8-a0ec-b98e27569739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.840 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:03:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:00.841 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'env', 'PROCESS_TAG=haproxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.859 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.931 256736 DEBUG nova.network.neutron [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updated VIF entry in instance network info cache for port 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.933 256736 DEBUG nova.network.neutron [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updating instance_info_cache with network_info: [{"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:00 compute-0 nova_compute[256729]: 2025-11-29 08:03:00.960 256736 DEBUG oslo_concurrency.lockutils [req-9efef37f-d013-4980-8813-83ff99286221 req-794ed948-696f-403d-9127-2b54771fca7b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 223 op/s
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.175 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.176 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.176 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.177 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.178 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:01 compute-0 podman[290050]: 2025-11-29 08:03:01.303086983 +0000 UTC m=+0.036204378 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:03:01 compute-0 ceph-mon[75050]: osdmap e338: 3 total, 3 up, 3 in
Nov 29 08:03:01 compute-0 podman[290050]: 2025-11-29 08:03:01.473555613 +0000 UTC m=+0.206672988 container create b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 08:03:01 compute-0 systemd[1]: Started libpod-conmon-b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f.scope.
Nov 29 08:03:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349c6c3816f9e7582eb3b9a279cf4b949df0c0ab238debfc3c264391e2bfa0b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:01 compute-0 podman[290050]: 2025-11-29 08:03:01.582728637 +0000 UTC m=+0.315846032 container init b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:01 compute-0 podman[290050]: 2025-11-29 08:03:01.589660993 +0000 UTC m=+0.322778358 container start b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:03:01 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[290102]: [NOTICE]   (290106) : New worker (290108) forked
Nov 29 08:03:01 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[290102]: [NOTICE]   (290106) : Loading success.
Nov 29 08:03:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482723151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.778 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.898 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.899 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.908 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:03:01 compute-0 nova_compute[256729]: 2025-11-29 08:03:01.908 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.134 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.136 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4263MB free_disk=59.98810958862305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.136 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.136 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.232 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance a3133710-8c54-433d-9263-c081a69bf339 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.233 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.233 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.233 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.299 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.402 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:02 compute-0 podman[290139]: 2025-11-29 08:03:02.72608786 +0000 UTC m=+0.082036082 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:02 compute-0 podman[290138]: 2025-11-29 08:03:02.73170713 +0000 UTC m=+0.092599073 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 08:03:02 compute-0 podman[290137]: 2025-11-29 08:03:02.75385491 +0000 UTC m=+0.124948716 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.789 256736 DEBUG nova.compute.manager [req-7f0d203d-8a43-48a2-94cd-443ca5fe094f req-ab41614a-717e-42b4-9a6e-c5bc70add334 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.790 256736 DEBUG oslo_concurrency.lockutils [req-7f0d203d-8a43-48a2-94cd-443ca5fe094f req-ab41614a-717e-42b4-9a6e-c5bc70add334 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.791 256736 DEBUG oslo_concurrency.lockutils [req-7f0d203d-8a43-48a2-94cd-443ca5fe094f req-ab41614a-717e-42b4-9a6e-c5bc70add334 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.791 256736 DEBUG oslo_concurrency.lockutils [req-7f0d203d-8a43-48a2-94cd-443ca5fe094f req-ab41614a-717e-42b4-9a6e-c5bc70add334 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.792 256736 DEBUG nova.compute.manager [req-7f0d203d-8a43-48a2-94cd-443ca5fe094f req-ab41614a-717e-42b4-9a6e-c5bc70add334 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] No waiting events found dispatching network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:03:02 compute-0 nova_compute[256729]: 2025-11-29 08:03:02.792 256736 WARNING nova.compute.manager [req-7f0d203d-8a43-48a2-94cd-443ca5fe094f req-ab41614a-717e-42b4-9a6e-c5bc70add334 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received unexpected event network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f for instance with vm_state building and task_state spawning.
Nov 29 08:03:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:02.994 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 179 op/s
Nov 29 08:03:03 compute-0 ceph-mon[75050]: pgmap v1856: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 223 op/s
Nov 29 08:03:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1482723151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409835172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.489 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.494 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.550 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:03:03 compute-0 sshd-session[289978]: Invalid user git from 143.14.121.41 port 40200
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.711 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.711 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.882 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403383.8817415, 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.882 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] VM Started (Lifecycle Event)
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.885 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.889 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.892 256736 INFO nova.virt.libvirt.driver [-] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Instance spawned successfully.
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.893 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.920 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.921 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.921 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.922 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.924 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.924 256736 DEBUG nova.virt.libvirt.driver [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.929 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:03 compute-0 nova_compute[256729]: 2025-11-29 08:03:03.935 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:03:03 compute-0 sshd-session[289978]: Connection closed by invalid user git 143.14.121.41 port 40200 [preauth]
Nov 29 08:03:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Nov 29 08:03:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 08:03:03 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.018 256736 INFO nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Took 8.67 seconds to spawn the instance on the hypervisor.
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.018 256736 DEBUG nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.027 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.027 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403383.8842003, 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.028 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] VM Paused (Lifecycle Event)
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.109 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.114 256736 INFO nova.compute.manager [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Took 12.30 seconds to build instance.
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.119 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403383.88848, 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.119 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] VM Resumed (Lifecycle Event)
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.212 256736 DEBUG oslo_concurrency.lockutils [None req-fdd856ae-01b3-438a-8cb8-01ad0155af8e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.219 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.221 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:03:04 compute-0 ceph-mon[75050]: pgmap v1857: 305 pgs: 305 active+clean; 248 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 179 op/s
Nov 29 08:03:04 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1409835172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:04 compute-0 ceph-mon[75050]: osdmap e339: 3 total, 3 up, 3 in
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.400 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.712 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.713 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:03:04 compute-0 nova_compute[256729]: 2025-11-29 08:03:04.713 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:03:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:04 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4208008205' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:04 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4208008205' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 27 KiB/s wr, 62 op/s
Nov 29 08:03:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:03:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 19K writes, 70K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 6532 syncs, 2.97 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8126 writes, 29K keys, 8126 commit groups, 1.0 writes per commit group, ingest: 21.48 MB, 0.04 MB/s
                                           Interval WAL: 8126 writes, 3262 syncs, 2.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:03:05 compute-0 nova_compute[256729]: 2025-11-29 08:03:05.292 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:05 compute-0 nova_compute[256729]: 2025-11-29 08:03:05.293 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:05 compute-0 nova_compute[256729]: 2025-11-29 08:03:05.293 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:03:05 compute-0 nova_compute[256729]: 2025-11-29 08:03:05.294 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid a3133710-8c54-433d-9263-c081a69bf339 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4208008205' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4208008205' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:05 compute-0 sudo[290209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:05 compute-0 sudo[290209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:05 compute-0 sudo[290209]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:05 compute-0 sudo[290234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:05 compute-0 sudo[290234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:05 compute-0 sudo[290234]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:05 compute-0 sudo[290259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:05 compute-0 sudo[290259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:05 compute-0 sudo[290259]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:05 compute-0 sudo[290284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:03:05 compute-0 sudo[290284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:03:05
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'vms', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups']
Nov 29 08:03:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:03:06 compute-0 sudo[290284]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:03:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:03:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:03:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:03:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:03:06 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 90115299-8d7c-4a1d-9fce-90ddb7275578 does not exist
Nov 29 08:03:06 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 9312960f-1c4a-4d7a-a8aa-1915ecb01ea3 does not exist
Nov 29 08:03:06 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 41abad0e-e66f-4d13-89a6-79c6e7529216 does not exist
Nov 29 08:03:06 compute-0 ceph-mon[75050]: pgmap v1859: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 27 KiB/s wr, 62 op/s
Nov 29 08:03:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:03:06 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:03:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:03:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:03:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:03:06 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:03:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:03:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:06 compute-0 sudo[290340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:06 compute-0 sudo[290340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:06 compute-0 sudo[290340]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:06 compute-0 sudo[290365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:06 compute-0 sudo[290365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:06 compute-0 sudo[290365]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:06 compute-0 sudo[290390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:06 compute-0 sudo[290390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:06 compute-0 sudo[290390]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:06 compute-0 sudo[290415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:03:06 compute-0 sudo[290415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 21 KiB/s wr, 110 op/s
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:03:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:03:07 compute-0 podman[290479]: 2025-11-29 08:03:07.099431286 +0000 UTC m=+0.044048837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:07 compute-0 nova_compute[256729]: 2025-11-29 08:03:07.405 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:07 compute-0 nova_compute[256729]: 2025-11-29 08:03:07.997 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updating instance_info_cache with network_info: [{"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:08 compute-0 sshd-session[290207]: Invalid user es from 143.14.121.41 port 51344
Nov 29 08:03:08 compute-0 nova_compute[256729]: 2025-11-29 08:03:08.043 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-a3133710-8c54-433d-9263-c081a69bf339" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:08 compute-0 nova_compute[256729]: 2025-11-29 08:03:08.044 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:03:08 compute-0 nova_compute[256729]: 2025-11-29 08:03:08.044 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:08 compute-0 nova_compute[256729]: 2025-11-29 08:03:08.045 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:08 compute-0 nova_compute[256729]: 2025-11-29 08:03:08.045 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:03:08 compute-0 nova_compute[256729]: 2025-11-29 08:03:08.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:03:08 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:03:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:03:08 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:03:08 compute-0 podman[290479]: 2025-11-29 08:03:08.405026919 +0000 UTC m=+1.349644460 container create 837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:03:08 compute-0 sshd-session[290207]: Connection closed by invalid user es 143.14.121.41 port 51344 [preauth]
Nov 29 08:03:08 compute-0 systemd[1]: Started libpod-conmon-837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11.scope.
Nov 29 08:03:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:08 compute-0 podman[290479]: 2025-11-29 08:03:08.529692437 +0000 UTC m=+1.474309978 container init 837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:08 compute-0 podman[290479]: 2025-11-29 08:03:08.540088164 +0000 UTC m=+1.484705685 container start 837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:08 compute-0 podman[290479]: 2025-11-29 08:03:08.54439451 +0000 UTC m=+1.489012031 container attach 837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:03:08 compute-0 systemd[1]: libpod-837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11.scope: Deactivated successfully.
Nov 29 08:03:08 compute-0 fervent_merkle[290495]: 167 167
Nov 29 08:03:08 compute-0 conmon[290495]: conmon 837a89dd2de66f40004b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11.scope/container/memory.events
Nov 29 08:03:08 compute-0 podman[290479]: 2025-11-29 08:03:08.549925816 +0000 UTC m=+1.494543357 container died 837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:03:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8693126a97f7cb46a5af2e8d0c77b43944e23452ada7d9b2fdd61461b438ccc3-merged.mount: Deactivated successfully.
Nov 29 08:03:08 compute-0 podman[290479]: 2025-11-29 08:03:08.595419082 +0000 UTC m=+1.540036603 container remove 837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 08:03:08 compute-0 systemd[1]: libpod-conmon-837a89dd2de66f40004bdf5ba401a803ddc4c872fc09786168381a741507cd11.scope: Deactivated successfully.
Nov 29 08:03:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1391028293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1391028293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2417944408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:08 compute-0 podman[290519]: 2025-11-29 08:03:08.823701495 +0000 UTC m=+0.061226635 container create 159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:08 compute-0 systemd[1]: Started libpod-conmon-159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4.scope.
Nov 29 08:03:08 compute-0 podman[290519]: 2025-11-29 08:03:08.801482592 +0000 UTC m=+0.039007742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cff2188239167ddf0c7f3fdb82fa53407d7a3dda3f00752e068fed6bf1c37ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cff2188239167ddf0c7f3fdb82fa53407d7a3dda3f00752e068fed6bf1c37ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cff2188239167ddf0c7f3fdb82fa53407d7a3dda3f00752e068fed6bf1c37ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cff2188239167ddf0c7f3fdb82fa53407d7a3dda3f00752e068fed6bf1c37ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cff2188239167ddf0c7f3fdb82fa53407d7a3dda3f00752e068fed6bf1c37ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:08 compute-0 podman[290519]: 2025-11-29 08:03:08.943872753 +0000 UTC m=+0.181397983 container init 159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 08:03:08 compute-0 podman[290519]: 2025-11-29 08:03:08.954067246 +0000 UTC m=+0.191592386 container start 159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 08:03:08 compute-0 podman[290519]: 2025-11-29 08:03:08.957190629 +0000 UTC m=+0.194715799 container attach 159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:03:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 96 KiB/s wr, 153 op/s
Nov 29 08:03:09 compute-0 ovn_controller[153383]: 2025-11-29T08:03:09Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:bb:d9 10.100.0.6
Nov 29 08:03:09 compute-0 ovn_controller[153383]: 2025-11-29T08:03:09Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:bb:d9 10.100.0.6
Nov 29 08:03:09 compute-0 ceph-mon[75050]: pgmap v1860: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 21 KiB/s wr, 110 op/s
Nov 29 08:03:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1391028293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1391028293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2417944408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Nov 29 08:03:09 compute-0 nova_compute[256729]: 2025-11-29 08:03:09.421 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 08:03:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Nov 29 08:03:09 compute-0 nova_compute[256729]: 2025-11-29 08:03:09.957 256736 DEBUG nova.compute.manager [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-changed-bf742235-ed01-4672-8d0f-37c829df931f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:09 compute-0 nova_compute[256729]: 2025-11-29 08:03:09.959 256736 DEBUG nova.compute.manager [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Refreshing instance network info cache due to event network-changed-bf742235-ed01-4672-8d0f-37c829df931f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:09 compute-0 nova_compute[256729]: 2025-11-29 08:03:09.960 256736 DEBUG oslo_concurrency.lockutils [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:09 compute-0 nova_compute[256729]: 2025-11-29 08:03:09.961 256736 DEBUG oslo_concurrency.lockutils [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:09 compute-0 nova_compute[256729]: 2025-11-29 08:03:09.962 256736 DEBUG nova.network.neutron [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Refreshing network info cache for port bf742235-ed01-4672-8d0f-37c829df931f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:10 compute-0 nervous_tharp[290537]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:03:10 compute-0 nervous_tharp[290537]: --> relative data size: 1.0
Nov 29 08:03:10 compute-0 nervous_tharp[290537]: --> All data devices are unavailable
Nov 29 08:03:10 compute-0 systemd[1]: libpod-159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4.scope: Deactivated successfully.
Nov 29 08:03:10 compute-0 podman[290519]: 2025-11-29 08:03:10.135417941 +0000 UTC m=+1.372943081 container died 159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:03:10 compute-0 systemd[1]: libpod-159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4.scope: Consumed 1.092s CPU time.
Nov 29 08:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cff2188239167ddf0c7f3fdb82fa53407d7a3dda3f00752e068fed6bf1c37ef-merged.mount: Deactivated successfully.
Nov 29 08:03:10 compute-0 podman[290519]: 2025-11-29 08:03:10.215188851 +0000 UTC m=+1.452714001 container remove 159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:03:10 compute-0 systemd[1]: libpod-conmon-159c4d72a356e48b7d317a33d289331d160ea0dcf457bfb1e5b8471e727f23e4.scope: Deactivated successfully.
Nov 29 08:03:10 compute-0 sudo[290415]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:10 compute-0 sudo[290577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:10 compute-0 sudo[290577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:10 compute-0 sudo[290577]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:10 compute-0 sudo[290602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:10 compute-0 sudo[290602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:10 compute-0 sudo[290602]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:03:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 19K writes, 75K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 6587 syncs, 3.02 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8309 writes, 29K keys, 8309 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s
                                           Interval WAL: 8309 writes, 3430 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:03:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Nov 29 08:03:10 compute-0 ceph-mon[75050]: pgmap v1861: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 96 KiB/s wr, 153 op/s
Nov 29 08:03:10 compute-0 ceph-mon[75050]: osdmap e340: 3 total, 3 up, 3 in
Nov 29 08:03:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 08:03:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Nov 29 08:03:10 compute-0 sudo[290627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:10 compute-0 sudo[290627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:10 compute-0 sudo[290627]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:10 compute-0 sudo[290652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:03:10 compute-0 sudo[290652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:10 compute-0 podman[290721]: 2025-11-29 08:03:10.948915638 +0000 UTC m=+0.055276337 container create e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 08:03:11 compute-0 systemd[1]: Started libpod-conmon-e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb.scope.
Nov 29 08:03:11 compute-0 podman[290721]: 2025-11-29 08:03:10.920926651 +0000 UTC m=+0.027287380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 98 KiB/s wr, 141 op/s
Nov 29 08:03:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:11 compute-0 podman[290721]: 2025-11-29 08:03:11.063027195 +0000 UTC m=+0.169387894 container init e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:11 compute-0 podman[290721]: 2025-11-29 08:03:11.073991476 +0000 UTC m=+0.180352135 container start e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_davinci, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:03:11 compute-0 podman[290721]: 2025-11-29 08:03:11.07707408 +0000 UTC m=+0.183434739 container attach e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 08:03:11 compute-0 naughty_davinci[290738]: 167 167
Nov 29 08:03:11 compute-0 systemd[1]: libpod-e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb.scope: Deactivated successfully.
Nov 29 08:03:11 compute-0 podman[290743]: 2025-11-29 08:03:11.135214731 +0000 UTC m=+0.036890656 container died e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_davinci, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-174b92369d55dc0d659f3a35c6703672df932c7ba551cc860355909d3ff782dd-merged.mount: Deactivated successfully.
Nov 29 08:03:11 compute-0 podman[290743]: 2025-11-29 08:03:11.190750484 +0000 UTC m=+0.092426359 container remove e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:11 compute-0 systemd[1]: libpod-conmon-e0397c07e23897f764e18f04aeafb1d423afd87fb673431c8fe4f000c50849fb.scope: Deactivated successfully.
Nov 29 08:03:11 compute-0 nova_compute[256729]: 2025-11-29 08:03:11.334 256736 DEBUG nova.network.neutron [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updated VIF entry in instance network info cache for port bf742235-ed01-4672-8d0f-37c829df931f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:03:11 compute-0 nova_compute[256729]: 2025-11-29 08:03:11.338 256736 DEBUG nova.network.neutron [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updating instance_info_cache with network_info: [{"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:11 compute-0 nova_compute[256729]: 2025-11-29 08:03:11.363 256736 DEBUG oslo_concurrency.lockutils [req-817090af-ccce-4469-b06e-9713c2964aa7 req-f72e54e1-436c-444e-85fe-0948e71b44e3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:11 compute-0 podman[290765]: 2025-11-29 08:03:11.420855586 +0000 UTC m=+0.066111415 container create 88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 08:03:11 compute-0 ceph-mon[75050]: osdmap e341: 3 total, 3 up, 3 in
Nov 29 08:03:11 compute-0 podman[290765]: 2025-11-29 08:03:11.385538324 +0000 UTC m=+0.030794203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:11 compute-0 systemd[1]: Started libpod-conmon-88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961.scope.
Nov 29 08:03:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cfa2572645d60c5fd9ef2019184a3a031c7c70ddbf296abc3410a3f836844b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cfa2572645d60c5fd9ef2019184a3a031c7c70ddbf296abc3410a3f836844b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cfa2572645d60c5fd9ef2019184a3a031c7c70ddbf296abc3410a3f836844b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cfa2572645d60c5fd9ef2019184a3a031c7c70ddbf296abc3410a3f836844b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:11 compute-0 podman[290765]: 2025-11-29 08:03:11.544766755 +0000 UTC m=+0.190022624 container init 88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bose, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:03:11 compute-0 podman[290765]: 2025-11-29 08:03:11.557566446 +0000 UTC m=+0.202822285 container start 88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:03:11 compute-0 podman[290765]: 2025-11-29 08:03:11.563242117 +0000 UTC m=+0.208498016 container attach 88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:11 compute-0 sshd-session[290514]: Invalid user dd from 143.14.121.41 port 51346
Nov 29 08:03:12 compute-0 vibrant_bose[290782]: {
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:     "0": [
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:         {
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "devices": [
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "/dev/loop3"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             ],
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_name": "ceph_lv0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_size": "21470642176",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "name": "ceph_lv0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "tags": {
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cluster_name": "ceph",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.crush_device_class": "",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.encrypted": "0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osd_id": "0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.type": "block",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.vdo": "0"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             },
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "type": "block",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "vg_name": "ceph_vg0"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:         }
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:     ],
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:     "1": [
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:         {
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "devices": [
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "/dev/loop4"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             ],
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_name": "ceph_lv1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_size": "21470642176",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "name": "ceph_lv1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "tags": {
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cluster_name": "ceph",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.crush_device_class": "",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.encrypted": "0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osd_id": "1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.type": "block",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.vdo": "0"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             },
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "type": "block",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "vg_name": "ceph_vg1"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:         }
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:     ],
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:     "2": [
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:         {
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "devices": [
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "/dev/loop5"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             ],
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_name": "ceph_lv2",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_size": "21470642176",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "name": "ceph_lv2",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "tags": {
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.cluster_name": "ceph",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.crush_device_class": "",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.encrypted": "0",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osd_id": "2",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.type": "block",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:                 "ceph.vdo": "0"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             },
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "type": "block",
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:             "vg_name": "ceph_vg2"
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:         }
Nov 29 08:03:12 compute-0 vibrant_bose[290782]:     ]
Nov 29 08:03:12 compute-0 vibrant_bose[290782]: }
Nov 29 08:03:12 compute-0 sshd-session[290514]: Connection closed by invalid user dd 143.14.121.41 port 51346 [preauth]
Nov 29 08:03:12 compute-0 systemd[1]: libpod-88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961.scope: Deactivated successfully.
Nov 29 08:03:12 compute-0 podman[290791]: 2025-11-29 08:03:12.394997701 +0000 UTC m=+0.035987262 container died 88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:03:12 compute-0 nova_compute[256729]: 2025-11-29 08:03:12.407 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Nov 29 08:03:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 08:03:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Nov 29 08:03:12 compute-0 ceph-mon[75050]: pgmap v1864: 305 pgs: 305 active+clean; 248 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 98 KiB/s wr, 141 op/s
Nov 29 08:03:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4cfa2572645d60c5fd9ef2019184a3a031c7c70ddbf296abc3410a3f836844b-merged.mount: Deactivated successfully.
Nov 29 08:03:12 compute-0 podman[290791]: 2025-11-29 08:03:12.781304894 +0000 UTC m=+0.422294405 container remove 88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bose, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 08:03:12 compute-0 systemd[1]: libpod-conmon-88f10f231a363dd7795b2efd2a1e21147cce68a5e59af78e5fe4bd90694a2961.scope: Deactivated successfully.
Nov 29 08:03:12 compute-0 sudo[290652]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:12 compute-0 sudo[290807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:12 compute-0 sudo[290807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:12 compute-0 sudo[290807]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:12 compute-0 sudo[290833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:03:12 compute-0 sudo[290833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:12 compute-0 sudo[290833]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 267 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 203 op/s
Nov 29 08:03:13 compute-0 sudo[290858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:13 compute-0 sudo[290858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:13 compute-0 sudo[290858]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:13 compute-0 sudo[290883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:03:13 compute-0 sudo[290883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/909517564' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/909517564' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:13 compute-0 podman[290950]: 2025-11-29 08:03:13.548636388 +0000 UTC m=+0.053611372 container create 36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 08:03:13 compute-0 systemd[1]: Started libpod-conmon-36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7.scope.
Nov 29 08:03:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:13 compute-0 podman[290950]: 2025-11-29 08:03:13.521479703 +0000 UTC m=+0.026454697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:13 compute-0 podman[290950]: 2025-11-29 08:03:13.628938112 +0000 UTC m=+0.133913076 container init 36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 08:03:13 compute-0 podman[290950]: 2025-11-29 08:03:13.638090796 +0000 UTC m=+0.143065740 container start 36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:03:13 compute-0 podman[290950]: 2025-11-29 08:03:13.641658321 +0000 UTC m=+0.146633275 container attach 36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 08:03:13 compute-0 hopeful_sanderson[290966]: 167 167
Nov 29 08:03:13 compute-0 systemd[1]: libpod-36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7.scope: Deactivated successfully.
Nov 29 08:03:13 compute-0 podman[290950]: 2025-11-29 08:03:13.645777111 +0000 UTC m=+0.150752085 container died 36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:03:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-65eb00d081405f6ac8f4f9a6b5b0b53d1189df8c183389ad8b49edfa83f1e39c-merged.mount: Deactivated successfully.
Nov 29 08:03:13 compute-0 podman[290950]: 2025-11-29 08:03:13.694193394 +0000 UTC m=+0.199168338 container remove 36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sanderson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:13 compute-0 systemd[1]: libpod-conmon-36c36a928961d6c81f7f42f3a611fed5dfd5d77e0012d9b969b0373dd6fcecb7.scope: Deactivated successfully.
Nov 29 08:03:13 compute-0 ceph-mon[75050]: osdmap e342: 3 total, 3 up, 3 in
Nov 29 08:03:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/909517564' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/909517564' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:13 compute-0 podman[290992]: 2025-11-29 08:03:13.87948861 +0000 UTC m=+0.048332971 container create ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:03:13 compute-0 systemd[1]: Started libpod-conmon-ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722.scope.
Nov 29 08:03:13 compute-0 podman[290992]: 2025-11-29 08:03:13.859578538 +0000 UTC m=+0.028422929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:03:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72794a27b00d133c8cd28e9439e60bdb409c73b77fb0154374b7246dc2852d85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72794a27b00d133c8cd28e9439e60bdb409c73b77fb0154374b7246dc2852d85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Nov 29 08:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72794a27b00d133c8cd28e9439e60bdb409c73b77fb0154374b7246dc2852d85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72794a27b00d133c8cd28e9439e60bdb409c73b77fb0154374b7246dc2852d85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:03:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 08:03:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Nov 29 08:03:13 compute-0 podman[290992]: 2025-11-29 08:03:13.987873623 +0000 UTC m=+0.156717984 container init ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sutherland, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:03:14 compute-0 podman[290992]: 2025-11-29 08:03:14.003755297 +0000 UTC m=+0.172599698 container start ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 08:03:14 compute-0 podman[290992]: 2025-11-29 08:03:14.007641871 +0000 UTC m=+0.176486322 container attach ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:03:14 compute-0 nova_compute[256729]: 2025-11-29 08:03:14.423 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:14 compute-0 ceph-mon[75050]: pgmap v1866: 305 pgs: 305 active+clean; 267 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 203 op/s
Nov 29 08:03:14 compute-0 ceph-mon[75050]: osdmap e343: 3 total, 3 up, 3 in
Nov 29 08:03:14 compute-0 nice_sutherland[291009]: {
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "osd_id": 2,
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "type": "bluestore"
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:     },
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "osd_id": 1,
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "type": "bluestore"
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:     },
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "osd_id": 0,
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:         "type": "bluestore"
Nov 29 08:03:14 compute-0 nice_sutherland[291009]:     }
Nov 29 08:03:14 compute-0 nice_sutherland[291009]: }
Nov 29 08:03:14 compute-0 sshd-session[290804]: Invalid user bitrix from 143.14.121.41 port 49458
Nov 29 08:03:14 compute-0 systemd[1]: libpod-ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722.scope: Deactivated successfully.
Nov 29 08:03:14 compute-0 podman[290992]: 2025-11-29 08:03:14.945376303 +0000 UTC m=+1.114220674 container died ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 08:03:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-72794a27b00d133c8cd28e9439e60bdb409c73b77fb0154374b7246dc2852d85-merged.mount: Deactivated successfully.
Nov 29 08:03:14 compute-0 podman[290992]: 2025-11-29 08:03:14.994086134 +0000 UTC m=+1.162930485 container remove ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 08:03:15 compute-0 systemd[1]: libpod-conmon-ddf4061cd26373d5e54ba4183d8b57e1444ebf08a438b09950813a808a4b8722.scope: Deactivated successfully.
Nov 29 08:03:15 compute-0 sudo[290883]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 281 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 710 KiB/s rd, 4.5 MiB/s wr, 164 op/s
Nov 29 08:03:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:03:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:03:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:03:15 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 03ffe2f6-aa25-4baf-93da-05e3c9d5d8b7 does not exist
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 8bfe08c8-12dd-4190-a509-35887995df20 does not exist
Nov 29 08:03:15 compute-0 sudo[291057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:03:15 compute-0 sudo[291057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:15 compute-0 sudo[291057]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:15 compute-0 sudo[291082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:03:15 compute-0 sudo[291082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:03:15 compute-0 sudo[291082]: pam_unix(sudo:session): session closed for user root
Nov 29 08:03:15 compute-0 sshd-session[290804]: Connection closed by invalid user bitrix 143.14.121.41 port 49458 [preauth]
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.2780287491046426e-06 of space, bias 1.0, pg target 0.0015834086247313928 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002929051592921795 of space, bias 1.0, pg target 0.8787154778765385 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.266792016669923e-07 of space, bias 1.0, pg target 0.0002480037605000977 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:03:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:03:16 compute-0 ceph-mon[75050]: pgmap v1868: 305 pgs: 305 active+clean; 281 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 710 KiB/s rd, 4.5 MiB/s wr, 164 op/s
Nov 29 08:03:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:03:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:03:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:03:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 62K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 5324 syncs, 3.09 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6905 writes, 23K keys, 6905 commit groups, 1.0 writes per commit group, ingest: 19.97 MB, 0.03 MB/s
                                           Interval WAL: 6905 writes, 2834 syncs, 2.44 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:03:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/501819516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 281 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 208 op/s
Nov 29 08:03:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/501819516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:17 compute-0 nova_compute[256729]: 2025-11-29 08:03:17.410 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:17 compute-0 ovn_controller[153383]: 2025-11-29T08:03:17Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:20:11 10.100.0.7
Nov 29 08:03:17 compute-0 ovn_controller[153383]: 2025-11-29T08:03:17Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:20:11 10.100.0.7
Nov 29 08:03:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Nov 29 08:03:18 compute-0 ceph-mon[75050]: pgmap v1869: 305 pgs: 305 active+clean; 281 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 208 op/s
Nov 29 08:03:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 08:03:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Nov 29 08:03:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Nov 29 08:03:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 08:03:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Nov 29 08:03:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 301 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 8.5 MiB/s wr, 188 op/s
Nov 29 08:03:19 compute-0 ceph-mon[75050]: osdmap e344: 3 total, 3 up, 3 in
Nov 29 08:03:19 compute-0 ceph-mon[75050]: osdmap e345: 3 total, 3 up, 3 in
Nov 29 08:03:19 compute-0 sshd-session[291107]: Invalid user api from 143.14.121.41 port 49472
Nov 29 08:03:19 compute-0 nova_compute[256729]: 2025-11-29 08:03:19.469 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:19 compute-0 sshd-session[291107]: Connection closed by invalid user api 143.14.121.41 port 49472 [preauth]
Nov 29 08:03:20 compute-0 ceph-mon[75050]: pgmap v1872: 305 pgs: 305 active+clean; 301 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 8.5 MiB/s wr, 188 op/s
Nov 29 08:03:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 301 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 5.9 MiB/s wr, 132 op/s
Nov 29 08:03:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Nov 29 08:03:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 08:03:21 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Nov 29 08:03:21 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Check health
Nov 29 08:03:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3794848706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3794848706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:22 compute-0 ceph-mon[75050]: pgmap v1873: 305 pgs: 305 active+clean; 301 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 5.9 MiB/s wr, 132 op/s
Nov 29 08:03:22 compute-0 ceph-mon[75050]: osdmap e346: 3 total, 3 up, 3 in
Nov 29 08:03:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3794848706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3794848706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:22 compute-0 nova_compute[256729]: 2025-11-29 08:03:22.413 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:22 compute-0 sshd-session[291109]: Invalid user admin from 143.14.121.41 port 49484
Nov 29 08:03:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 346 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 8.4 MiB/s wr, 152 op/s
Nov 29 08:03:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Nov 29 08:03:23 compute-0 sshd-session[291109]: Connection closed by invalid user admin 143.14.121.41 port 49484 [preauth]
Nov 29 08:03:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 08:03:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Nov 29 08:03:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:24 compute-0 ceph-mon[75050]: pgmap v1875: 305 pgs: 305 active+clean; 346 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 8.4 MiB/s wr, 152 op/s
Nov 29 08:03:24 compute-0 ceph-mon[75050]: osdmap e347: 3 total, 3 up, 3 in
Nov 29 08:03:24 compute-0 nova_compute[256729]: 2025-11-29 08:03:24.472 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 4.7 MiB/s wr, 115 op/s
Nov 29 08:03:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Nov 29 08:03:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 08:03:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Nov 29 08:03:26 compute-0 ceph-mon[75050]: pgmap v1877: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 4.7 MiB/s wr, 115 op/s
Nov 29 08:03:26 compute-0 ceph-mon[75050]: osdmap e348: 3 total, 3 up, 3 in
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.354 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.355 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.356 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.357 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.357 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.360 256736 INFO nova.compute.manager [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Terminating instance
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.362 256736 DEBUG nova.compute.manager [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:03:26 compute-0 kernel: tapbf742235-ed (unregistering): left promiscuous mode
Nov 29 08:03:26 compute-0 NetworkManager[48962]: <info>  [1764403406.4321] device (tapbf742235-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.445 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 ovn_controller[153383]: 2025-11-29T08:03:26Z|00184|binding|INFO|Releasing lport bf742235-ed01-4672-8d0f-37c829df931f from this chassis (sb_readonly=0)
Nov 29 08:03:26 compute-0 ovn_controller[153383]: 2025-11-29T08:03:26Z|00185|binding|INFO|Setting lport bf742235-ed01-4672-8d0f-37c829df931f down in Southbound
Nov 29 08:03:26 compute-0 ovn_controller[153383]: 2025-11-29T08:03:26Z|00186|binding|INFO|Removing iface tapbf742235-ed ovn-installed in OVS
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.449 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.482 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 29 08:03:26 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 16.733s CPU time.
Nov 29 08:03:26 compute-0 systemd-machined[217781]: Machine qemu-18-instance-00000012 terminated.
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.538 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:20:11 10.100.0.7'], port_security=['fa:16:3e:43:20:11 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc8975a3-8b30-4fd7-b465-76d299802b38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=bf742235-ed01-4672-8d0f-37c829df931f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.540 163655 INFO neutron.agent.ovn.metadata.agent [-] Port bf742235-ed01-4672-8d0f-37c829df931f in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c unbound from our chassis
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.543 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.545 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[893b33dd-0ab3-4e90-871b-23b5049fba77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.546 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace which is not needed anymore
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.628 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.637 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.641 256736 INFO nova.virt.libvirt.driver [-] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Instance destroyed successfully.
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.642 256736 DEBUG nova.objects.instance [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'resources' on Instance uuid 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:26 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[290102]: [NOTICE]   (290106) : haproxy version is 2.8.14-c23fe91
Nov 29 08:03:26 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[290102]: [NOTICE]   (290106) : path to executable is /usr/sbin/haproxy
Nov 29 08:03:26 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[290102]: [WARNING]  (290106) : Exiting Master process...
Nov 29 08:03:26 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[290102]: [ALERT]    (290106) : Current worker (290108) exited with code 143 (Terminated)
Nov 29 08:03:26 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[290102]: [WARNING]  (290106) : All workers exited. Exiting... (0)
Nov 29 08:03:26 compute-0 systemd[1]: libpod-b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f.scope: Deactivated successfully.
Nov 29 08:03:26 compute-0 podman[291149]: 2025-11-29 08:03:26.744897591 +0000 UTC m=+0.048516176 container died b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f-userdata-shm.mount: Deactivated successfully.
Nov 29 08:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-349c6c3816f9e7582eb3b9a279cf4b949df0c0ab238debfc3c264391e2bfa0b8-merged.mount: Deactivated successfully.
Nov 29 08:03:26 compute-0 podman[291149]: 2025-11-29 08:03:26.782572417 +0000 UTC m=+0.086191042 container cleanup b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:03:26 compute-0 systemd[1]: libpod-conmon-b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f.scope: Deactivated successfully.
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.810 256736 DEBUG nova.virt.libvirt.vif [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2119805276',display_name='tempest-TransferEncryptedVolumeTest-server-2119805276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2119805276',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOK0uPKk5+iwu2ACxwLiXPfPFKjqAeuswoaNdzNGpYFdv9fCZRffGqJNvJmfqnbg+KUupmPFmswjEh+khO5A2TFlJ9LMuOBogxQ7cFR7kmTFduCVQRkpWi0Jux9/KRhlg==',key_name='tempest-TransferEncryptedVolumeTest-347364744',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:03:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-3vdrc1i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:03:04Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.811 256736 DEBUG nova.network.os_vif_util [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "bf742235-ed01-4672-8d0f-37c829df931f", "address": "fa:16:3e:43:20:11", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf742235-ed", "ovs_interfaceid": "bf742235-ed01-4672-8d0f-37c829df931f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.812 256736 DEBUG nova.network.os_vif_util [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:20:11,bridge_name='br-int',has_traffic_filtering=True,id=bf742235-ed01-4672-8d0f-37c829df931f,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf742235-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.813 256736 DEBUG os_vif [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:20:11,bridge_name='br-int',has_traffic_filtering=True,id=bf742235-ed01-4672-8d0f-37c829df931f,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf742235-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
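
The unplug above goes through the os-vif library: Nova converts its VIF dict into a typed VIFOpenVSwitch object (the two os_vif_util lines) and os_vif dispatches it to the 'ovs' plugin. A minimal standalone sketch of the same call, reusing the identifiers from the log; running it would require root and the os-vif plugins installed:

    # Sketch: the os-vif unplug call behind the "Unplugging vif ..." line.
    # IDs are copied from the log; field choices mirror the object repr above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    net = network.Network(id='45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='bf742235-ed01-4672-8d0f-37c829df931f',
        address='fa:16:3e:43:20:11',
        network=net,
        plugin='ovs',
        vif_name='tapbf742235-ed',
        bridge_name='br-int')
    inst = instance_info.InstanceInfo(
        uuid='0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d',
        name='tempest-TransferEncryptedVolumeTest-server-2119805276')

    os_vif.unplug(ovs_vif, inst)  # same entry point as os_vif/__init__.py:109 above
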
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.817 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.817 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf742235-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
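
The DelPortCommand transaction above is ovsdbapp talking to the local OVSDB server; the surrounding POLLIN/timeout vlog lines are just its IDL event loop. Issued directly, the same deletion looks roughly like this (the unix-socket endpoint is an assumption; the command and its if_exists semantics match the log):

    # Sketch: replay DelPortCommand(port=tapbf742235-ed, bridge=br-int,
    # if_exists=True) through ovsdbapp. The OVSDB endpoint is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True turns a missing port into a no-op instead of an error
    api.del_port('tapbf742235-ed', bridge='br-int',
                 if_exists=True).execute(check_error=True)
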
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.820 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.824 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.829 256736 INFO os_vif [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:20:11,bridge_name='br-int',has_traffic_filtering=True,id=bf742235-ed01-4672-8d0f-37c829df931f,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf742235-ed')
Nov 29 08:03:26 compute-0 podman[291178]: 2025-11-29 08:03:26.863409006 +0000 UTC m=+0.053549311 container remove b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.873 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[21313bd3-762a-41a1-860a-2af509adb296]: (4, ('Sat Nov 29 08:03:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f)\nb5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f\nSat Nov 29 08:03:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (b5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f)\nb5e3f0630e6c8d2ec97ad96d4469473288a63676fa284a465639d31513a6ca1f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.875 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0393ee75-b230-4530-9fce-e28d9f7328b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
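
The privsep: reply[...] lines here and below are oslo.privsep round-trips: unprivileged agent code calls a decorated function, the call is forwarded to a root helper daemon, and the (4, <result>) tuples are the serialized return values coming back over the channel. Declaring such an entrypoint looks roughly like this (context name, capabilities, and body are illustrative, not neutron's exact definitions):

    # Sketch of an oslo.privsep entrypoint of the kind producing the
    # "privsep: reply[...]" lines. Capabilities and body are illustrative.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def stop_container(name):
        # executes inside the privileged daemon; whatever is returned here
        # is what the agent logs as "reply[...]: (4, <result>)"
        ...
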
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.876 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.879 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 kernel: tap45f1bbc0-c0: left promiscuous mode
Nov 29 08:03:26 compute-0 nova_compute[256729]: 2025-11-29 08:03:26.912 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.917 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f961a4a7-7626-4d96-bc95-4d42e20aa4a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.935 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[105f93ae-65e5-4642-9068-4ddb13f26063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.936 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5078c324-292c-4615-98fe-8ed6b60b366e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.963 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1b33e664-a399-4c2c-b9a0-3e00c3a56028]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569555, 'reachable_time': 28154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291208, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.966 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:03:26 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:26.966 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[cd65ca12-f7c9-4774-832f-d76d6e6b66f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d45f1bbc0\x2dc06e\x2d4a64\x2d9d82\x2d3a4cbaa9482c.mount: Deactivated successfully.
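
With the haproxy sidecar gone, the agent tears down the ovnmeta- namespace: the tap port is removed, remove_netns deletes the namespace, and systemd reaps the now-empty /run/netns bind mount (the .mount line above). Stand-alone, the deletion is a one-liner via pyroute2, which is what neutron's ip_lib uses underneath; a sketch:

    # Sketch: delete a named network namespace the way
    # neutron.privileged.agent.linux.ip_lib.remove_netns does (needs root).
    import errno
    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c')
    except OSError as exc:
        if exc.errno != errno.ENOENT:  # tolerate "already gone"
            raise
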
Nov 29 08:03:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 298 KiB/s rd, 4.7 MiB/s wr, 160 op/s
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.033 256736 INFO nova.virt.libvirt.driver [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Deleting instance files /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_del
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.034 256736 INFO nova.virt.libvirt.driver [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Deletion of /var/lib/nova/instances/0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d_del complete
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.069 256736 DEBUG nova.compute.manager [req-30bb063f-0b3c-4d37-b2e1-c98230084beb req-1217cefa-d1a3-4a86-8e1f-52ceecd2c035 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-vif-unplugged-bf742235-ed01-4672-8d0f-37c829df931f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.070 256736 DEBUG oslo_concurrency.lockutils [req-30bb063f-0b3c-4d37-b2e1-c98230084beb req-1217cefa-d1a3-4a86-8e1f-52ceecd2c035 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.070 256736 DEBUG oslo_concurrency.lockutils [req-30bb063f-0b3c-4d37-b2e1-c98230084beb req-1217cefa-d1a3-4a86-8e1f-52ceecd2c035 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.070 256736 DEBUG oslo_concurrency.lockutils [req-30bb063f-0b3c-4d37-b2e1-c98230084beb req-1217cefa-d1a3-4a86-8e1f-52ceecd2c035 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.070 256736 DEBUG nova.compute.manager [req-30bb063f-0b3c-4d37-b2e1-c98230084beb req-1217cefa-d1a3-4a86-8e1f-52ceecd2c035 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] No waiting events found dispatching network-vif-unplugged-bf742235-ed01-4672-8d0f-37c829df931f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.071 256736 DEBUG nova.compute.manager [req-30bb063f-0b3c-4d37-b2e1-c98230084beb req-1217cefa-d1a3-4a86-8e1f-52ceecd2c035 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-vif-unplugged-bf742235-ed01-4672-8d0f-37c829df931f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
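
The acquire/wait/release triple around the ...-events lock is oslo.concurrency's standard pattern; Nova holds a per-instance lock while popping queued external events so Neutron notifications cannot race the local teardown. The primitive itself, as a sketch with the lock name from the log:

    # Sketch of the oslo.concurrency lock pattern behind the
    # "Acquiring/acquired/released" triples above.
    from oslo_concurrency import lockutils

    with lockutils.lock('0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events'):
        # critical section: pop and dispatch pending instance events; the
        # "waited 0.000s" / "held 0.000s" figures are measured by the
        # lockutils wrapper around exactly this region
        pass
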
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.360 256736 INFO nova.compute.manager [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Took 1.00 seconds to destroy the instance on the hypervisor.
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.361 256736 DEBUG oslo.service.loopingcall [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.362 256736 DEBUG nova.compute.manager [-] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.362 256736 DEBUG nova.network.neutron [-] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:03:27 compute-0 nova_compute[256729]: 2025-11-29 08:03:27.416 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:27 compute-0 sshd-session[291111]: Invalid user admin from 143.14.121.41 port 57540
Nov 29 08:03:28 compute-0 sshd-session[291111]: Connection closed by invalid user admin 143.14.121.41 port 57540 [preauth]
Nov 29 08:03:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Nov 29 08:03:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 08:03:28 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Nov 29 08:03:28 compute-0 ceph-mon[75050]: pgmap v1879: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 298 KiB/s rd, 4.7 MiB/s wr, 160 op/s
Nov 29 08:03:28 compute-0 nova_compute[256729]: 2025-11-29 08:03:28.569 256736 DEBUG nova.network.neutron [-] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:28 compute-0 nova_compute[256729]: 2025-11-29 08:03:28.616 256736 DEBUG nova.compute.manager [req-e2c68309-f89d-4f75-8c1a-ee79206c0451 req-699cdb4d-8c75-4a6d-b46a-e5522871526b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-vif-deleted-bf742235-ed01-4672-8d0f-37c829df931f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:28 compute-0 nova_compute[256729]: 2025-11-29 08:03:28.616 256736 INFO nova.compute.manager [req-e2c68309-f89d-4f75-8c1a-ee79206c0451 req-699cdb4d-8c75-4a6d-b46a-e5522871526b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Neutron deleted interface bf742235-ed01-4672-8d0f-37c829df931f; detaching it from the instance and deleting it from the info cache
Nov 29 08:03:28 compute-0 nova_compute[256729]: 2025-11-29 08:03:28.616 256736 DEBUG nova.network.neutron [req-e2c68309-f89d-4f75-8c1a-ee79206c0451 req-699cdb4d-8c75-4a6d-b46a-e5522871526b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:28 compute-0 nova_compute[256729]: 2025-11-29 08:03:28.765 256736 INFO nova.compute.manager [-] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Took 1.40 seconds to deallocate network for instance.
Nov 29 08:03:28 compute-0 nova_compute[256729]: 2025-11-29 08:03:28.777 256736 DEBUG nova.compute.manager [req-e2c68309-f89d-4f75-8c1a-ee79206c0451 req-699cdb4d-8c75-4a6d-b46a-e5522871526b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Detach interface failed, port_id=bf742235-ed01-4672-8d0f-37c829df931f, reason: Instance 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 08:03:28 compute-0 nova_compute[256729]: 2025-11-29 08:03:28.926 256736 INFO nova.compute.manager [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Took 0.16 seconds to detach 1 volumes for instance.
Nov 29 08:03:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Nov 29 08:03:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 722 KiB/s wr, 149 op/s
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.035 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.036 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.131 256736 DEBUG oslo_concurrency.processutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 08:03:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.259 256736 DEBUG nova.compute.manager [req-24d6b210-61b8-4eaa-9bda-c99be0c4fe43 req-d6706dfb-bca8-4356-beb9-64a469257c69 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received event network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.260 256736 DEBUG oslo_concurrency.lockutils [req-24d6b210-61b8-4eaa-9bda-c99be0c4fe43 req-d6706dfb-bca8-4356-beb9-64a469257c69 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.261 256736 DEBUG oslo_concurrency.lockutils [req-24d6b210-61b8-4eaa-9bda-c99be0c4fe43 req-d6706dfb-bca8-4356-beb9-64a469257c69 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.262 256736 DEBUG oslo_concurrency.lockutils [req-24d6b210-61b8-4eaa-9bda-c99be0c4fe43 req-d6706dfb-bca8-4356-beb9-64a469257c69 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.262 256736 DEBUG nova.compute.manager [req-24d6b210-61b8-4eaa-9bda-c99be0c4fe43 req-d6706dfb-bca8-4356-beb9-64a469257c69 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] No waiting events found dispatching network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.263 256736 WARNING nova.compute.manager [req-24d6b210-61b8-4eaa-9bda-c99be0c4fe43 req-d6706dfb-bca8-4356-beb9-64a469257c69 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Received unexpected event network-vif-plugged-bf742235-ed01-4672-8d0f-37c829df931f for instance with vm_state deleted and task_state None.
Nov 29 08:03:29 compute-0 ceph-mon[75050]: osdmap e349: 3 total, 3 up, 3 in
Nov 29 08:03:29 compute-0 ceph-mon[75050]: osdmap e350: 3 total, 3 up, 3 in
Nov 29 08:03:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070509752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.622 256736 DEBUG oslo_concurrency.processutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
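
The resource tracker refreshes disk inventory by shelling out to ceph df (the instance disks live in RBD), which is also what produced the matching handle_command/audit lines from ceph-mon. The call and the cluster totals Nova reads back, as a sketch with the client id and conf path from the log:

    # Sketch: the "ceph df --format=json" round-trip from the log.
    # Requires a reachable cluster and the client.openstack keyring.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total = stats['stats']['total_bytes']        # raw cluster capacity
    avail = stats['stats']['total_avail_bytes']  # free space backing DISK_GB
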
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.629 256736 DEBUG nova.compute.provider_tree [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.807 256736 DEBUG nova.scheduler.client.report [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
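
That inventory payload is what placement schedules against: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked through for the values above:

    # Effective capacity implied by the inventory in the previous line:
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
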
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.839 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:29 compute-0 nova_compute[256729]: 2025-11-29 08:03:29.986 256736 INFO nova.scheduler.client.report [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Deleted allocations for instance 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d
Nov 29 08:03:30 compute-0 nova_compute[256729]: 2025-11-29 08:03:30.273 256736 DEBUG oslo_concurrency.lockutils [None req-0f3037b7-cae5-4a8c-9a85-00a20ed11985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:30 compute-0 ceph-mon[75050]: pgmap v1881: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 722 KiB/s wr, 149 op/s
Nov 29 08:03:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4070509752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 34 KiB/s wr, 106 op/s
Nov 29 08:03:31 compute-0 sshd-session[291210]: Invalid user user from 143.14.121.41 port 57550
Nov 29 08:03:31 compute-0 sshd-session[291210]: Connection closed by invalid user user 143.14.121.41 port 57550 [preauth]
Nov 29 08:03:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Nov 29 08:03:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 08:03:31 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Nov 29 08:03:31 compute-0 nova_compute[256729]: 2025-11-29 08:03:31.821 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:32 compute-0 nova_compute[256729]: 2025-11-29 08:03:32.466 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:32 compute-0 ceph-mon[75050]: pgmap v1883: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 34 KiB/s wr, 106 op/s
Nov 29 08:03:32 compute-0 ceph-mon[75050]: osdmap e351: 3 total, 3 up, 3 in
Nov 29 08:03:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Nov 29 08:03:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 08:03:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Nov 29 08:03:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 653 B/s wr, 18 op/s
Nov 29 08:03:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2651201836' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:33 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2651201836' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:33 compute-0 ceph-mon[75050]: osdmap e352: 3 total, 3 up, 3 in
Nov 29 08:03:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2651201836' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2651201836' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:33 compute-0 podman[291238]: 2025-11-29 08:03:33.734077958 +0000 UTC m=+0.092758897 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:03:33 compute-0 podman[291237]: 2025-11-29 08:03:33.740040458 +0000 UTC m=+0.113976484 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 08:03:33 compute-0 podman[291236]: 2025-11-29 08:03:33.761392727 +0000 UTC m=+0.131822860 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
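
These three health_status events are podman's healthcheck timers running each container's configured test (the /openstack/healthcheck mount in config_data). The last result can be read back out of band; a sketch using a container name from the log:

    # Sketch: read back a container's healthcheck state, matching the
    # health_status=healthy / health_failing_streak=0 fields above.
    import json
    import subprocess

    raw = subprocess.check_output(['podman', 'inspect', 'ovn_metadata_agent'])
    health = json.loads(raw)[0]['State']['Health']
    print(health['Status'], health['FailingStreak'])  # e.g. "healthy 0"
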
Nov 29 08:03:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Nov 29 08:03:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 08:03:34 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Nov 29 08:03:34 compute-0 ceph-mon[75050]: pgmap v1886: 305 pgs: 305 active+clean; 350 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 653 B/s wr, 18 op/s
Nov 29 08:03:34 compute-0 ceph-mon[75050]: osdmap e353: 3 total, 3 up, 3 in
Nov 29 08:03:34 compute-0 sshd-session[291234]: Invalid user ubuntu from 143.14.121.41 port 52162
Nov 29 08:03:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 2.9 KiB/s wr, 87 op/s
Nov 29 08:03:35 compute-0 sshd-session[291234]: Connection closed by invalid user ubuntu 143.14.121.41 port 52162 [preauth]
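
Interleaved with the tempest churn, sshd is rejecting a slow username sweep (admin, user, ubuntu, then root below) from 143.14.121.41. A quick tally of such probes per source address from a saved copy of this journal, as a sketch (the file path is an assumption):

    # Sketch: count "Invalid user" preauth probes per source IP in a
    # saved log file such as this one. The path is an assumption.
    import re
    from collections import Counter

    probe = re.compile(r'Invalid user (\S+) from (\S+) port \d+')
    per_ip = Counter()
    with open('/var/log/messages') as fh:
        for line in fh:
            match = probe.search(line)
            if match:
                per_ip[match.group(2)] += 1
    print(per_ip.most_common(5))  # 143.14.121.41 tops this sample
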
Nov 29 08:03:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:03:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:03:36 compute-0 ceph-mon[75050]: pgmap v1888: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 2.9 KiB/s wr, 87 op/s
Nov 29 08:03:36 compute-0 nova_compute[256729]: 2025-11-29 08:03:36.824 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 5.0 KiB/s wr, 105 op/s
Nov 29 08:03:37 compute-0 nova_compute[256729]: 2025-11-29 08:03:37.468 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:38 compute-0 sshd-session[291298]: Connection closed by authenticating user root 143.14.121.41 port 52166 [preauth]
Nov 29 08:03:38 compute-0 ceph-mon[75050]: pgmap v1889: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 5.0 KiB/s wr, 105 op/s
Nov 29 08:03:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 6.9 KiB/s wr, 86 op/s
Nov 29 08:03:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Nov 29 08:03:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 08:03:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Nov 29 08:03:40 compute-0 ceph-mon[75050]: pgmap v1890: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 6.9 KiB/s wr, 86 op/s
Nov 29 08:03:40 compute-0 ceph-mon[75050]: osdmap e354: 3 total, 3 up, 3 in
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.254 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.254 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.331 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.525 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.525 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.533 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.534 256736 INFO nova.compute.claims [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:03:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2713228593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:40 compute-0 nova_compute[256729]: 2025-11-29 08:03:40.870 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 6.0 KiB/s wr, 72 op/s
Nov 29 08:03:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2713228593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390688920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.339 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.351 256736 DEBUG nova.compute.provider_tree [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:41 compute-0 sshd-session[291300]: Connection closed by authenticating user root 143.14.121.41 port 52180 [preauth]
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.379 256736 DEBUG nova.scheduler.client.report [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.415 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.417 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.488 256736 INFO nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.495 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.496 256736 DEBUG nova.network.neutron [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.519 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.584 256736 INFO nova.virt.block_device [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Booting with volume snapshot 5c0d4ab6-41cc-4a27-8b26-cf2070936d9c at /dev/vda
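
"Booting with volume snapshot ... at /dev/vda" means the create request carried a block_device_mapping_v2 entry asking Cinder to build a fresh volume from that snapshot and use it as the boot disk; note the earlier line where libvirt declines to honour the requested device name. The shape of such an entry, as a sketch (delete_on_termination is an assumption, not visible in the log):

    # Sketch of the block_device_mapping_v2 entry behind
    # "Booting with volume snapshot 5c0d4ab6-... at /dev/vda".
    bdm_v2 = [{
        'boot_index': 0,
        'source_type': 'snapshot',      # build the volume from a snapshot
        'destination_type': 'volume',   # ...and attach it as a Cinder volume
        'uuid': '5c0d4ab6-41cc-4a27-8b26-cf2070936d9c',
        'device_name': '/dev/vda',      # advisory only; libvirt ignores it
        'delete_on_termination': True,  # assumption: typical tempest setting
    }]
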
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.638 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403406.6368282, 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.639 256736 INFO nova.compute.manager [-] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] VM Stopped (Lifecycle Event)
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.679 256736 DEBUG nova.compute.manager [None req-c731e2a1-008b-4564-b528-2b8ba5f1b560 - - - - - -] [instance: 0dfcbed1-3503-4bc9-a2e0-d1af2f1fc25d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.684 256736 DEBUG nova.policy [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9664e420085d412aae898a6ec021b24f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb6854e99614af5b8df420841fde0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
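
The failed network:attach_external_network check is an ordinary oslo.policy evaluation: the credentials in the message (roles reader/member, is_admin False) simply do not satisfy the rule, so Nova proceeds without external-network attach rights rather than failing the build. The primitive, as a sketch with an illustrative admin-only rule:

    # Sketch of the oslo.policy check behind "Policy check for
    # network:attach_external_network failed". Rule string is
    # illustrative, not Nova's exact default.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': 'dfb6854e99614af5b8df420841fde0db'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
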
Nov 29 08:03:41 compute-0 nova_compute[256729]: 2025-11-29 08:03:41.826 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Nov 29 08:03:42 compute-0 ceph-mon[75050]: pgmap v1892: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 6.0 KiB/s wr, 72 op/s
Nov 29 08:03:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3390688920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 08:03:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Nov 29 08:03:42 compute-0 nova_compute[256729]: 2025-11-29 08:03:42.470 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:42 compute-0 nova_compute[256729]: 2025-11-29 08:03:42.660 256736 DEBUG nova.network.neutron [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Successfully created port: 3aeb3e64-1138-4555-bde4-4f5d2e627b7a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
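
_create_port_minimal asks Neutron for a bare port on the instance's tenant network; Nova binds it to the host afterwards, which is why a "Successfully updated port" line follows below. The equivalent stand-alone request through openstacksdk, as a sketch (cloud name and network UUID are placeholders):

    # Sketch: create a minimal Neutron port the way Nova's
    # _create_port_minimal does. Cloud entry and network ID are placeholders.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # assumed clouds.yaml entry
    port = conn.network.create_port(network_id='<tenant-network-uuid>')
    print(port.id)  # logged by Nova as "Successfully created port: <id>"
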
Nov 29 08:03:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 7.7 KiB/s wr, 28 op/s
Nov 29 08:03:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Nov 29 08:03:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 08:03:43 compute-0 ceph-mon[75050]: osdmap e355: 3 total, 3 up, 3 in
Nov 29 08:03:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.376 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.376 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.416 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.531 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.532 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
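The Acquiring/acquired/released triples in these lines are oslo.concurrency's standard lock tracing: the whole build is serialized on a per-instance-UUID lock, and the resource claim on a shared "compute_resources" lock. A minimal sketch of the same pattern, with illustrative function names standing in for nova's _locked_do_build_and_run_instance and instance_claim:

    # oslo.concurrency lock sketch; function names are illustrative.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('7b834f92-a941-48d4-830a-98e70067cabb')
    def locked_build():                 # stands in for nova's
        claim_resources()               # _locked_do_build_and_run_instance

    def claim_resources():
        # Context-manager form, as used around the resource claim.
        with lockutils.lock('compute_resources'):
            pass  # tracker bookkeeping runs while the lock is held

    locked_build()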
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.540 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.541 256736 INFO nova.compute.claims [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.751 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.853 256736 DEBUG nova.network.neutron [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Successfully updated port: 3aeb3e64-1138-4555-bde4-4f5d2e627b7a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.902 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.903 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquired lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:43 compute-0 nova_compute[256729]: 2025-11-29 08:03:43.903 256736 DEBUG nova.network.neutron [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:03:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:03:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030489028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.187 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
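The ceph df round trip above (dispatched at 08:03:43.751, returned 0 in 0.436s) is how the RBD-backed storage layer measures pool capacity for the resource tracker. A hedged sketch of the same call through oslo.concurrency's processutils:

    # Sketch of the subprocess call logged above; execute() returns
    # (stdout, stderr) and raises ProcessExecutionError on a non-zero rc.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])  # raw cluster capacity in bytes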
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.195 256736 DEBUG nova.compute.provider_tree [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:03:44 compute-0 ceph-mon[75050]: pgmap v1894: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 7.7 KiB/s wr, 28 op/s
Nov 29 08:03:44 compute-0 ceph-mon[75050]: osdmap e356: 3 total, 3 up, 3 in
Nov 29 08:03:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3030489028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.215 256736 DEBUG nova.scheduler.client.report [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
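The inventory dict above fixes what placement can schedule on this node. Applying placement's capacity rule, (total - reserved) * allocation_ratio, gives 32 schedulable VCPUs, 7168 MiB of RAM, and 52.2 GiB of disk:

    # Effective capacity implied by the inventory above, using
    # placement's (total - reserved) * allocation_ratio rule.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2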
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.282 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.282 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.369 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.369 256736 DEBUG nova.network.neutron [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.403 256736 INFO nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.440 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.514 256736 INFO nova.virt.block_device [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Booting with volume 1e0633b8-d2a6-4f22-aa22-9308e9b3acc4 at /dev/vda
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.551 256736 DEBUG nova.network.neutron [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.605 256736 DEBUG nova.policy [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb2de7fb67042f89a025f1a3e872530', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.694 256736 DEBUG nova.compute.manager [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-changed-3aeb3e64-1138-4555-bde4-4f5d2e627b7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.695 256736 DEBUG nova.compute.manager [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Refreshing instance network info cache due to event network-changed-3aeb3e64-1138-4555-bde4-4f5d2e627b7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.696 256736 DEBUG oslo_concurrency.lockutils [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.723 256736 DEBUG os_brick.utils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.725 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.745 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.746 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[1925c725-802e-4f51-a3e4-0ac69c9afb73]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.748 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.763 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.764 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[12bbfba5-c782-4dc6-85aa-98c1fb2de6a4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.768 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.784 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.784 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[591c9a88-c1c2-4896-8afe-7e7c81e26d99]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.785 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[26136dd0-a0c3-4f7a-a94b-caf9bec4ad00]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.786 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.825 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.831 256736 DEBUG os_brick.initiator.connectors.lightos [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.832 256736 DEBUG os_brick.initiator.connectors.lightos [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.833 256736 DEBUG os_brick.initiator.connectors.lightos [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.834 256736 DEBUG os_brick.utils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] <== get_connector_properties: return (110ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
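The ==>/<== pair wrapping get_connector_properties is os-brick's trace decorator; between the two lines the privsep daemon probed multipathd, the iSCSI initiator name, and the root filesystem source, as the replies above show. A sketch of the call with the exact arguments logged (running it for real requires the rootwrap/privsep setup the log assumes):

    # Sketch of the os-brick call traced above; the returned dict is what
    # nova hands to Cinder when updating the volume attachment.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    print(props['initiator'])  # iSCSI IQN, as in the return dict above
    print(props['nqn'])        # NVMe host NQN, as in the return dict above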
Nov 29 08:03:44 compute-0 nova_compute[256729]: 2025-11-29 08:03:44.834 256736 DEBUG nova.virt.block_device [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Updating existing volume attachment record: 66df94ab-9486-472d-a6dd-3c6b26a38087 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:03:44 compute-0 sshd-session[291324]: Connection closed by authenticating user root 143.14.121.41 port 52188 [preauth]
Nov 29 08:03:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.7 KiB/s wr, 34 op/s
Nov 29 08:03:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4025288389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.120 256736 DEBUG os_brick.utils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.121 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.138 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.139 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2d84ef-98b0-4ac5-81e2-6369fac3ff12]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.141 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.148 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.150 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.151 256736 INFO nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Creating image(s)
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.152 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.153 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Ensure instance console log exists: /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.154 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.154 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.155 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.154 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.155 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[060fcab3-098d-4eba-93c6-81ba9551c32b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.158 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.172 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.173 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6c9abb-4ea5-4132-8afa-3083b59da1df]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.176 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[54643521-ac8a-46fc-928a-c4620d675849]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.177 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.217 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.221 256736 DEBUG os_brick.initiator.connectors.lightos [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.222 256736 DEBUG os_brick.initiator.connectors.lightos [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.222 256736 DEBUG os_brick.initiator.connectors.lightos [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.223 256736 DEBUG os_brick.utils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.224 256736 DEBUG nova.virt.block_device [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Updating existing volume attachment record: 56b2f4a1-be25-4dd7-ad9e-117ad4f16695 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:03:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Nov 29 08:03:46 compute-0 ceph-mon[75050]: pgmap v1896: 305 pgs: 305 active+clean; 350 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.7 KiB/s wr, 34 op/s
Nov 29 08:03:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4025288389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 08:03:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.586 256736 DEBUG nova.network.neutron [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Updating instance_info_cache with network_info: [{"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.695 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Releasing lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.696 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Instance network_info: |[{"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
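The network_info structure logged twice above is the instance's cached view of its single Neutron port. A short sketch of pulling the useful fields out of it; the file name is hypothetical, the JSON and the expected values are taken from the blob in the log:

    # Pull the commonly needed fields out of the network_info blob above
    # (one VIF with one IPv4 subnet). The file name is hypothetical.
    import json

    with open('network_info.json') as f:
        vif = json.load(f)[0]

    subnet = vif['network']['subnets'][0]
    print(vif['id'])                    # 3aeb3e64-1138-4555-bde4-4f5d2e627b7a
    print(vif['address'])               # fa:16:3e:aa:6f:f0
    print(subnet['ips'][0]['address'])  # 10.100.0.10
    print(vif['devname'], vif['details']['bridge_name'])  # tap3aeb3e64-11 br-int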
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.698 256736 DEBUG oslo_concurrency.lockutils [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.699 256736 DEBUG nova.network.neutron [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Refreshing network info cache for port 3aeb3e64-1138-4555-bde4-4f5d2e627b7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.830 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:46 compute-0 nova_compute[256729]: 2025-11-29 08:03:46.846 256736 DEBUG nova.network.neutron [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Successfully created port: 94ee6a33-75bb-43b8-952b-43a160169df4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:03:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/219916680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 11 KiB/s wr, 94 op/s
Nov 29 08:03:47 compute-0 ceph-mon[75050]: osdmap e357: 3 total, 3 up, 3 in
Nov 29 08:03:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/219916680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.472 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.788 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.790 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.791 256736 INFO nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Creating image(s)
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.792 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.792 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Ensure instance console log exists: /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.793 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.793 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.794 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.799 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Start _get_guest_xml network_info=[{"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T08:03:31Z,direct_url=<?>,disk_format='qcow2',id=f7315a32-137c-4094-b682-0e4e6066843f,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1540287631',owner='dfb6854e99614af5b8df420841fde0db',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T08:03:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': True, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-86c4247c-01b7-40b2-b116-bdc19256ee22', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '86c4247c-01b7-40b2-b116-bdc19256ee22', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f', 'attached_at': '', 'detached_at': '', 'volume_id': '86c4247c-01b7-40b2-b116-bdc19256ee22', 'serial': '86c4247c-01b7-40b2-b116-bdc19256ee22'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '56b2f4a1-be25-4dd7-ad9e-117ad4f16695', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.805 256736 WARNING nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.811 256736 DEBUG nova.virt.libvirt.host [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.812 256736 DEBUG nova.virt.libvirt.host [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.817 256736 DEBUG nova.virt.libvirt.host [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.817 256736 DEBUG nova.virt.libvirt.host [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.818 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.819 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-29T08:03:31Z,direct_url=<?>,disk_format='qcow2',id=f7315a32-137c-4094-b682-0e4e6066843f,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1540287631',owner='dfb6854e99614af5b8df420841fde0db',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-29T08:03:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.820 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.820 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.821 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.821 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.821 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.822 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.823 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.823 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.823 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.824 256736 DEBUG nova.virt.hardware [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
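The topology walk in the preceding lines is fully determined for this flavor: limits and preferences are all 0:0:0 (unconstrained), and with one vCPU the only factorization is 1 socket x 1 core x 1 thread. A simplified worked sketch of the enumeration nova's _get_possible_cpu_topologies performs:

    # Worked example of the topology choice above: enumerate the
    # sockets/cores/threads factorizations of vcpus=1 (the 65536 caps
    # from the log never bind for so few vCPUs).
    vcpus = 1
    topologies = [(s, c, t)
                  for s in range(1, vcpus + 1)
                  for c in range(1, vcpus + 1)
                  for t in range(1, vcpus + 1)
                  if s * c * t == vcpus]
    print(topologies)  # [(1, 1, 1)] -- matches "Possible topologies" above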
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.859 256736 DEBUG nova.storage.rbd_utils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:47 compute-0 nova_compute[256729]: 2025-11-29 08:03:47.865 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Nov 29 08:03:48 compute-0 ceph-mon[75050]: pgmap v1898: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 11 KiB/s wr, 94 op/s
Nov 29 08:03:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 08:03:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Nov 29 08:03:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639432823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.367 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.535 256736 DEBUG nova.network.neutron [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Updated VIF entry in instance network info cache for port 3aeb3e64-1138-4555-bde4-4f5d2e627b7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.536 256736 DEBUG nova.network.neutron [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Updating instance_info_cache with network_info: [{"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.624 256736 DEBUG nova.virt.libvirt.vif [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1026264877',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1026264877',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1026264877',id=19,image_ref='f7315a32-137c-4094-b682-0e4e6066843f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ1ZrolRCF6iA70/5EqxJPvz7IuR5YX7KRRGcfQUylcOTLvF5Qe/G8xWb4/Hpy0xQYRaluMYZS24WZm4N5QiZpQWWo2/zF+jfBx5rKk4cjRXWcqYhWVpc78G6H8G1bSNEQ==',key_name='tempest-keypair-766407201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-5w2sikm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-776329285',image_owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9664e420085d412aae898a6ec021b24f',uuid=f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.625 256736 DEBUG nova.network.os_vif_util [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.626 256736 DEBUG nova.network.os_vif_util [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:6f:f0,bridge_name='br-int',has_traffic_filtering=True,id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aeb3e64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
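[editor's note] The "Converting VIF" / "Converted object" pair above shows nova flattening its Neutron network_info dict into the handful of fields os-vif needs for an OVS port. A minimal stdlib-only sketch of that field mapping; the dict literal is abridged from the log entry, not taken from a live query:

    vif = {
        "id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a",
        "address": "fa:16:3e:aa:6f:f0",
        "devname": "tap3aeb3e64-11",
        "details": {"port_filter": True, "bridge_name": "br-int"},
        "preserve_on_delete": False,
        "active": False,
    }

    def nova_to_osvif_fields(vif):
        # Mirror the mapping visible in the VIFOpenVSwitch repr above.
        return {
            "id": vif["id"],
            "address": vif["address"],
            "vif_name": vif["devname"],
            "bridge_name": vif["details"]["bridge_name"],
            "has_traffic_filtering": vif["details"]["port_filter"],
            "preserve_on_delete": vif["preserve_on_delete"],
            "active": vif["active"],
        }

    print(nova_to_osvif_fields(vif))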
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.628 256736 DEBUG nova.objects.instance [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'pci_devices' on Instance uuid f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.715 256736 DEBUG nova.network.neutron [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Successfully updated port: 94ee6a33-75bb-43b8-952b-43a160169df4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.901 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <uuid>f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f</uuid>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <name>instance-00000013</name>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1026264877</nova:name>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:03:47</nova:creationTime>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:user uuid="9664e420085d412aae898a6ec021b24f">tempest-TestVolumeBootPattern-776329285-project-member</nova:user>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:project uuid="dfb6854e99614af5b8df420841fde0db">tempest-TestVolumeBootPattern-776329285</nova:project>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="f7315a32-137c-4094-b682-0e4e6066843f"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <nova:port uuid="3aeb3e64-1138-4555-bde4-4f5d2e627b7a">
Nov 29 08:03:48 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <system>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <entry name="serial">f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f</entry>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <entry name="uuid">f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f</entry>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </system>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <os>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   </os>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <features>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   </features>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config">
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       </source>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-86c4247c-01b7-40b2-b116-bdc19256ee22">
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       </source>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:03:48 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <serial>86c4247c-01b7-40b2-b116-bdc19256ee22</serial>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:aa:6f:f0"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <target dev="tap3aeb3e64-11"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/console.log" append="off"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <video>
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </video>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <input type="keyboard" bus="usb"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:03:48 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:03:48 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:03:48 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:03:48 compute-0 nova_compute[256729]: </domain>
Nov 29 08:03:48 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
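[editor's note] The block from "End _get_guest_xml" down to </domain> is the complete libvirt definition nova generated for instance-00000013. A short sketch, assuming the XML has been saved to a local file named instance-00000013.xml (hypothetical path), that lists the RBD-backed disks using only the Python standard library:

    import xml.etree.ElementTree as ET

    tree = ET.parse("instance-00000013.xml")
    for disk in tree.findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        if source is not None and source.get("protocol") == "rbd":
            print(target.get("dev"), disk.get("device"), "->", source.get("name"))

    # Expected for the domain above:
    #   sda cdrom -> vms/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config
    #   vda disk -> volumes/volume-86c4247c-01b7-40b2-b116-bdc19256ee22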
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.902 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Preparing to wait for external event network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.902 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.902 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.903 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
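[editor's note] The three lockutils entries above are the per-instance event lock ("<uuid>-events") taken while prepare_for_instance_event registers the network-vif-plugged event it will later wait on. A sketch of the same pattern with oslo.concurrency (names illustrative; assumes the library is installed):

    from oslo_concurrency import lockutils

    instance_uuid = "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f"

    with lockutils.lock(f"{instance_uuid}-events"):
        # Register (or fetch) the pending event inside the critical
        # section; the lock guards only a dict update, which is why the
        # log reports "held 0.000s".
        pass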
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.904 256736 DEBUG nova.virt.libvirt.vif [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1026264877',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1026264877',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1026264877',id=19,image_ref='f7315a32-137c-4094-b682-0e4e6066843f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ1ZrolRCF6iA70/5EqxJPvz7IuR5YX7KRRGcfQUylcOTLvF5Qe/G8xWb4/Hpy0xQYRaluMYZS24WZm4N5QiZpQWWo2/zF+jfBx5rKk4cjRXWcqYhWVpc78G6H8G1bSNEQ==',key_name='tempest-keypair-766407201',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-5w2sikm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-776329285',image_owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9664e420085d412aae898a6ec021b24f',uuid=f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.905 256736 DEBUG nova.network.os_vif_util [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.906 256736 DEBUG nova.network.os_vif_util [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:6f:f0,bridge_name='br-int',has_traffic_filtering=True,id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aeb3e64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.906 256736 DEBUG os_vif [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:6f:f0,bridge_name='br-int',has_traffic_filtering=True,id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aeb3e64-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.907 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.908 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.908 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.911 256736 DEBUG oslo_concurrency.lockutils [req-8a9c5052-dfb0-4869-b4a3-6943b9be5bc2 req-e2b22513-e34b-4e10-9965-83a2f22a8707 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.913 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.913 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3aeb3e64-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.914 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3aeb3e64-11, col_values=(('external_ids', {'iface-id': '3aeb3e64-1138-4555-bde4-4f5d2e627b7a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:6f:f0', 'vm-uuid': 'f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.916 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:48 compute-0 NetworkManager[48962]: <info>  [1764403428.9177] manager: (tap3aeb3e64-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.921 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.925 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:48 compute-0 nova_compute[256729]: 2025-11-29 08:03:48.927 256736 INFO os_vif [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:6f:f0,bridge_name='br-int',has_traffic_filtering=True,id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aeb3e64-11')
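[editor's note] The two ovsdbapp commands at 08:03:48.913-48.914 (AddPortCommand plus DbSetCommand on the Interface row) amount to a single ovs-vsctl transaction. A sketch of that equivalent, run on the compute host; the inner double quotes keep the colon-containing MAC valid for ovs-vsctl's value parser:

    import subprocess

    port = "tap3aeb3e64-11"
    subprocess.run([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a",
        "external_ids:iface-status=active",
        'external_ids:attached-mac="fa:16:3e:aa:6f:f0"',
        "external_ids:vm-uuid=f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f",
    ], check=True)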
Nov 29 08:03:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 6.7 KiB/s wr, 84 op/s
Nov 29 08:03:49 compute-0 sshd-session[291355]: Connection closed by authenticating user root 143.14.121.41 port 58522 [preauth]
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.155 256736 DEBUG nova.compute.manager [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-changed-94ee6a33-75bb-43b8-952b-43a160169df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.155 256736 DEBUG nova.compute.manager [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Refreshing instance network info cache due to event network-changed-94ee6a33-75bb-43b8-952b-43a160169df4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.156 256736 DEBUG oslo_concurrency.lockutils [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.156 256736 DEBUG oslo_concurrency.lockutils [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.156 256736 DEBUG nova.network.neutron [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Refreshing network info cache for port 94ee6a33-75bb-43b8-952b-43a160169df4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:03:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:03:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1253634707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:03:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1253634707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.249 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:03:49 compute-0 ceph-mon[75050]: osdmap e358: 3 total, 3 up, 3 in
Nov 29 08:03:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3639432823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1253634707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:03:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1253634707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.865 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.866 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.866 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No VIF found with MAC fa:16:3e:aa:6f:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.867 256736 INFO nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Using config drive
Nov 29 08:03:49 compute-0 nova_compute[256729]: 2025-11-29 08:03:49.898 256736 DEBUG nova.storage.rbd_utils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:50 compute-0 ceph-mon[75050]: pgmap v1900: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 6.7 KiB/s wr, 84 op/s
Nov 29 08:03:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 5.1 KiB/s wr, 64 op/s
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.193 256736 DEBUG nova.network.neutron [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.474 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.519 256736 INFO nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Creating config drive at /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/disk.config
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.532 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpezzcsgiw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.687 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpezzcsgiw" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
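[editor's note] The mkisofs command above is logged without quoting, but the publisher string is a single argument: nova passes an argv list through oslo.concurrency rather than a shell. A reconstruction of the logged call (the /tmp staging directory is removed after the build, so this is an aid for reading the log, not for rerunning verbatim):

    from oslo_concurrency import processutils

    processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpezzcsgiw",
    )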
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.732 256736 DEBUG nova.storage.rbd_utils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.739 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/disk.config f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:52 compute-0 ceph-mon[75050]: pgmap v1901: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 5.1 KiB/s wr, 64 op/s
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.977 256736 DEBUG nova.network.neutron [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.988 256736 DEBUG oslo_concurrency.processutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/disk.config f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:03:52 compute-0 nova_compute[256729]: 2025-11-29 08:03:52.989 256736 INFO nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Deleting local config drive /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f/disk.config because it was imported into RBD.
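[editor's note] After the import, the config drive exists only as an RBD image in the vms pool (the local copy is deleted above). A sketch to confirm it, reusing the pool and credentials from the logged import command:

    import subprocess

    subprocess.run([
        "rbd", "info", "--pool", "vms",
        "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_disk.config",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)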
Nov 29 08:03:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 16 KiB/s wr, 73 op/s
Nov 29 08:03:53 compute-0 kernel: tap3aeb3e64-11: entered promiscuous mode
Nov 29 08:03:53 compute-0 NetworkManager[48962]: <info>  [1764403433.0642] manager: (tap3aeb3e64-11): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 08:03:53 compute-0 systemd-udevd[291477]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:03:53 compute-0 ovn_controller[153383]: 2025-11-29T08:03:53Z|00187|binding|INFO|Claiming lport 3aeb3e64-1138-4555-bde4-4f5d2e627b7a for this chassis.
Nov 29 08:03:53 compute-0 ovn_controller[153383]: 2025-11-29T08:03:53Z|00188|binding|INFO|3aeb3e64-1138-4555-bde4-4f5d2e627b7a: Claiming fa:16:3e:aa:6f:f0 10.100.0.10
Nov 29 08:03:53 compute-0 nova_compute[256729]: 2025-11-29 08:03:53.102 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:53 compute-0 NetworkManager[48962]: <info>  [1764403433.1196] device (tap3aeb3e64-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:03:53 compute-0 NetworkManager[48962]: <info>  [1764403433.1204] device (tap3aeb3e64-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:03:53 compute-0 systemd-machined[217781]: New machine qemu-19-instance-00000013.
Nov 29 08:03:53 compute-0 ovn_controller[153383]: 2025-11-29T08:03:53Z|00189|binding|INFO|Setting lport 3aeb3e64-1138-4555-bde4-4f5d2e627b7a ovn-installed in OVS
Nov 29 08:03:53 compute-0 nova_compute[256729]: 2025-11-29 08:03:53.138 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:53 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Nov 29 08:03:53 compute-0 nova_compute[256729]: 2025-11-29 08:03:53.147 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:53 compute-0 sshd-session[291407]: Connection closed by authenticating user root 143.14.121.41 port 58524 [preauth]
Nov 29 08:03:53 compute-0 nova_compute[256729]: 2025-11-29 08:03:53.523 256736 DEBUG oslo_concurrency.lockutils [req-7ea3a636-bc79-4c08-80da-1f2488376d02 req-9e385185-9823-4773-b076-929c9bff4cfd ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:53 compute-0 nova_compute[256729]: 2025-11-29 08:03:53.525 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquired lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:03:53 compute-0 nova_compute[256729]: 2025-11-29 08:03:53.525 256736 DEBUG nova.network.neutron [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:03:53 compute-0 ceph-mon[75050]: pgmap v1902: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 16 KiB/s wr, 73 op/s
Nov 29 08:03:53 compute-0 nova_compute[256729]: 2025-11-29 08:03:53.916 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:54 compute-0 ovn_controller[153383]: 2025-11-29T08:03:54Z|00190|binding|INFO|Setting lport 3aeb3e64-1138-4555-bde4-4f5d2e627b7a up in Southbound
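[editor's note] ovn-controller has now claimed the lport and marked it up in the Southbound database. A sketch querying that binding from the chassis, using ovn-sbctl's generic find command against the Port_Binding table:

    import subprocess

    subprocess.run([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=3aeb3e64-1138-4555-bde4-4f5d2e627b7a",
    ], check=True)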
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.087 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:6f:f0 10.100.0.10'], port_security=['fa:16:3e:aa:6f:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be64615c-e0f7-4f3c-a2e6-e3b78b09a803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=3aeb3e64-1138-4555-bde4-4f5d2e627b7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.090 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 3aeb3e64-1138-4555-bde4-4f5d2e627b7a in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 bound to our chassis
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.094 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.116 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1648fd-23cd-40bc-9733-0f36344eec85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.127 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403434.1271129, f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.128 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] VM Started (Lifecycle Event)
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.157 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7968e1-aa4c-4f59-92cc-372c7e62bdfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.161 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7550624c-9892-41b6-b85c-651df87c3898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Nov 29 08:03:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 08:03:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.208 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7f93eec7-9598-4f16-9044-1ca20ce2a5c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.234 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2defde71-ffe0-44e0-9098-519cc331272c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568955, 'reachable_time': 33122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291538, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.258 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e9739dc9-20bc-4303-a572-7c1f717a01a9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568972, 'tstamp': 568972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291539, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568975, 'tstamp': 568975}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291539, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
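[editor's note] The two RTM_NEWADDR replies above show the metadata agent putting 169.254.169.254/32 and 10.100.0.2/28 on tap2d9c390c-31 inside the per-network namespace. A sketch inspecting that namespace from the host; the name follows the ovnmeta-<network-uuid> convention visible in the reply headers:

    import subprocess

    ns = "ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5"
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"], check=True)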
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.260 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.288 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.289 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.291 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.292 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.292 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:54.293 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.303 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.309 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403434.12731, f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.309 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] VM Paused (Lifecycle Event)
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.333 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.338 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.462 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] During sync_power_state the instance has a pending task (spawning). Skip.
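[editor's note] The two entries above show the sync guard: the hypervisor reports PAUSED (VM power_state 3) while the DB still says NOSTATE (0), but the pending spawning task wins and the sync is skipped. An illustrative reduction of that decision, not nova's actual code; the constants mirror nova.compute.power_state (0=NOSTATE, 1=RUNNING, 3=PAUSED):

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # A pending task (e.g. 'spawning') means another code path owns the
        # instance; syncing now would race it, hence the "Skip." log line.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    assert should_sync_power_state('spawning', 0, 3) is False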
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.560 256736 DEBUG nova.network.neutron [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.963 256736 DEBUG nova.compute.manager [req-61a04863-75b2-4cbd-826f-4e0c105a8895 req-fa66c6ad-78b0-46b6-a4b5-bdc402fb1c55 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.964 256736 DEBUG oslo_concurrency.lockutils [req-61a04863-75b2-4cbd-826f-4e0c105a8895 req-fa66c6ad-78b0-46b6-a4b5-bdc402fb1c55 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.964 256736 DEBUG oslo_concurrency.lockutils [req-61a04863-75b2-4cbd-826f-4e0c105a8895 req-fa66c6ad-78b0-46b6-a4b5-bdc402fb1c55 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.965 256736 DEBUG oslo_concurrency.lockutils [req-61a04863-75b2-4cbd-826f-4e0c105a8895 req-fa66c6ad-78b0-46b6-a4b5-bdc402fb1c55 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.965 256736 DEBUG nova.compute.manager [req-61a04863-75b2-4cbd-826f-4e0c105a8895 req-fa66c6ad-78b0-46b6-a4b5-bdc402fb1c55 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Processing event network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
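[editor's note] The Acquiring / acquired / "released" triple above is oslo.concurrency's lock decorator wrapped around the per-instance event dict; the lock name is the instance UUID plus "-events", exactly as logged. The pattern in application code is simply:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events')
    def _pop_event():
        # critical section: look up and remove the waiting event, if any
        pass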
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.967 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.971 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403434.9714963, f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.972 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] VM Resumed (Lifecycle Event)
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.975 256736 DEBUG nova.virt.libvirt.driver [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.980 256736 INFO nova.virt.libvirt.driver [-] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Instance spawned successfully.
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.980 256736 INFO nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Took 7.19 seconds to spawn the instance on the hypervisor.
Nov 29 08:03:54 compute-0 nova_compute[256729]: 2025-11-29 08:03:54.981 256736 DEBUG nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 15 KiB/s wr, 36 op/s
Nov 29 08:03:55 compute-0 nova_compute[256729]: 2025-11-29 08:03:55.142 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:03:55 compute-0 nova_compute[256729]: 2025-11-29 08:03:55.146 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:03:55 compute-0 nova_compute[256729]: 2025-11-29 08:03:55.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:55 compute-0 nova_compute[256729]: 2025-11-29 08:03:55.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:03:55 compute-0 ceph-mon[75050]: osdmap e359: 3 total, 3 up, 3 in
Nov 29 08:03:55 compute-0 nova_compute[256729]: 2025-11-29 08:03:55.796 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:03:55 compute-0 nova_compute[256729]: 2025-11-29 08:03:55.883 256736 INFO nova.compute.manager [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Took 15.39 seconds to build instance.
Nov 29 08:03:55 compute-0 nova_compute[256729]: 2025-11-29 08:03:55.966 256736 DEBUG oslo_concurrency.lockutils [None req-ddf36091-d7a6-40a5-828c-3e1233c46d3b 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:56 compute-0 ceph-mon[75050]: pgmap v1904: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 15 KiB/s wr, 36 op/s
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.557 256736 DEBUG nova.network.neutron [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Updating instance_info_cache with network_info: [{"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:03:56 compute-0 sshd-session[291489]: Connection closed by authenticating user root 143.14.121.41 port 40632 [preauth]
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.965 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Releasing lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.965 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Instance network_info: |[{"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.971 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Start _get_guest_xml network_info=[{"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7b834f92-a941-48d4-830a-98e70067cabb', 'attached_at': '', 'detached_at': '', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'serial': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '66df94ab-9486-472d-a6dd-3c6b26a38087', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.978 256736 WARNING nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.985 256736 DEBUG nova.virt.libvirt.host [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.986 256736 DEBUG nova.virt.libvirt.host [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.991 256736 DEBUG nova.virt.libvirt.host [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.992 256736 DEBUG nova.virt.libvirt.host [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
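[editor's note] The v1-then-v2 probe above ends with the CPU controller found via cgroup v2. On a unified-hierarchy host the v2 check boils down to reading the root controllers file; a minimal sketch of that idea:

    def has_cgroupsv2_cpu_controller(
            path='/sys/fs/cgroup/cgroup.controllers'):
        # On cgroup-v2 hosts this file lists the enabled controllers,
        # e.g. 'cpuset cpu io memory pids'.
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted on this host

    print(has_cgroupsv2_cpu_controller())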
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.993 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.993 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.994 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.994 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.995 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.995 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.996 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.996 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.997 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.997 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.997 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:03:56 compute-0 nova_compute[256729]: 2025-11-29 08:03:56.998 256736 DEBUG nova.virt.hardware [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
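[editor's note] The topology walk above takes flavor/image limits of 0 (meaning "no preference"), clamps the search at the 65536 maxima, and lands on the only factorization of one vCPU: 1 socket, 1 core, 1 thread. An illustrative enumeration of the same search, simplified from what nova.virt.hardware does:

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Yield every sockets*cores*threads factorization of the vCPU
        # count that respects the per-dimension limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]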
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.041 256736 DEBUG nova.storage.rbd_utils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image 7b834f92-a941-48d4-830a-98e70067cabb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 33 KiB/s wr, 91 op/s
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.047 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.272 256736 DEBUG nova.compute.manager [req-09435714-0ab1-4735-a06b-5e59467b446c req-32dbf806-8b32-4d6a-a2fe-58f52f2c3b49 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.273 256736 DEBUG oslo_concurrency.lockutils [req-09435714-0ab1-4735-a06b-5e59467b446c req-32dbf806-8b32-4d6a-a2fe-58f52f2c3b49 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.274 256736 DEBUG oslo_concurrency.lockutils [req-09435714-0ab1-4735-a06b-5e59467b446c req-32dbf806-8b32-4d6a-a2fe-58f52f2c3b49 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.274 256736 DEBUG oslo_concurrency.lockutils [req-09435714-0ab1-4735-a06b-5e59467b446c req-32dbf806-8b32-4d6a-a2fe-58f52f2c3b49 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.274 256736 DEBUG nova.compute.manager [req-09435714-0ab1-4735-a06b-5e59467b446c req-32dbf806-8b32-4d6a-a2fe-58f52f2c3b49 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] No waiting events found dispatching network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.275 256736 WARNING nova.compute.manager [req-09435714-0ab1-4735-a06b-5e59467b446c req-32dbf806-8b32-4d6a-a2fe-58f52f2c3b49 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received unexpected event network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a for instance with vm_state active and task_state None.
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.521 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:03:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901987194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.544 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
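[editor's note] The 0.497 s subprocess above is nova discovering the monitor map before attaching the RBD volume. Reproducing the same call and pulling the monitor addresses out of its JSON; the key names match what `ceph mon dump --format=json` emits, though the address fields can vary by Ceph release:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    # Each entry carries the monitor's public address, e.g. 192.168.122.100.
    print([m.get('public_addr') for m in monmap.get('mons', [])])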
Nov 29 08:03:57 compute-0 nova_compute[256729]: 2025-11-29 08:03:57.792 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.115 256736 DEBUG os_brick.encryptors [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Using volume encryption metadata '{'encryption_key_id': 'ef6975dd-2810-47fb-bdd7-27676f1e4dc5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7b834f92-a941-48d4-830a-98e70067cabb', 'attached_at': '', 'detached_at': '', 'volume_id': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4', 'serial': '1e0633b8-d2a6-4f22-aa22-9308e9b3acc4'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.117 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.143 256736 DEBUG barbicanclient.v1.secrets [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.143 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.145 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.194 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.195 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 ceph-mon[75050]: pgmap v1905: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 33 KiB/s wr, 91 op/s
Nov 29 08:03:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3901987194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.236 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.237 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.271 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.272 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.312 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.312 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.349 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.350 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.385 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.386 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.409 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.410 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.437 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.438 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.468 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.468 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.506 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.506 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.525 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.525 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.544 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.545 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.567 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.568 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.590 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.590 256736 INFO barbicanclient.base [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/ef6975dd-2810-47fb-bdd7-27676f1e4dc5
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.614 256736 DEBUG barbicanclient.client [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
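[editor's note] The run of GETs against https://barbican-internal.openstack.svc:9311 above is nova fetching the LUKS key for volume 1e0633b8-d2a6-4f22-aa22-9308e9b3acc4. A minimal client-side equivalent with python-barbicanclient; every keystone credential below is a placeholder assumption, and only the secret href comes from the log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from barbicanclient import client

    auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',  # assumed
                       username='nova', password='placeholder',          # assumed
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    barbican = client.Client(session=session.Session(auth=auth))
    secret = barbican.secrets.get(
        'https://barbican-internal.openstack.svc:9311/secrets/'
        'ef6975dd-2810-47fb-bdd7-27676f1e4dc5')
    key_bytes = secret.payload  # raw symmetric key backing the LUKS layer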
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.615 256736 DEBUG nova.virt.libvirt.host [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:03:58 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:03:58 compute-0 nova_compute[256729]:     <volume>1e0633b8-d2a6-4f22-aa22-9308e9b3acc4</volume>
Nov 29 08:03:58 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:03:58 compute-0 nova_compute[256729]: </secret>
Nov 29 08:03:58 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
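[editor's note] The Secret XML dumped above is then defined in libvirt and loaded with a value so qemu can open the encrypted RBD volume. Defining an equivalent volume secret by hand with libvirt-python; the base64 value here is a placeholder, not the real key from the deployment:

    import base64
    import libvirt

    SECRET_XML = '''<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>1e0633b8-d2a6-4f22-aa22-9308e9b3acc4</volume>
      </usage>
    </secret>'''

    conn = libvirt.open('qemu:///system')
    secret = conn.secretDefineXML(SECRET_XML, 0)
    # Placeholder bytes; nova sets the real value fetched from the key store.
    secret.setValue(base64.b64decode('cGxhY2Vob2xkZXI='), 0)
    conn.close()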
Nov 29 08:03:58 compute-0 nova_compute[256729]: 2025-11-29 08:03:58.919 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 115 op/s
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.163 256736 DEBUG nova.virt.libvirt.vif [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1211888278',display_name='tempest-TransferEncryptedVolumeTest-server-1211888278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1211888278',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOK0uPKk5+iwu2ACxwLiXPfPFKjqAeuswoaNdzNGpYFdv9fCZRffGqJNvJmfqnbg+KUupmPFmswjEh+khO5A2TFlJ9LMuOBogxQ7cFR7kmTFduCVQRkpWi0Jux9/KRhlg==',key_name='tempest-TransferEncryptedVolumeTest-347364744',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-znuo30b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:44Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=7b834f92-a941-48d4-830a-98e70067cabb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm 
Nov 29 08:03:59 compute-0 nova_compute[256729]:  get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.164 256736 DEBUG nova.network.os_vif_util [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.164 256736 DEBUG nova.network.os_vif_util [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:27:f3,bridge_name='br-int',has_traffic_filtering=True,id=94ee6a33-75bb-43b8-952b-43a160169df4,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94ee6a33-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.166 256736 DEBUG nova.objects.instance [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b834f92-a941-48d4-830a-98e70067cabb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:03:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:03:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Nov 29 08:03:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 08:03:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.456 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <uuid>7b834f92-a941-48d4-830a-98e70067cabb</uuid>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <name>instance-00000014</name>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1211888278</nova:name>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:03:56</nova:creationTime>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:user uuid="2cb2de7fb67042f89a025f1a3e872530">tempest-TransferEncryptedVolumeTest-2049180676-project-member</nova:user>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:project uuid="00f4c1f7964a4e5fbe3db5be46b9676e">tempest-TransferEncryptedVolumeTest-2049180676</nova:project>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <nova:port uuid="94ee6a33-75bb-43b8-952b-43a160169df4">
Nov 29 08:03:59 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <system>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <entry name="serial">7b834f92-a941-48d4-830a-98e70067cabb</entry>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <entry name="uuid">7b834f92-a941-48d4-830a-98e70067cabb</entry>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </system>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <os>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   </os>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <features>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   </features>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/7b834f92-a941-48d4-830a-98e70067cabb_disk.config">
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </source>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-1e0633b8-d2a6-4f22-aa22-9308e9b3acc4">
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </source>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <serial>1e0633b8-d2a6-4f22-aa22-9308e9b3acc4</serial>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <encryption format="luks">
Nov 29 08:03:59 compute-0 nova_compute[256729]:         <secret type="passphrase" uuid="323e36b4-4be6-44c9-a362-8c18e8c57583"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       </encryption>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:3d:27:f3"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <target dev="tap94ee6a33-75"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/console.log" append="off"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <video>
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </video>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:03:59 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:03:59 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:03:59 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:03:59 compute-0 nova_compute[256729]: </domain>
Nov 29 08:03:59 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.457 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Preparing to wait for external event network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.457 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.457 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.457 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.458 256736 DEBUG nova.virt.libvirt.vif [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1211888278',display_name='tempest-TransferEncryptedVolumeTest-server-1211888278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1211888278',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOK0uPKk5+iwu2ACxwLiXPfPFKjqAeuswoaNdzNGpYFdv9fCZRffGqJNvJmfqnbg+KUupmPFmswjEh+khO5A2TFlJ9LMuOBogxQ7cFR7kmTFduCVQRkpWi0Jux9/KRhlg==',key_name='tempest-TransferEncryptedVolumeTest-347364744',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-znuo30b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:03:44Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=7b834f92-a941-48d4-830a-98e70067cabb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.458 256736 DEBUG nova.network.os_vif_util [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.459 256736 DEBUG nova.network.os_vif_util [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:27:f3,bridge_name='br-int',has_traffic_filtering=True,id=94ee6a33-75bb-43b8-952b-43a160169df4,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94ee6a33-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.459 256736 DEBUG os_vif [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:27:f3,bridge_name='br-int',has_traffic_filtering=True,id=94ee6a33-75bb-43b8-952b-43a160169df4,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94ee6a33-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.460 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.460 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.461 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.463 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.463 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94ee6a33-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.464 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94ee6a33-75, col_values=(('external_ids', {'iface-id': '94ee6a33-75bb-43b8-952b-43a160169df4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:27:f3', 'vm-uuid': '7b834f92-a941-48d4-830a-98e70067cabb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.465 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:59 compute-0 NetworkManager[48962]: <info>  [1764403439.4661] manager: (tap94ee6a33-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.468 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.475 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.476 256736 INFO os_vif [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:27:f3,bridge_name='br-int',has_traffic_filtering=True,id=94ee6a33-75bb-43b8-952b-43a160169df4,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94ee6a33-75')
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.621 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.622 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.622 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No VIF found with MAC fa:16:3e:3d:27:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.623 256736 INFO nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Using config drive
Nov 29 08:03:59 compute-0 nova_compute[256729]: 2025-11-29 08:03:59.659 256736 DEBUG nova.storage.rbd_utils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image 7b834f92-a941-48d4-830a-98e70067cabb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:03:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:59.782 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:03:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:59.782 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:03:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:03:59.784 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.122 256736 DEBUG nova.compute.manager [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-changed-3aeb3e64-1138-4555-bde4-4f5d2e627b7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.122 256736 DEBUG nova.compute.manager [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Refreshing instance network info cache due to event network-changed-3aeb3e64-1138-4555-bde4-4f5d2e627b7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.123 256736 DEBUG oslo_concurrency.lockutils [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.123 256736 DEBUG oslo_concurrency.lockutils [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.123 256736 DEBUG nova.network.neutron [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Refreshing network info cache for port 3aeb3e64-1138-4555-bde4-4f5d2e627b7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:00 compute-0 ceph-mon[75050]: pgmap v1906: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 115 op/s
Nov 29 08:04:00 compute-0 ceph-mon[75050]: osdmap e360: 3 total, 3 up, 3 in
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.664 256736 INFO nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Creating config drive at /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/disk.config
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.671 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk3azekta execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.819 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk3azekta" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.859 256736 DEBUG nova.storage.rbd_utils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image 7b834f92-a941-48d4-830a-98e70067cabb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:04:00 compute-0 nova_compute[256729]: 2025-11-29 08:04:00.865 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/disk.config 7b834f92-a941-48d4-830a-98e70067cabb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 116 op/s
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.060 256736 DEBUG oslo_concurrency.processutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/disk.config 7b834f92-a941-48d4-830a-98e70067cabb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.061 256736 INFO nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Deleting local config drive /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb/disk.config because it was imported into RBD.
Nov 29 08:04:01 compute-0 kernel: tap94ee6a33-75: entered promiscuous mode
Nov 29 08:04:01 compute-0 NetworkManager[48962]: <info>  [1764403441.1121] manager: (tap94ee6a33-75): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 08:04:01 compute-0 ovn_controller[153383]: 2025-11-29T08:04:01Z|00191|binding|INFO|Claiming lport 94ee6a33-75bb-43b8-952b-43a160169df4 for this chassis.
Nov 29 08:04:01 compute-0 ovn_controller[153383]: 2025-11-29T08:04:01Z|00192|binding|INFO|94ee6a33-75bb-43b8-952b-43a160169df4: Claiming fa:16:3e:3d:27:f3 10.100.0.10
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.117 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:01 compute-0 systemd-machined[217781]: New machine qemu-20-instance-00000014.
Nov 29 08:04:01 compute-0 systemd-udevd[291657]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:01 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Nov 29 08:04:01 compute-0 ovn_controller[153383]: 2025-11-29T08:04:01Z|00193|binding|INFO|Setting lport 94ee6a33-75bb-43b8-952b-43a160169df4 ovn-installed in OVS
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.162 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.165 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:01 compute-0 NetworkManager[48962]: <info>  [1764403441.1728] device (tap94ee6a33-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:04:01 compute-0 NetworkManager[48962]: <info>  [1764403441.1757] device (tap94ee6a33-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:04:01 compute-0 ovn_controller[153383]: 2025-11-29T08:04:01Z|00194|binding|INFO|Setting lport 94ee6a33-75bb-43b8-952b-43a160169df4 up in Southbound
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.238 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:27:f3 10.100.0.10'], port_security=['fa:16:3e:3d:27:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7b834f92-a941-48d4-830a-98e70067cabb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc8975a3-8b30-4fd7-b465-76d299802b38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=94ee6a33-75bb-43b8-952b-43a160169df4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.239 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 94ee6a33-75bb-43b8-952b-43a160169df4 in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c bound to our chassis
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.242 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.260 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[70cf13fb-1cca-4ba1-9a45-5b496d09f67b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.261 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45f1bbc0-c1 in ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.264 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45f1bbc0-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.264 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[efe0e38e-6eeb-46c1-9941-597350536b57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.265 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[beacb4c6-aadd-4746-a8e2-4e33d6ddb31a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.281 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc190da-fb75-4614-8a74-9ebaa85e48a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.308 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3e5d16-7a1d-4a5b-9b00-6c6b2d439a8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.330 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.331 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.331 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.331 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.332 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.355 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[e29b2ac2-64c1-4563-a229-3d398c31955a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 NetworkManager[48962]: <info>  [1764403441.3626] manager: (tap45f1bbc0-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.362 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[48ef5bfe-64d9-4afb-b78f-ded511106167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.403 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[697d9fb7-98bf-44a9-bb8f-2fd81cecde4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.415 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[abf95135-8393-4ae3-8360-2fd833d7de4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 NetworkManager[48962]: <info>  [1764403441.4363] device (tap45f1bbc0-c0): carrier: link connected
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.441 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6ed5d6-1804-4cbd-8944-d737ee076f69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.465 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1c517d88-c69f-47f1-b5e2-6de3bc33b682]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575650, 'reachable_time': 30975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291691, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.488 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c549d7-a2dc-48a3-acb4-8c166c5494d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:b9ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575650, 'tstamp': 575650}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291692, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.520 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a75a60-bf7e-4143-a473-7d146c77dd1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575650, 'reachable_time': 30975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291712, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.550 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2818bf-b349-44f8-bf16-eccca919465f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.618 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4221f0de-e971-4810-aef5-051d0b934af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.620 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.620 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.620 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45f1bbc0-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:01 compute-0 kernel: tap45f1bbc0-c0: entered promiscuous mode
Nov 29 08:04:01 compute-0 NetworkManager[48962]: <info>  [1764403441.6228] manager: (tap45f1bbc0-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.622 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.625 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45f1bbc0-c0, col_values=(('external_ids', {'iface-id': '1506b576-854d-4118-b808-0e5e32d85d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:01 compute-0 ovn_controller[153383]: 2025-11-29T08:04:01Z|00195|binding|INFO|Releasing lport 1506b576-854d-4118-b808-0e5e32d85d28 from this chassis (sb_readonly=0)
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.644 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.645 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.646 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0a95a8c5-8430-4401-908d-6e7b219f00c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.647 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:04:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:01.648 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'env', 'PROCESS_TAG=haproxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:04:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3560211792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:01 compute-0 nova_compute[256729]: 2025-11-29 08:04:01.831 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:02 compute-0 podman[291744]: 2025-11-29 08:04:02.094474408 +0000 UTC m=+0.067999436 container create 21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:02 compute-0 systemd[1]: Started libpod-conmon-21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1.scope.
Nov 29 08:04:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:02 compute-0 podman[291744]: 2025-11-29 08:04:02.069024419 +0000 UTC m=+0.042549497 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:04:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224c32bf320da23ab62bd7d6298d3092abc8e4526efa69166490fb6cfa82db94/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:02 compute-0 ceph-mon[75050]: pgmap v1908: 305 pgs: 305 active+clean; 350 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 116 op/s
Nov 29 08:04:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3560211792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:02 compute-0 podman[291744]: 2025-11-29 08:04:02.191876159 +0000 UTC m=+0.165401287 container init 21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:04:02 compute-0 podman[291744]: 2025-11-29 08:04:02.198487895 +0000 UTC m=+0.172012923 container start 21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.210 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.210 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.214 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.215 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:02 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [NOTICE]   (291798) : New worker (291800) forked
Nov 29 08:04:02 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [NOTICE]   (291798) : Loading success.
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.218 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.219 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.357 256736 DEBUG nova.compute.manager [req-965fb9d1-1304-44f7-886c-d955cdf20635 req-c61fbeb5-2abf-42df-8477-dd9f1faa60d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.358 256736 DEBUG oslo_concurrency.lockutils [req-965fb9d1-1304-44f7-886c-d955cdf20635 req-c61fbeb5-2abf-42df-8477-dd9f1faa60d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.358 256736 DEBUG oslo_concurrency.lockutils [req-965fb9d1-1304-44f7-886c-d955cdf20635 req-c61fbeb5-2abf-42df-8477-dd9f1faa60d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.358 256736 DEBUG oslo_concurrency.lockutils [req-965fb9d1-1304-44f7-886c-d955cdf20635 req-c61fbeb5-2abf-42df-8477-dd9f1faa60d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.358 256736 DEBUG nova.compute.manager [req-965fb9d1-1304-44f7-886c-d955cdf20635 req-c61fbeb5-2abf-42df-8477-dd9f1faa60d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Processing event network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.400 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.401 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4033MB free_disk=59.98794174194336GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.401 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.401 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.524 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:02 compute-0 sshd-session[291541]: Connection closed by authenticating user root 143.14.121.41 port 40648 [preauth]
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.675 256736 DEBUG nova.network.neutron [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Updated VIF entry in instance network info cache for port 3aeb3e64-1138-4555-bde4-4f5d2e627b7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.676 256736 DEBUG nova.network.neutron [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Updating instance_info_cache with network_info: [{"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.958 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance a3133710-8c54-433d-9263-c081a69bf339 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.959 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.959 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 7b834f92-a941-48d4-830a-98e70067cabb actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.959 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.960 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:04:02 compute-0 nova_compute[256729]: 2025-11-29 08:04:02.987 256736 DEBUG oslo_concurrency.lockutils [req-09d82334-a48b-4e9a-aaea-c0f34e059d4f req-983f4d4a-d020-4efc-ac64-22720f2a3302 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 350 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 36 KiB/s wr, 103 op/s
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.070 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771917629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.639 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.645 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.664 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.706 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.707 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.708 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.709 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:04:03 compute-0 nova_compute[256729]: 2025-11-29 08:04:03.729 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:04:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:04 compute-0 ceph-mon[75050]: pgmap v1909: 305 pgs: 305 active+clean; 350 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 36 KiB/s wr, 103 op/s
Nov 29 08:04:04 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/771917629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.460 256736 DEBUG nova.compute.manager [req-58d51905-c789-49fc-9e35-b552e8822723 req-4b3255ff-554b-4c8b-bda6-b617dbeab30c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.461 256736 DEBUG oslo_concurrency.lockutils [req-58d51905-c789-49fc-9e35-b552e8822723 req-4b3255ff-554b-4c8b-bda6-b617dbeab30c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.462 256736 DEBUG oslo_concurrency.lockutils [req-58d51905-c789-49fc-9e35-b552e8822723 req-4b3255ff-554b-4c8b-bda6-b617dbeab30c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.462 256736 DEBUG oslo_concurrency.lockutils [req-58d51905-c789-49fc-9e35-b552e8822723 req-4b3255ff-554b-4c8b-bda6-b617dbeab30c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.463 256736 DEBUG nova.compute.manager [req-58d51905-c789-49fc-9e35-b552e8822723 req-4b3255ff-554b-4c8b-bda6-b617dbeab30c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] No waiting events found dispatching network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.463 256736 WARNING nova.compute.manager [req-58d51905-c789-49fc-9e35-b552e8822723 req-4b3255ff-554b-4c8b-bda6-b617dbeab30c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received unexpected event network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 for instance with vm_state building and task_state spawning.
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.466 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:04 compute-0 podman[291840]: 2025-11-29 08:04:04.534823263 +0000 UTC m=+0.078215029 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:04:04 compute-0 podman[291838]: 2025-11-29 08:04:04.538408129 +0000 UTC m=+0.083658354 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.548 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403444.5483863, 7b834f92-a941-48d4-830a-98e70067cabb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.549 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] VM Started (Lifecycle Event)
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.551 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.556 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.558 256736 INFO nova.virt.libvirt.driver [-] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Instance spawned successfully.
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.558 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.575 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:04 compute-0 podman[291837]: 2025-11-29 08:04:04.578696105 +0000 UTC m=+0.119495131 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.580 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.585 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.585 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.586 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.586 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.586 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.586 256736 DEBUG nova.virt.libvirt.driver [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.616 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.616 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403444.550555, 7b834f92-a941-48d4-830a-98e70067cabb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.616 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] VM Paused (Lifecycle Event)
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.651 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.654 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403444.55312, 7b834f92-a941-48d4-830a-98e70067cabb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.655 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] VM Resumed (Lifecycle Event)
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.666 256736 INFO nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Took 18.52 seconds to spawn the instance on the hypervisor.
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.667 256736 DEBUG nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.677 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.680 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.723 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.755 256736 INFO nova.compute.manager [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Took 21.25 seconds to build instance.
Nov 29 08:04:04 compute-0 nova_compute[256729]: 2025-11-29 08:04:04.774 256736 DEBUG oslo_concurrency.lockutils [None req-e653c64d-58ef-4881-9eb7-e1e33b15b3c9 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 350 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 93 op/s
Nov 29 08:04:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1481477941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:04:05
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control']
Nov 29 08:04:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:04:05 compute-0 nova_compute[256729]: 2025-11-29 08:04:05.728 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:05 compute-0 nova_compute[256729]: 2025-11-29 08:04:05.728 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:04:05 compute-0 nova_compute[256729]: 2025-11-29 08:04:05.749 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:04:05 compute-0 nova_compute[256729]: 2025-11-29 08:04:05.751 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:05 compute-0 nova_compute[256729]: 2025-11-29 08:04:05.751 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:05 compute-0 nova_compute[256729]: 2025-11-29 08:04:05.751 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:04:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Nov 29 08:04:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 08:04:06 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Nov 29 08:04:06 compute-0 ceph-mon[75050]: pgmap v1910: 305 pgs: 305 active+clean; 350 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 93 op/s
Nov 29 08:04:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1481477941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:06 compute-0 sshd-session[291809]: Connection closed by authenticating user root 143.14.121.41 port 37918 [preauth]
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 350 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 20 KiB/s wr, 83 op/s
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:04:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:04:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Nov 29 08:04:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 08:04:07 compute-0 ceph-mon[75050]: osdmap e361: 3 total, 3 up, 3 in
Nov 29 08:04:07 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Nov 29 08:04:07 compute-0 nova_compute[256729]: 2025-11-29 08:04:07.527 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:08 compute-0 ovn_controller[153383]: 2025-11-29T08:04:08Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.10
Nov 29 08:04:08 compute-0 ovn_controller[153383]: 2025-11-29T08:04:08Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:aa:6f:f0 10.100.0.10
Nov 29 08:04:08 compute-0 ceph-mon[75050]: pgmap v1912: 305 pgs: 305 active+clean; 350 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 20 KiB/s wr, 83 op/s
Nov 29 08:04:08 compute-0 ceph-mon[75050]: osdmap e362: 3 total, 3 up, 3 in
Nov 29 08:04:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1576371196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1576371196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1919027617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 355 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 447 KiB/s wr, 157 op/s
Nov 29 08:04:09 compute-0 nova_compute[256729]: 2025-11-29 08:04:09.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:09 compute-0 nova_compute[256729]: 2025-11-29 08:04:09.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:09 compute-0 sshd-session[291902]: Connection closed by authenticating user root 143.14.121.41 port 37930 [preauth]
Nov 29 08:04:09 compute-0 nova_compute[256729]: 2025-11-29 08:04:09.469 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:09.518 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:09 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:09.520 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:04:09 compute-0 nova_compute[256729]: 2025-11-29 08:04:09.525 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Nov 29 08:04:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1576371196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1576371196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1919027617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 08:04:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 355 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 571 KiB/s wr, 199 op/s
Nov 29 08:04:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Nov 29 08:04:11 compute-0 ceph-mon[75050]: pgmap v1914: 305 pgs: 305 active+clean; 355 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 447 KiB/s wr, 157 op/s
Nov 29 08:04:11 compute-0 ceph-mon[75050]: osdmap e363: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Nov 29 08:04:11 compute-0 nova_compute[256729]: 2025-11-29 08:04:11.586 256736 DEBUG nova.compute.manager [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-changed-94ee6a33-75bb-43b8-952b-43a160169df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:11 compute-0 nova_compute[256729]: 2025-11-29 08:04:11.586 256736 DEBUG nova.compute.manager [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Refreshing instance network info cache due to event network-changed-94ee6a33-75bb-43b8-952b-43a160169df4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:04:11 compute-0 nova_compute[256729]: 2025-11-29 08:04:11.587 256736 DEBUG oslo_concurrency.lockutils [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:04:11 compute-0 nova_compute[256729]: 2025-11-29 08:04:11.587 256736 DEBUG oslo_concurrency.lockutils [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:04:11 compute-0 nova_compute[256729]: 2025-11-29 08:04:11.588 256736 DEBUG nova.network.neutron [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Refreshing network info cache for port 94ee6a33-75bb-43b8-952b-43a160169df4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:04:11 compute-0 ovn_controller[153383]: 2025-11-29T08:04:11Z|00034|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.10
Nov 29 08:04:11 compute-0 ovn_controller[153383]: 2025-11-29T08:04:11Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:aa:6f:f0 10.100.0.10
Nov 29 08:04:12 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:12.522 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:12 compute-0 nova_compute[256729]: 2025-11-29 08:04:12.531 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:12 compute-0 ceph-mon[75050]: pgmap v1916: 305 pgs: 305 active+clean; 355 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 571 KiB/s wr, 199 op/s
Nov 29 08:04:12 compute-0 ceph-mon[75050]: osdmap e364: 3 total, 3 up, 3 in
Nov 29 08:04:12 compute-0 nova_compute[256729]: 2025-11-29 08:04:12.695 256736 DEBUG nova.network.neutron [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Updated VIF entry in instance network info cache for port 94ee6a33-75bb-43b8-952b-43a160169df4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:04:12 compute-0 nova_compute[256729]: 2025-11-29 08:04:12.697 256736 DEBUG nova.network.neutron [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Updating instance_info_cache with network_info: [{"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:12 compute-0 nova_compute[256729]: 2025-11-29 08:04:12.735 256736 DEBUG oslo_concurrency.lockutils [req-84e9cc73-766c-44c5-8d48-c86d844ff9d3 req-0034077d-d48d-401b-9f25-21426289e226 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-7b834f92-a941-48d4-830a-98e70067cabb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:04:12 compute-0 sshd-session[291904]: Connection closed by authenticating user root 143.14.121.41 port 37940 [preauth]
Nov 29 08:04:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 364 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 991 KiB/s wr, 199 op/s
Nov 29 08:04:13 compute-0 ovn_controller[153383]: 2025-11-29T08:04:13Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:6f:f0 10.100.0.10
Nov 29 08:04:13 compute-0 ovn_controller[153383]: 2025-11-29T08:04:13Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:6f:f0 10.100.0.10
Nov 29 08:04:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Nov 29 08:04:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 08:04:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Nov 29 08:04:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:14 compute-0 nova_compute[256729]: 2025-11-29 08:04:14.471 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:14 compute-0 ceph-mon[75050]: pgmap v1918: 305 pgs: 305 active+clean; 364 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 991 KiB/s wr, 199 op/s
Nov 29 08:04:14 compute-0 ceph-mon[75050]: osdmap e365: 3 total, 3 up, 3 in
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 364 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 423 KiB/s wr, 104 op/s
Nov 29 08:04:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2679413507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:15 compute-0 sudo[291908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:15 compute-0 sudo[291908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:15 compute-0 sudo[291908]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:15 compute-0 sudo[291933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:15 compute-0 sudo[291933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:15 compute-0 sudo[291933]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.076019893208309e-06 of space, bias 1.0, pg target 0.0024228059679624924 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0037433941972716024 of space, bias 1.0, pg target 1.1230182591814808 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.19924374114817345 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:04:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Nov 29 08:04:15 compute-0 sudo[291958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:15 compute-0 sudo[291958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:15 compute-0 sudo[291958]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:15 compute-0 sudo[291983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:04:15 compute-0 sudo[291983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Nov 29 08:04:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 08:04:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Nov 29 08:04:15 compute-0 ceph-mon[75050]: pgmap v1920: 305 pgs: 305 active+clean; 364 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 423 KiB/s wr, 104 op/s
Nov 29 08:04:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2679413507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:16 compute-0 sudo[291983]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:04:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:04:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:04:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:04:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 361cc5d2-bf5d-472a-8c6c-bdd9d242b8ee does not exist
Nov 29 08:04:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c30fcb84-0c71-4a21-b7da-2bc438654849 does not exist
Nov 29 08:04:16 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c50e0749-9c57-46bc-b2aa-58742a682c9b does not exist
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:04:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:04:16 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:04:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:16 compute-0 sudo[292039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:16 compute-0 sudo[292039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:16 compute-0 sudo[292039]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:16 compute-0 sudo[292064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:16 compute-0 sudo[292064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:16 compute-0 sudo[292064]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:16 compute-0 sudo[292089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:16 compute-0 sudo[292089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:16 compute-0 sudo[292089]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:16 compute-0 sshd-session[291906]: Connection closed by authenticating user root 143.14.121.41 port 48114 [preauth]
Nov 29 08:04:16 compute-0 sudo[292114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:04:16 compute-0 sudo[292114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Nov 29 08:04:16 compute-0 podman[292178]: 2025-11-29 08:04:16.912291331 +0000 UTC m=+0.124896826 container create e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 08:04:16 compute-0 podman[292178]: 2025-11-29 08:04:16.822680488 +0000 UTC m=+0.035285973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Nov 29 08:04:16 compute-0 ceph-mon[75050]: osdmap e366: 3 total, 3 up, 3 in
Nov 29 08:04:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:04:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:04:16 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:04:16 compute-0 systemd[1]: Started libpod-conmon-e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab.scope.
Nov 29 08:04:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:17 compute-0 podman[292178]: 2025-11-29 08:04:17.037196514 +0000 UTC m=+0.249802029 container init e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:17 compute-0 podman[292178]: 2025-11-29 08:04:17.046678568 +0000 UTC m=+0.259284063 container start e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 08:04:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 578 KiB/s wr, 247 op/s
Nov 29 08:04:17 compute-0 systemd[1]: libpod-e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab.scope: Deactivated successfully.
Nov 29 08:04:17 compute-0 admiring_lehmann[292194]: 167 167
Nov 29 08:04:17 compute-0 conmon[292194]: conmon e85422b2c76a5a4c6050 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab.scope/container/memory.events
Nov 29 08:04:17 compute-0 podman[292178]: 2025-11-29 08:04:17.054751553 +0000 UTC m=+0.267357048 container attach e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:04:17 compute-0 podman[292178]: 2025-11-29 08:04:17.055635347 +0000 UTC m=+0.268240842 container died e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 08:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a97a567373d1630b0d1ed933886462c857c9a8d4f9504d4b48d14e46bbb8059a-merged.mount: Deactivated successfully.
Nov 29 08:04:17 compute-0 podman[292178]: 2025-11-29 08:04:17.142682511 +0000 UTC m=+0.355287976 container remove e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:04:17 compute-0 systemd[1]: libpod-conmon-e85422b2c76a5a4c6050fb1565bcd11cad21ba94d8837921878b0d42be6a57ab.scope: Deactivated successfully.
Nov 29 08:04:17 compute-0 podman[292219]: 2025-11-29 08:04:17.389180151 +0000 UTC m=+0.066254440 container create a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ramanujan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:17 compute-0 systemd[1]: Started libpod-conmon-a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff.scope.
Nov 29 08:04:17 compute-0 podman[292219]: 2025-11-29 08:04:17.363318041 +0000 UTC m=+0.040392370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5bded0698577baeabaa9c6735bc6acdbaab35c0106b229686d95ffb2f91319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5bded0698577baeabaa9c6735bc6acdbaab35c0106b229686d95ffb2f91319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5bded0698577baeabaa9c6735bc6acdbaab35c0106b229686d95ffb2f91319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5bded0698577baeabaa9c6735bc6acdbaab35c0106b229686d95ffb2f91319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5bded0698577baeabaa9c6735bc6acdbaab35c0106b229686d95ffb2f91319/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:17 compute-0 nova_compute[256729]: 2025-11-29 08:04:17.533 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:17 compute-0 ovn_controller[153383]: 2025-11-29T08:04:17Z|00038|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.10
Nov 29 08:04:17 compute-0 ovn_controller[153383]: 2025-11-29T08:04:17Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:3d:27:f3 10.100.0.10
Nov 29 08:04:17 compute-0 podman[292219]: 2025-11-29 08:04:17.619204541 +0000 UTC m=+0.296278880 container init a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ramanujan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:04:17 compute-0 podman[292219]: 2025-11-29 08:04:17.632950948 +0000 UTC m=+0.310025197 container start a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ramanujan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:04:17 compute-0 podman[292219]: 2025-11-29 08:04:17.646750357 +0000 UTC m=+0.323824706 container attach a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ramanujan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 08:04:18 compute-0 ceph-mon[75050]: osdmap e367: 3 total, 3 up, 3 in
Nov 29 08:04:18 compute-0 ceph-mon[75050]: pgmap v1923: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 578 KiB/s wr, 247 op/s
Nov 29 08:04:18 compute-0 vigorous_ramanujan[292236]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:04:18 compute-0 vigorous_ramanujan[292236]: --> relative data size: 1.0
Nov 29 08:04:18 compute-0 vigorous_ramanujan[292236]: --> All data devices are unavailable
Nov 29 08:04:18 compute-0 systemd[1]: libpod-a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff.scope: Deactivated successfully.
Nov 29 08:04:18 compute-0 systemd[1]: libpod-a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff.scope: Consumed 1.063s CPU time.
Nov 29 08:04:18 compute-0 podman[292219]: 2025-11-29 08:04:18.757480137 +0000 UTC m=+1.434554376 container died a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:04:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5bded0698577baeabaa9c6735bc6acdbaab35c0106b229686d95ffb2f91319-merged.mount: Deactivated successfully.
Nov 29 08:04:18 compute-0 podman[292219]: 2025-11-29 08:04:18.926691365 +0000 UTC m=+1.603765654 container remove a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ramanujan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 08:04:18 compute-0 systemd[1]: libpod-conmon-a3a0df11a4b502b35751b847681416e46574f03bb06f3e00e583c8a35af5d3ff.scope: Deactivated successfully.
Nov 29 08:04:18 compute-0 sudo[292114]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:19 compute-0 sudo[292279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:19 compute-0 sudo[292279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:19 compute-0 sudo[292279]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 111 KiB/s wr, 166 op/s
Nov 29 08:04:19 compute-0 sudo[292304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Nov 29 08:04:19 compute-0 sudo[292304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:19 compute-0 sudo[292304]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 08:04:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Nov 29 08:04:19 compute-0 sudo[292329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:19 compute-0 sudo[292329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:19 compute-0 sudo[292329]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:19 compute-0 sudo[292354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:04:19 compute-0 sudo[292354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:19 compute-0 nova_compute[256729]: 2025-11-29 08:04:19.474 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:19 compute-0 podman[292421]: 2025-11-29 08:04:19.711657909 +0000 UTC m=+0.061717668 container create 9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 08:04:19 compute-0 systemd[1]: Started libpod-conmon-9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba.scope.
Nov 29 08:04:19 compute-0 podman[292421]: 2025-11-29 08:04:19.675563936 +0000 UTC m=+0.025623775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:19 compute-0 podman[292421]: 2025-11-29 08:04:19.964120639 +0000 UTC m=+0.314180458 container init 9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 08:04:19 compute-0 podman[292421]: 2025-11-29 08:04:19.976056477 +0000 UTC m=+0.326116266 container start 9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swirles, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:04:19 compute-0 elegant_swirles[292437]: 167 167
Nov 29 08:04:19 compute-0 systemd[1]: libpod-9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba.scope: Deactivated successfully.
Nov 29 08:04:20 compute-0 podman[292421]: 2025-11-29 08:04:20.017416242 +0000 UTC m=+0.367476061 container attach 9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:04:20 compute-0 podman[292421]: 2025-11-29 08:04:20.017954906 +0000 UTC m=+0.368014675 container died 9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 08:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-22a9ec1ca27cddce1d6a4657e18757eba9c604a06318cae6c7cfc9fd21f24aaf-merged.mount: Deactivated successfully.
Nov 29 08:04:20 compute-0 ceph-mon[75050]: pgmap v1924: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 111 KiB/s wr, 166 op/s
Nov 29 08:04:20 compute-0 ceph-mon[75050]: osdmap e368: 3 total, 3 up, 3 in
Nov 29 08:04:20 compute-0 podman[292421]: 2025-11-29 08:04:20.310881596 +0000 UTC m=+0.660941345 container remove 9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:20 compute-0 systemd[1]: libpod-conmon-9843d5cd7df01a105fcc9d0293b4589bea4bb1c03491a6c9c421eca7b6d595ba.scope: Deactivated successfully.
Nov 29 08:04:20 compute-0 sshd-session[292146]: Connection closed by authenticating user root 143.14.121.41 port 48126 [preauth]
Nov 29 08:04:20 compute-0 podman[292461]: 2025-11-29 08:04:20.53582216 +0000 UTC m=+0.041488758 container create 61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ramanujan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:04:20 compute-0 systemd[1]: Started libpod-conmon-61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55.scope.
Nov 29 08:04:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:20 compute-0 podman[292461]: 2025-11-29 08:04:20.515759885 +0000 UTC m=+0.021426493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2c63ce2b7cfc39aefa9e4363c921732d2644e7b42e2525f57c681302737f1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2c63ce2b7cfc39aefa9e4363c921732d2644e7b42e2525f57c681302737f1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2c63ce2b7cfc39aefa9e4363c921732d2644e7b42e2525f57c681302737f1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2c63ce2b7cfc39aefa9e4363c921732d2644e7b42e2525f57c681302737f1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:20 compute-0 podman[292461]: 2025-11-29 08:04:20.627521719 +0000 UTC m=+0.133188327 container init 61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 08:04:20 compute-0 podman[292461]: 2025-11-29 08:04:20.635147241 +0000 UTC m=+0.140813819 container start 61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ramanujan, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 08:04:20 compute-0 podman[292461]: 2025-11-29 08:04:20.638703147 +0000 UTC m=+0.144369735 container attach 61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:04:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951301392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1951301392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 111 KiB/s wr, 161 op/s
Nov 29 08:04:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1951301392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1951301392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]: {
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:     "0": [
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:         {
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "devices": [
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "/dev/loop3"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             ],
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_name": "ceph_lv0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_size": "21470642176",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "name": "ceph_lv0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "tags": {
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cluster_name": "ceph",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.crush_device_class": "",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.encrypted": "0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osd_id": "0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.type": "block",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.vdo": "0"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             },
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "type": "block",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "vg_name": "ceph_vg0"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:         }
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:     ],
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:     "1": [
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:         {
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "devices": [
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "/dev/loop4"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             ],
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_name": "ceph_lv1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_size": "21470642176",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "name": "ceph_lv1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "tags": {
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cluster_name": "ceph",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.crush_device_class": "",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.encrypted": "0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osd_id": "1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.type": "block",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.vdo": "0"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             },
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "type": "block",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "vg_name": "ceph_vg1"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:         }
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:     ],
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:     "2": [
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:         {
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "devices": [
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "/dev/loop5"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             ],
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_name": "ceph_lv2",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_size": "21470642176",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "name": "ceph_lv2",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "tags": {
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.cluster_name": "ceph",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.crush_device_class": "",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.encrypted": "0",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osd_id": "2",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.type": "block",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:                 "ceph.vdo": "0"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             },
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "type": "block",
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:             "vg_name": "ceph_vg2"
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:         }
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]:     ]
Nov 29 08:04:21 compute-0 youthful_ramanujan[292478]: }
Nov 29 08:04:21 compute-0 systemd[1]: libpod-61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55.scope: Deactivated successfully.
Nov 29 08:04:21 compute-0 podman[292461]: 2025-11-29 08:04:21.424805851 +0000 UTC m=+0.930472439 container died 61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 08:04:21 compute-0 ovn_controller[153383]: 2025-11-29T08:04:21Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.10
Nov 29 08:04:21 compute-0 ovn_controller[153383]: 2025-11-29T08:04:21Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:3d:27:f3 10.100.0.10
Nov 29 08:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e2c63ce2b7cfc39aefa9e4363c921732d2644e7b42e2525f57c681302737f1b-merged.mount: Deactivated successfully.
Nov 29 08:04:21 compute-0 podman[292461]: 2025-11-29 08:04:21.810073027 +0000 UTC m=+1.315739625 container remove 61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:04:21 compute-0 systemd[1]: libpod-conmon-61ac02d31c6221f3388a3232bb155ee1286f130888bf1c14a7c92d7e042f1a55.scope: Deactivated successfully.
Nov 29 08:04:21 compute-0 sudo[292354]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:21 compute-0 sudo[292499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:21 compute-0 sudo[292499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:21 compute-0 sudo[292499]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:22 compute-0 sudo[292524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:04:22 compute-0 sudo[292524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:22 compute-0 sudo[292524]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:22 compute-0 sudo[292549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:22 compute-0 sudo[292549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:22 compute-0 sudo[292549]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:22 compute-0 sudo[292575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:04:22 compute-0 sudo[292575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:22 compute-0 ceph-mon[75050]: pgmap v1926: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 111 KiB/s wr, 161 op/s
Nov 29 08:04:22 compute-0 nova_compute[256729]: 2025-11-29 08:04:22.536 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:22 compute-0 ovn_controller[153383]: 2025-11-29T08:04:22Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:27:f3 10.100.0.10
Nov 29 08:04:22 compute-0 ovn_controller[153383]: 2025-11-29T08:04:22Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:27:f3 10.100.0.10
Nov 29 08:04:22 compute-0 podman[292642]: 2025-11-29 08:04:22.673158757 +0000 UTC m=+0.061749370 container create 0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:04:22 compute-0 systemd[1]: Started libpod-conmon-0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d.scope.
Nov 29 08:04:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:22 compute-0 podman[292642]: 2025-11-29 08:04:22.651740845 +0000 UTC m=+0.040331458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:22 compute-0 podman[292642]: 2025-11-29 08:04:22.761212797 +0000 UTC m=+0.149803400 container init 0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:22 compute-0 podman[292642]: 2025-11-29 08:04:22.76803122 +0000 UTC m=+0.156621803 container start 0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaum, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 08:04:22 compute-0 podman[292642]: 2025-11-29 08:04:22.771394829 +0000 UTC m=+0.159985412 container attach 0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:04:22 compute-0 keen_chaum[292659]: 167 167
Nov 29 08:04:22 compute-0 systemd[1]: libpod-0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d.scope: Deactivated successfully.
Nov 29 08:04:22 compute-0 conmon[292659]: conmon 0359a89cf8132341135b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d.scope/container/memory.events
Nov 29 08:04:22 compute-0 podman[292642]: 2025-11-29 08:04:22.776426694 +0000 UTC m=+0.165017277 container died 0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaum, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:04:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7de7b95188a09edd00b6c455f0c55521dde04a18233b55fe4eb14dfb0b1a5147-merged.mount: Deactivated successfully.
Nov 29 08:04:22 compute-0 podman[292642]: 2025-11-29 08:04:22.820697796 +0000 UTC m=+0.209288389 container remove 0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaum, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:04:22 compute-0 systemd[1]: libpod-conmon-0359a89cf8132341135b5066aa77d82317f690ce8a3861f9e957406b8aed7e3d.scope: Deactivated successfully.
Nov 29 08:04:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 101 KiB/s wr, 195 op/s
Nov 29 08:04:23 compute-0 podman[292683]: 2025-11-29 08:04:23.059947982 +0000 UTC m=+0.072765603 container create 4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:04:23 compute-0 systemd[1]: Started libpod-conmon-4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d.scope.
Nov 29 08:04:23 compute-0 podman[292683]: 2025-11-29 08:04:23.035554191 +0000 UTC m=+0.048371892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:04:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed4b14e765fc96e2961199552b000bd2b96cc7b2504793633a0f26a62528de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed4b14e765fc96e2961199552b000bd2b96cc7b2504793633a0f26a62528de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed4b14e765fc96e2961199552b000bd2b96cc7b2504793633a0f26a62528de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed4b14e765fc96e2961199552b000bd2b96cc7b2504793633a0f26a62528de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:04:23 compute-0 podman[292683]: 2025-11-29 08:04:23.176251397 +0000 UTC m=+0.189069018 container init 4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:04:23 compute-0 podman[292683]: 2025-11-29 08:04:23.187758104 +0000 UTC m=+0.200575725 container start 4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:04:23 compute-0 podman[292683]: 2025-11-29 08:04:23.206566746 +0000 UTC m=+0.219384397 container attach 4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:04:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2447106928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]: {
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "osd_id": 2,
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "type": "bluestore"
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:     },
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "osd_id": 1,
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "type": "bluestore"
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:     },
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "osd_id": 0,
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:         "type": "bluestore"
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]:     }
Nov 29 08:04:24 compute-0 dazzling_hellman[292700]: }
Nov 29 08:04:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Nov 29 08:04:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 08:04:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Nov 29 08:04:24 compute-0 systemd[1]: libpod-4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d.scope: Deactivated successfully.
Nov 29 08:04:24 compute-0 systemd[1]: libpod-4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d.scope: Consumed 1.010s CPU time.
Nov 29 08:04:24 compute-0 conmon[292700]: conmon 4371fb40a61ae760a65d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d.scope/container/memory.events
Nov 29 08:04:24 compute-0 podman[292683]: 2025-11-29 08:04:24.199445431 +0000 UTC m=+1.212263042 container died 4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ed4b14e765fc96e2961199552b000bd2b96cc7b2504793633a0f26a62528de5-merged.mount: Deactivated successfully.
Nov 29 08:04:24 compute-0 podman[292683]: 2025-11-29 08:04:24.261558029 +0000 UTC m=+1.274375640 container remove 4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:04:24 compute-0 systemd[1]: libpod-conmon-4371fb40a61ae760a65dbd0c7ace02a9312570bd3f3a13ae103c83fdf7453f7d.scope: Deactivated successfully.
Nov 29 08:04:24 compute-0 ceph-mon[75050]: pgmap v1927: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 101 KiB/s wr, 195 op/s
Nov 29 08:04:24 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2447106928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:24 compute-0 ceph-mon[75050]: osdmap e369: 3 total, 3 up, 3 in
Nov 29 08:04:24 compute-0 sudo[292575]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:04:24 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:04:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:04:24 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:04:24 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev df21384a-b53f-444e-af83-4cbfb05c34d9 does not exist
Nov 29 08:04:24 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 67e22665-d200-4824-8877-5c0d1c3aab6e does not exist
Nov 29 08:04:24 compute-0 sudo[292743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:04:24 compute-0 sudo[292743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:24 compute-0 sudo[292743]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:24 compute-0 nova_compute[256729]: 2025-11-29 08:04:24.476 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:24 compute-0 sudo[292768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:04:24 compute-0 sudo[292768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:04:24 compute-0 sudo[292768]: pam_unix(sudo:session): session closed for user root
Nov 29 08:04:24 compute-0 sshd-session[292483]: Connection closed by authenticating user root 143.14.121.41 port 48138 [preauth]
Nov 29 08:04:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 20 KiB/s wr, 91 op/s
Nov 29 08:04:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Nov 29 08:04:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:04:25 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:04:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 08:04:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Nov 29 08:04:26 compute-0 ceph-mon[75050]: pgmap v1929: 305 pgs: 305 active+clean; 368 MiB data, 604 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 20 KiB/s wr, 91 op/s
Nov 29 08:04:26 compute-0 ceph-mon[75050]: osdmap e370: 3 total, 3 up, 3 in
Nov 29 08:04:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 372 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 327 KiB/s wr, 84 op/s
Nov 29 08:04:27 compute-0 nova_compute[256729]: 2025-11-29 08:04:27.539 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Nov 29 08:04:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 08:04:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Nov 29 08:04:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1173196371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:28 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1173196371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:28 compute-0 sshd-session[292793]: Connection closed by authenticating user root 143.14.121.41 port 42566 [preauth]
Nov 29 08:04:28 compute-0 ceph-mon[75050]: pgmap v1931: 305 pgs: 305 active+clean; 372 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 327 KiB/s wr, 84 op/s
Nov 29 08:04:28 compute-0 ceph-mon[75050]: osdmap e371: 3 total, 3 up, 3 in
Nov 29 08:04:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1173196371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1173196371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 372 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 422 KiB/s wr, 55 op/s
Nov 29 08:04:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Nov 29 08:04:29 compute-0 nova_compute[256729]: 2025-11-29 08:04:29.478 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 08:04:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Nov 29 08:04:30 compute-0 ceph-mon[75050]: pgmap v1933: 305 pgs: 305 active+clean; 372 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 422 KiB/s wr, 55 op/s
Nov 29 08:04:30 compute-0 ceph-mon[75050]: osdmap e372: 3 total, 3 up, 3 in
Nov 29 08:04:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 372 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 407 KiB/s wr, 44 op/s
Nov 29 08:04:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4057378470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4057378470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.542 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 08:04:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Nov 29 08:04:32 compute-0 ceph-mon[75050]: pgmap v1935: 305 pgs: 305 active+clean; 372 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 407 KiB/s wr, 44 op/s
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.986 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.987 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.987 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.987 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.987 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.989 256736 INFO nova.compute.manager [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Terminating instance
Nov 29 08:04:32 compute-0 nova_compute[256729]: 2025-11-29 08:04:32.990 256736 DEBUG nova.compute.manager [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:04:33 compute-0 kernel: tap3aeb3e64-11 (unregistering): left promiscuous mode
Nov 29 08:04:33 compute-0 NetworkManager[48962]: <info>  [1764403473.0527] device (tap3aeb3e64-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 372 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 22 KiB/s wr, 76 op/s
Nov 29 08:04:33 compute-0 ovn_controller[153383]: 2025-11-29T08:04:33Z|00196|binding|INFO|Releasing lport 3aeb3e64-1138-4555-bde4-4f5d2e627b7a from this chassis (sb_readonly=0)
Nov 29 08:04:33 compute-0 ovn_controller[153383]: 2025-11-29T08:04:33Z|00197|binding|INFO|Setting lport 3aeb3e64-1138-4555-bde4-4f5d2e627b7a down in Southbound
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.073 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:33 compute-0 ovn_controller[153383]: 2025-11-29T08:04:33Z|00198|binding|INFO|Removing iface tap3aeb3e64-11 ovn-installed in OVS
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.076 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.100 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:33 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 29 08:04:33 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 15.693s CPU time.
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.116 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:6f:f0 10.100.0.10'], port_security=['fa:16:3e:aa:6f:f0 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be64615c-e0f7-4f3c-a2e6-e3b78b09a803', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=3aeb3e64-1138-4555-bde4-4f5d2e627b7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.118 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 3aeb3e64-1138-4555-bde4-4f5d2e627b7a in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.121 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:04:33 compute-0 systemd-machined[217781]: Machine qemu-19-instance-00000013 terminated.
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.136 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb48933-0cbc-436b-816e-bf6588e795ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.171 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[76560f1e-447d-4523-98c1-8ffe53799a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.175 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b80647-3c22-4306-9bc3-5291ca40d6f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.208 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[85a5a030-5a8e-4fa2-9805-f413caf8eb84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.228 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[382407ff-a250-4ea7-b9dd-4b24c8685b4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568955, 'reachable_time': 33122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292813, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.232 256736 INFO nova.virt.libvirt.driver [-] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Instance destroyed successfully.
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.233 256736 DEBUG nova.objects.instance [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'resources' on Instance uuid f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.247 256736 DEBUG nova.virt.libvirt.vif [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1026264877',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1026264877',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1026264877',id=19,image_ref='f7315a32-137c-4094-b682-0e4e6066843f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ1ZrolRCF6iA70/5EqxJPvz7IuR5YX7KRRGcfQUylcOTLvF5Qe/G8xWb4/Hpy0xQYRaluMYZS24WZm4N5QiZpQWWo2/zF+jfBx5rKk4cjRXWcqYhWVpc78G6H8G1bSNEQ==',key_name='tempest-keypair-766407201',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:03:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-5w2sikm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-776329285',image_owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:03:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9664e420085d412aae898a6ec021b24f',uuid=f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.247 256736 DEBUG nova.network.os_vif_util [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "address": "fa:16:3e:aa:6f:f0", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aeb3e64-11", "ovs_interfaceid": "3aeb3e64-1138-4555-bde4-4f5d2e627b7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.248 256736 DEBUG nova.network.os_vif_util [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:6f:f0,bridge_name='br-int',has_traffic_filtering=True,id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aeb3e64-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.248 256736 DEBUG os_vif [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:6f:f0,bridge_name='br-int',has_traffic_filtering=True,id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aeb3e64-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.251 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.251 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3aeb3e64-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.253 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.255 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.252 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[78f92f19-5264-418d-abff-7317ae4787d0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568972, 'tstamp': 568972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292821, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568975, 'tstamp': 568975}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292821, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.258 256736 INFO os_vif [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:6f:f0,bridge_name='br-int',has_traffic_filtering=True,id=3aeb3e64-1138-4555-bde4-4f5d2e627b7a,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aeb3e64-11')
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.258 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:33 compute-0 sshd-session[292795]: Connection closed by authenticating user root 143.14.121.41 port 42574 [preauth]
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.260 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.261 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.262 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.263 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.263 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:33 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:33.264 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.442 256736 INFO nova.virt.libvirt.driver [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Deleting instance files /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_del
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.443 256736 INFO nova.virt.libvirt.driver [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Deletion of /var/lib/nova/instances/f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f_del complete
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.498 256736 INFO nova.compute.manager [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Took 0.51 seconds to destroy the instance on the hypervisor.
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.498 256736 DEBUG oslo.service.loopingcall [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.499 256736 DEBUG nova.compute.manager [-] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.499 256736 DEBUG nova.network.neutron [-] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:04:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Nov 29 08:04:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 08:04:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Nov 29 08:04:33 compute-0 ceph-mon[75050]: osdmap e373: 3 total, 3 up, 3 in
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.761 256736 DEBUG nova.compute.manager [req-353578f2-59f6-4fb3-9b95-84f89fa2471a req-487d34bd-4ba1-4b85-845a-4ef726fc30ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-vif-unplugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.762 256736 DEBUG oslo_concurrency.lockutils [req-353578f2-59f6-4fb3-9b95-84f89fa2471a req-487d34bd-4ba1-4b85-845a-4ef726fc30ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.762 256736 DEBUG oslo_concurrency.lockutils [req-353578f2-59f6-4fb3-9b95-84f89fa2471a req-487d34bd-4ba1-4b85-845a-4ef726fc30ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.762 256736 DEBUG oslo_concurrency.lockutils [req-353578f2-59f6-4fb3-9b95-84f89fa2471a req-487d34bd-4ba1-4b85-845a-4ef726fc30ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.762 256736 DEBUG nova.compute.manager [req-353578f2-59f6-4fb3-9b95-84f89fa2471a req-487d34bd-4ba1-4b85-845a-4ef726fc30ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] No waiting events found dispatching network-vif-unplugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:33 compute-0 nova_compute[256729]: 2025-11-29 08:04:33.763 256736 DEBUG nova.compute.manager [req-353578f2-59f6-4fb3-9b95-84f89fa2471a req-487d34bd-4ba1-4b85-845a-4ef726fc30ef ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-vif-unplugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Nov 29 08:04:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 08:04:34 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Nov 29 08:04:34 compute-0 nova_compute[256729]: 2025-11-29 08:04:34.538 256736 DEBUG nova.network.neutron [-] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:34 compute-0 ceph-mon[75050]: pgmap v1937: 305 pgs: 305 active+clean; 372 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 22 KiB/s wr, 76 op/s
Nov 29 08:04:34 compute-0 ceph-mon[75050]: osdmap e374: 3 total, 3 up, 3 in
Nov 29 08:04:34 compute-0 ceph-mon[75050]: osdmap e375: 3 total, 3 up, 3 in
Nov 29 08:04:34 compute-0 nova_compute[256729]: 2025-11-29 08:04:34.578 256736 INFO nova.compute.manager [-] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Took 1.08 seconds to deallocate network for instance.
Nov 29 08:04:34 compute-0 nova_compute[256729]: 2025-11-29 08:04:34.613 256736 DEBUG nova.compute.manager [req-f4b25947-5018-49ff-b00f-75a55bad6564 req-8c624c95-6c8e-4eb4-a169-f505a2eab51c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-vif-deleted-3aeb3e64-1138-4555-bde4-4f5d2e627b7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:34 compute-0 podman[292846]: 2025-11-29 08:04:34.691826245 +0000 UTC m=+0.052457131 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 08:04:34 compute-0 podman[292845]: 2025-11-29 08:04:34.703095865 +0000 UTC m=+0.068292973 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 08:04:34 compute-0 podman[292844]: 2025-11-29 08:04:34.725843062 +0000 UTC m=+0.092815008 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
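
The three podman health_status events above come from each container's configured healthcheck ('test': '/openstack/healthcheck'). A sketch of running the same probes by hand, assuming podman is on PATH and using the container names from the events:

    import subprocess

    # Exit code 0 from "podman healthcheck run" corresponds to
    # health_status=healthy in the events above.
    for name in ("ovn_metadata_agent", "multipathd", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)
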
Nov 29 08:04:34 compute-0 nova_compute[256729]: 2025-11-29 08:04:34.754 256736 INFO nova.compute.manager [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Took 0.18 seconds to detach 1 volumes for instance.
Nov 29 08:04:34 compute-0 nova_compute[256729]: 2025-11-29 08:04:34.755 256736 DEBUG nova.compute.manager [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Deleting volume: 86c4247c-01b7-40b2-b116-bdc19256ee22 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 08:04:34 compute-0 nova_compute[256729]: 2025-11-29 08:04:34.921 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:34 compute-0 nova_compute[256729]: 2025-11-29 08:04:34.922 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.016 256736 DEBUG oslo_concurrency.processutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 372 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 31 KiB/s wr, 109 op/s
Nov 29 08:04:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:35 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940033584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.469 256736 DEBUG oslo_concurrency.processutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.477 256736 DEBUG nova.compute.provider_tree [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.533 256736 DEBUG nova.scheduler.client.report [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
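
While holding the compute_resources lock, the resource tracker refreshes storage statistics with the ceph df call shown above and compares the result against its cached placement inventory. A sketch of the same probe, assuming the cluster and the client.openstack keyring from the logged command line are reachable:

    import json
    import subprocess

    # The exact command processutils logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    # Cluster-wide totals; DISK_GB inventory is derived from figures like these.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
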
Nov 29 08:04:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:04:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.630 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 08:04:35 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Nov 29 08:04:35 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2940033584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.782 256736 INFO nova.scheduler.client.report [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Deleted allocations for instance f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.956 256736 DEBUG nova.compute.manager [req-3bc19938-e9f6-44fb-931f-e913777902e0 req-e135fa20-0abd-4d1f-995c-4fe56656dfe8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received event network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.957 256736 DEBUG oslo_concurrency.lockutils [req-3bc19938-e9f6-44fb-931f-e913777902e0 req-e135fa20-0abd-4d1f-995c-4fe56656dfe8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.957 256736 DEBUG oslo_concurrency.lockutils [req-3bc19938-e9f6-44fb-931f-e913777902e0 req-e135fa20-0abd-4d1f-995c-4fe56656dfe8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.958 256736 DEBUG oslo_concurrency.lockutils [req-3bc19938-e9f6-44fb-931f-e913777902e0 req-e135fa20-0abd-4d1f-995c-4fe56656dfe8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.958 256736 DEBUG nova.compute.manager [req-3bc19938-e9f6-44fb-931f-e913777902e0 req-e135fa20-0abd-4d1f-995c-4fe56656dfe8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] No waiting events found dispatching network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:35 compute-0 nova_compute[256729]: 2025-11-29 08:04:35.959 256736 WARNING nova.compute.manager [req-3bc19938-e9f6-44fb-931f-e913777902e0 req-e135fa20-0abd-4d1f-995c-4fe56656dfe8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Received unexpected event network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a for instance with vm_state deleted and task_state None.
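
The WARNING above is a benign ordering race on teardown: a late network-vif-plugged event reaches the compute manager after the instance has already been deleted, so there is no waiter to dispatch it to. A sketch of the guard that produces the message (illustrative, not Nova's exact code):

    def process_instance_event(instance, event_name):
        # Events for an instance already in vm_state "deleted" are logged
        # and dropped instead of being handed to a waiter.
        if instance["vm_state"] == "deleted" and instance["task_state"] is None:
            print("Received unexpected event %s for instance with "
                  "vm_state deleted and task_state None." % event_name)
            return

    process_instance_event(
        {"vm_state": "deleted", "task_state": None},
        "network-vif-plugged-3aeb3e64-1138-4555-bde4-4f5d2e627b7a")
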
Nov 29 08:04:36 compute-0 nova_compute[256729]: 2025-11-29 08:04:36.303 256736 DEBUG oslo_concurrency.lockutils [None req-ae55cf4b-9304-4ed9-8430-5a7fddec0d24 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4269990855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4269990855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1474447821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1474447821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75050]: pgmap v1940: 305 pgs: 305 active+clean; 372 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 31 KiB/s wr, 109 op/s
Nov 29 08:04:36 compute-0 ceph-mon[75050]: osdmap e376: 3 total, 3 up, 3 in
Nov 29 08:04:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4269990855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4269990855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1474447821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1474447821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
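
The audit burst above (df plus osd pool get-quota on the volumes pool, from two controller-side clients) matches a periodic capacity poll such as the Cinder RBD driver's volume-stats refresh. A sketch of issuing the same pair of monitor commands through python-rados, assuming the client.openstack keyring:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            # mon_command dispatches the same JSON the audit log records.
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(ret, out or errs)
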
Nov 29 08:04:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 355 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 17 KiB/s wr, 226 op/s
Nov 29 08:04:37 compute-0 nova_compute[256729]: 2025-11-29 08:04:37.546 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Nov 29 08:04:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 08:04:37 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.254 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-0 sshd-session[292842]: Connection closed by authenticating user root 143.14.121.41 port 52946 [preauth]
Nov 29 08:04:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2547451342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.660 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.661 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.661 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.662 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.662 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.663 256736 INFO nova.compute.manager [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Terminating instance
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.665 256736 DEBUG nova.compute.manager [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:04:38 compute-0 ceph-mon[75050]: pgmap v1942: 305 pgs: 305 active+clean; 355 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 17 KiB/s wr, 226 op/s
Nov 29 08:04:38 compute-0 ceph-mon[75050]: osdmap e377: 3 total, 3 up, 3 in
Nov 29 08:04:38 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2547451342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:38 compute-0 kernel: tap73b2234c-5b (unregistering): left promiscuous mode
Nov 29 08:04:38 compute-0 NetworkManager[48962]: <info>  [1764403478.7249] device (tap73b2234c-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:38 compute-0 ovn_controller[153383]: 2025-11-29T08:04:38Z|00199|binding|INFO|Releasing lport 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 from this chassis (sb_readonly=0)
Nov 29 08:04:38 compute-0 ovn_controller[153383]: 2025-11-29T08:04:38Z|00200|binding|INFO|Setting lport 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 down in Southbound
Nov 29 08:04:38 compute-0 ovn_controller[153383]: 2025-11-29T08:04:38Z|00201|binding|INFO|Removing iface tap73b2234c-5b ovn-installed in OVS
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.739 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:38.747 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:bb:d9 10.100.0.6'], port_security=['fa:16:3e:91:bb:d9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a3133710-8c54-433d-9263-c081a69bf339', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a943ee2c-e86d-4c9d-b5a9-5767d5e198b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:38.749 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
Nov 29 08:04:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:38.752 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d9c390c-362a-41a5-93b0-23344eb99ae5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:38.753 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[74da96b5-2718-45e5-ad3d-b2540e1af6a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:38.754 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace which is not needed anymore
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.769 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 29 08:04:38 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 18.166s CPU time.
Nov 29 08:04:38 compute-0 systemd-machined[217781]: Machine qemu-17-instance-00000011 terminated.
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.916 256736 INFO nova.virt.libvirt.driver [-] [instance: a3133710-8c54-433d-9263-c081a69bf339] Instance destroyed successfully.
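
Nova's libvirt driver reports the destroy above once the qemu machine scope exits. The equivalent direct operation through libvirt-python, assuming the domain name from the systemd messages (nova then cleans up the leftover instance files itself):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000011")
    dom.destroy()  # hard power-off, like the teardown logged above
    conn.close()
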
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.917 256736 DEBUG nova.objects.instance [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'resources' on Instance uuid a3133710-8c54-433d-9263-c081a69bf339 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.930 256736 DEBUG nova.virt.libvirt.vif [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:02:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-210416915',display_name='tempest-TestVolumeBootPattern-volume-backed-server-210416915',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-210416915',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFZhEoyK4Ll6+AydDTrWJFv/RbMCQAtShDe5Niki6glH36XIILYUyVDKeQk/cn/o6Cwac5T8/p7rhVRDTi0GPGnurLUi2m9wKBB92zkcjtET1jLWN6TzhYb5yEgAutrqkA==',key_name='tempest-keypair-370509897',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:02:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-vw6lyc25',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:02:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9664e420085d412aae898a6ec021b24f',uuid=a3133710-8c54-433d-9263-c081a69bf339,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.930 256736 DEBUG nova.network.os_vif_util [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "address": "fa:16:3e:91:bb:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73b2234c-5b", "ovs_interfaceid": "73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.931 256736 DEBUG nova.network.os_vif_util [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:91:bb:d9,bridge_name='br-int',has_traffic_filtering=True,id=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b2234c-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.932 256736 DEBUG os_vif [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:bb:d9,bridge_name='br-int',has_traffic_filtering=True,id=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b2234c-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.934 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.934 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73b2234c-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.937 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.938 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:38 compute-0 nova_compute[256729]: 2025-11-29 08:04:38.941 256736 INFO os_vif [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:bb:d9,bridge_name='br-int',has_traffic_filtering=True,id=73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73b2234c-5b')
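
The unplug above is os-vif committing a single OVSDB transaction (DelPortCommand) to drop tap73b2234c-5b from br-int. A minimal sketch of the same transaction via ovsdbapp, assuming the default local ovsdb-server socket (this talks to OVS directly, bypassing os-vif):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Attach an OVSDB IDL to the local Open_vSwitch database.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Equivalent of the DelPortCommand shown in the transaction log above.
    api.del_port("tap73b2234c-5b", bridge="br-int",
                 if_exists=True).execute(check_error=True)
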
Nov 29 08:04:38 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289833]: [NOTICE]   (289838) : haproxy version is 2.8.14-c23fe91
Nov 29 08:04:38 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289833]: [NOTICE]   (289838) : path to executable is /usr/sbin/haproxy
Nov 29 08:04:38 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289833]: [WARNING]  (289838) : Exiting Master process...
Nov 29 08:04:38 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289833]: [ALERT]    (289838) : Current worker (289840) exited with code 143 (Terminated)
Nov 29 08:04:38 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[289833]: [WARNING]  (289838) : All workers exited. Exiting... (0)
Nov 29 08:04:38 compute-0 systemd[1]: libpod-7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778.scope: Deactivated successfully.
Nov 29 08:04:38 compute-0 podman[292953]: 2025-11-29 08:04:38.97071065 +0000 UTC m=+0.079475593 container died 7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 08:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778-userdata-shm.mount: Deactivated successfully.
Nov 29 08:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8470f44048ef3459d0b3c5d4e39829fe69df53ea8f94aece510a538145dc5c2-merged.mount: Deactivated successfully.
Nov 29 08:04:39 compute-0 podman[292953]: 2025-11-29 08:04:39.019848961 +0000 UTC m=+0.128613874 container cleanup 7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:04:39 compute-0 systemd[1]: libpod-conmon-7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778.scope: Deactivated successfully.
Nov 29 08:04:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 350 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 157 KiB/s rd, 9.5 KiB/s wr, 222 op/s
Nov 29 08:04:39 compute-0 podman[293010]: 2025-11-29 08:04:39.101379488 +0000 UTC m=+0.054838085 container remove 7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.111 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7c11bb-f7f7-4042-8d8f-91ba357f5cd6]: (4, ('Sat Nov 29 08:04:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778)\n7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778\nSat Nov 29 08:04:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778)\n7ae1f85da2228a3664585a1197a408f0eb961f6c7298456b391af77471baa778\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.112 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5566a59b-9938-4bcf-8499-8a922bcda945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.113 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.115 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:39 compute-0 kernel: tap2d9c390c-30: left promiscuous mode
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.132 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.135 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd69d7d-3afa-483e-8714-89b256cfcea3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.148 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4293ca-bf16-45fc-ab36-b57f9d75a245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.150 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2f6edf-9c75-4330-8cd7-96ef287844ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.158 256736 INFO nova.virt.libvirt.driver [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Deleting instance files /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339_del
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.158 256736 INFO nova.virt.libvirt.driver [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Deletion of /var/lib/nova/instances/a3133710-8c54-433d-9263-c081a69bf339_del complete
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.168 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[afca20aa-ae8d-4b08-a715-97242b434af6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568942, 'reachable_time': 25148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293026, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.170 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:04:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:39.170 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[89977ac6-38ee-41af-97d3-e60d853b4314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d9c390c\x2d362a\x2d41a5\x2d93b0\x2d23344eb99ae5.mount: Deactivated successfully.
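
With the last VIF gone from network 2d9c390c, the metadata agent stops its haproxy sidecar and removes the ovnmeta- namespace, which systemd confirms above. A sketch of that final step, assuming root privileges and pyroute2 (neutron performs it through its privsep daemon rather than in-process):

    from pyroute2 import netns

    # Same operation remove_netns() reports in ip_lib.py above;
    # requires CAP_SYS_ADMIN.
    netns.remove("ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5")
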
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.211 256736 INFO nova.compute.manager [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Took 0.55 seconds to destroy the instance on the hypervisor.
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.211 256736 DEBUG oslo.service.loopingcall [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.212 256736 DEBUG nova.compute.manager [-] [instance: a3133710-8c54-433d-9263-c081a69bf339] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.212 256736 DEBUG nova.network.neutron [-] [instance: a3133710-8c54-433d-9263-c081a69bf339] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.329 256736 DEBUG nova.compute.manager [req-612778b5-cb6c-4da5-972e-e192e9ffc19c req-9e93486b-6202-40e3-9fef-3187e83a0681 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-vif-unplugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.329 256736 DEBUG oslo_concurrency.lockutils [req-612778b5-cb6c-4da5-972e-e192e9ffc19c req-9e93486b-6202-40e3-9fef-3187e83a0681 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.330 256736 DEBUG oslo_concurrency.lockutils [req-612778b5-cb6c-4da5-972e-e192e9ffc19c req-9e93486b-6202-40e3-9fef-3187e83a0681 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.330 256736 DEBUG oslo_concurrency.lockutils [req-612778b5-cb6c-4da5-972e-e192e9ffc19c req-9e93486b-6202-40e3-9fef-3187e83a0681 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.330 256736 DEBUG nova.compute.manager [req-612778b5-cb6c-4da5-972e-e192e9ffc19c req-9e93486b-6202-40e3-9fef-3187e83a0681 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] No waiting events found dispatching network-vif-unplugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:39 compute-0 nova_compute[256729]: 2025-11-29 08:04:39.330 256736 DEBUG nova.compute.manager [req-612778b5-cb6c-4da5-972e-e192e9ffc19c req-9e93486b-6202-40e3-9fef-3187e83a0681 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-vif-unplugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Nov 29 08:04:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 08:04:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.158 256736 DEBUG nova.network.neutron [-] [instance: a3133710-8c54-433d-9263-c081a69bf339] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.176 256736 INFO nova.compute.manager [-] [instance: a3133710-8c54-433d-9263-c081a69bf339] Took 0.96 seconds to deallocate network for instance.
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.259 256736 DEBUG nova.compute.manager [req-7daea171-6239-44f8-8236-e5d99a132abb req-ec812b95-34e7-4d22-9871-f72d9de998d2 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-vif-deleted-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.348 256736 INFO nova.compute.manager [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Took 0.17 seconds to detach 1 volumes for instance.
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.350 256736 DEBUG nova.compute.manager [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Deleting volume: f60c2fe3-0c52-4766-b57f-95edcd3ecac7 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.573 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.573 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:40 compute-0 nova_compute[256729]: 2025-11-29 08:04:40.650 256736 DEBUG oslo_concurrency.processutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:40 compute-0 ceph-mon[75050]: pgmap v1944: 305 pgs: 305 active+clean; 350 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 157 KiB/s rd, 9.5 KiB/s wr, 222 op/s
Nov 29 08:04:40 compute-0 ceph-mon[75050]: osdmap e378: 3 total, 3 up, 3 in
Nov 29 08:04:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 350 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 8.7 KiB/s wr, 203 op/s
Nov 29 08:04:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281764146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.101 256736 DEBUG oslo_concurrency.processutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.111 256736 DEBUG nova.compute.provider_tree [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.131 256736 DEBUG nova.scheduler.client.report [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.174 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4248093935' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4248093935' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.214 256736 INFO nova.scheduler.client.report [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Deleted allocations for instance a3133710-8c54-433d-9263-c081a69bf339
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.282 256736 DEBUG oslo_concurrency.lockutils [None req-8d4e5515-9079-43ad-94a7-ce850a387470 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.410 256736 DEBUG nova.compute.manager [req-4d8e213b-ed74-4bcd-99da-68c9b6d62a2c req-aa40c881-bab9-46cc-9746-34c8362111de ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received event network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.411 256736 DEBUG oslo_concurrency.lockutils [req-4d8e213b-ed74-4bcd-99da-68c9b6d62a2c req-aa40c881-bab9-46cc-9746-34c8362111de ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "a3133710-8c54-433d-9263-c081a69bf339-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.411 256736 DEBUG oslo_concurrency.lockutils [req-4d8e213b-ed74-4bcd-99da-68c9b6d62a2c req-aa40c881-bab9-46cc-9746-34c8362111de ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.412 256736 DEBUG oslo_concurrency.lockutils [req-4d8e213b-ed74-4bcd-99da-68c9b6d62a2c req-aa40c881-bab9-46cc-9746-34c8362111de ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "a3133710-8c54-433d-9263-c081a69bf339-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.413 256736 DEBUG nova.compute.manager [req-4d8e213b-ed74-4bcd-99da-68c9b6d62a2c req-aa40c881-bab9-46cc-9746-34c8362111de ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] No waiting events found dispatching network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:41 compute-0 nova_compute[256729]: 2025-11-29 08:04:41.413 256736 WARNING nova.compute.manager [req-4d8e213b-ed74-4bcd-99da-68c9b6d62a2c req-aa40c881-bab9-46cc-9746-34c8362111de ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: a3133710-8c54-433d-9263-c081a69bf339] Received unexpected event network-vif-plugged-73b2234c-5b4e-4e79-8ff7-a71d3fbe00e7 for instance with vm_state deleted and task_state None.
Nov 29 08:04:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Nov 29 08:04:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4281764146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4248093935' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4248093935' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 08:04:41 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.724926) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403481725050, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2440, "num_deletes": 270, "total_data_size": 3491514, "memory_usage": 3547616, "flush_reason": "Manual Compaction"}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403481755875, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3422252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32028, "largest_seqno": 34467, "table_properties": {"data_size": 3410777, "index_size": 7585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 24270, "raw_average_key_size": 21, "raw_value_size": 3387779, "raw_average_value_size": 3011, "num_data_blocks": 328, "num_entries": 1125, "num_filter_entries": 1125, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403321, "oldest_key_time": 1764403321, "file_creation_time": 1764403481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 30998 microseconds, and 15047 cpu microseconds.
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.755935) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3422252 bytes OK
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.755956) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.757333) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.757346) EVENT_LOG_v1 {"time_micros": 1764403481757342, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.757375) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3480920, prev total WAL file size 3480920, number of live WAL files 2.
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.760293) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3342KB)], [68(8231KB)]
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403481760330, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11850804, "oldest_snapshot_seqno": -1}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6576 keys, 10023268 bytes, temperature: kUnknown
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403481821386, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 10023268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9975193, "index_size": 30555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 166795, "raw_average_key_size": 25, "raw_value_size": 9852818, "raw_average_value_size": 1498, "num_data_blocks": 1226, "num_entries": 6576, "num_filter_entries": 6576, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.821821) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10023268 bytes
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.823829) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.6 rd, 163.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 7115, records dropped: 539 output_compression: NoCompression
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.823859) EVENT_LOG_v1 {"time_micros": 1764403481823845, "job": 38, "event": "compaction_finished", "compaction_time_micros": 61228, "compaction_time_cpu_micros": 23896, "output_level": 6, "num_output_files": 1, "total_output_size": 10023268, "num_input_records": 7115, "num_output_records": 6576, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403481825627, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403481828966, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.759601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.829196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.829205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.829210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.829214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:04:41 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:04:41.829218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:04:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3502203157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3502203157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:42 compute-0 sshd-session[292928]: Connection closed by authenticating user root 143.14.121.41 port 52950 [preauth]
Nov 29 08:04:42 compute-0 nova_compute[256729]: 2025-11-29 08:04:42.548 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:42 compute-0 ceph-mon[75050]: pgmap v1946: 305 pgs: 305 active+clean; 350 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 8.7 KiB/s wr, 203 op/s
Nov 29 08:04:42 compute-0 ceph-mon[75050]: osdmap e379: 3 total, 3 up, 3 in
Nov 29 08:04:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3502203157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3502203157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 297 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 5.5 KiB/s wr, 138 op/s
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.702 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.703 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.704 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.704 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.705 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.707 256736 INFO nova.compute.manager [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Terminating instance
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.709 256736 DEBUG nova.compute.manager [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:04:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Nov 29 08:04:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 08:04:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Nov 29 08:04:43 compute-0 kernel: tap94ee6a33-75 (unregistering): left promiscuous mode
Nov 29 08:04:43 compute-0 NetworkManager[48962]: <info>  [1764403483.7801] device (tap94ee6a33-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:04:43 compute-0 ovn_controller[153383]: 2025-11-29T08:04:43Z|00202|binding|INFO|Releasing lport 94ee6a33-75bb-43b8-952b-43a160169df4 from this chassis (sb_readonly=0)
Nov 29 08:04:43 compute-0 ovn_controller[153383]: 2025-11-29T08:04:43Z|00203|binding|INFO|Setting lport 94ee6a33-75bb-43b8-952b-43a160169df4 down in Southbound
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.793 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:43 compute-0 ovn_controller[153383]: 2025-11-29T08:04:43Z|00204|binding|INFO|Removing iface tap94ee6a33-75 ovn-installed in OVS
Nov 29 08:04:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:43.802 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:27:f3 10.100.0.10'], port_security=['fa:16:3e:3d:27:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7b834f92-a941-48d4-830a-98e70067cabb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc8975a3-8b30-4fd7-b465-76d299802b38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=94ee6a33-75bb-43b8-952b-43a160169df4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:04:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:43.804 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 94ee6a33-75bb-43b8-952b-43a160169df4 in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c unbound from our chassis
Nov 29 08:04:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:43.805 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:04:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:43.806 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6914a98b-4176-4ac4-b84a-0fafc470e146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:43.807 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace which is not needed anymore
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.842 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:43 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 29 08:04:43 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 16.905s CPU time.
Nov 29 08:04:43 compute-0 systemd-machined[217781]: Machine qemu-20-instance-00000014 terminated.
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.936 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.939 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.952 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.956 256736 INFO nova.virt.libvirt.driver [-] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Instance destroyed successfully.
Nov 29 08:04:43 compute-0 nova_compute[256729]: 2025-11-29 08:04:43.957 256736 DEBUG nova.objects.instance [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'resources' on Instance uuid 7b834f92-a941-48d4-830a-98e70067cabb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:04:43 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [NOTICE]   (291798) : haproxy version is 2.8.14-c23fe91
Nov 29 08:04:43 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [NOTICE]   (291798) : path to executable is /usr/sbin/haproxy
Nov 29 08:04:43 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [WARNING]  (291798) : Exiting Master process...
Nov 29 08:04:43 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [WARNING]  (291798) : Exiting Master process...
Nov 29 08:04:43 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [ALERT]    (291798) : Current worker (291800) exited with code 143 (Terminated)
Nov 29 08:04:43 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[291794]: [WARNING]  (291798) : All workers exited. Exiting... (0)
Nov 29 08:04:43 compute-0 systemd[1]: libpod-21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1.scope: Deactivated successfully.
Nov 29 08:04:43 compute-0 podman[293077]: 2025-11-29 08:04:43.997881809 +0000 UTC m=+0.056766156 container died 21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.019 256736 DEBUG nova.virt.libvirt.vif [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1211888278',display_name='tempest-TransferEncryptedVolumeTest-server-1211888278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1211888278',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOK0uPKk5+iwu2ACxwLiXPfPFKjqAeuswoaNdzNGpYFdv9fCZRffGqJNvJmfqnbg+KUupmPFmswjEh+khO5A2TFlJ9LMuOBogxQ7cFR7kmTFduCVQRkpWi0Jux9/KRhlg==',key_name='tempest-TransferEncryptedVolumeTest-347364744',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:04:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-znuo30b0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:04:04Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=7b834f92-a941-48d4-830a-98e70067cabb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.021 256736 DEBUG nova.network.os_vif_util [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "94ee6a33-75bb-43b8-952b-43a160169df4", "address": "fa:16:3e:3d:27:f3", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94ee6a33-75", "ovs_interfaceid": "94ee6a33-75bb-43b8-952b-43a160169df4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.023 256736 DEBUG nova.network.os_vif_util [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:27:f3,bridge_name='br-int',has_traffic_filtering=True,id=94ee6a33-75bb-43b8-952b-43a160169df4,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94ee6a33-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.024 256736 DEBUG os_vif [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:27:f3,bridge_name='br-int',has_traffic_filtering=True,id=94ee6a33-75bb-43b8-952b-43a160169df4,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94ee6a33-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.027 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.028 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94ee6a33-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.034 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.037 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.048 256736 INFO os_vif [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:27:f3,bridge_name='br-int',has_traffic_filtering=True,id=94ee6a33-75bb-43b8-952b-43a160169df4,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94ee6a33-75')
Nov 29 08:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1-userdata-shm.mount: Deactivated successfully.
Nov 29 08:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-224c32bf320da23ab62bd7d6298d3092abc8e4526efa69166490fb6cfa82db94-merged.mount: Deactivated successfully.
Nov 29 08:04:44 compute-0 podman[293077]: 2025-11-29 08:04:44.06643244 +0000 UTC m=+0.125316767 container cleanup 21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:04:44 compute-0 systemd[1]: libpod-conmon-21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1.scope: Deactivated successfully.
Nov 29 08:04:44 compute-0 podman[293120]: 2025-11-29 08:04:44.157788618 +0000 UTC m=+0.058335188 container remove 21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.168 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[047f3e55-765d-44da-89a8-ce2cd4f515b5]: (4, ('Sat Nov 29 08:04:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1)\n21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1\nSat Nov 29 08:04:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1)\n21a0306451b43d7d7ecc9366fdcbd51471dcc2dc589345e342443b134b85c7b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.171 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ad300b19-8a24-4f8e-ad83-dfe910fc4e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.173 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.175 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:44 compute-0 kernel: tap45f1bbc0-c0: left promiscuous mode
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.179 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.185 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b9437f5e-97ff-45b2-8221-52dddb24a872]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.204 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.207 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c2589081-d7b1-4962-80a5-5b8a3494fbb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.208 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3732c293-470f-4b39-a0c5-52401f1ae80d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.235 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fa29ca70-bc7b-45a3-874e-ee97614f0a74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575641, 'reachable_time': 18831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293147, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d45f1bbc0\x2dc06e\x2d4a64\x2d9d82\x2d3a4cbaa9482c.mount: Deactivated successfully.
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.240 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:04:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:44.241 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[381f0034-d2a1-4799-9c80-2ac6717125d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.247 256736 DEBUG nova.compute.manager [req-8e066348-c75f-48e5-b3ec-942456058433 req-90042d11-b0ef-455e-9d1f-bf4df28b4e96 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-vif-unplugged-94ee6a33-75bb-43b8-952b-43a160169df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.247 256736 DEBUG oslo_concurrency.lockutils [req-8e066348-c75f-48e5-b3ec-942456058433 req-90042d11-b0ef-455e-9d1f-bf4df28b4e96 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.247 256736 DEBUG oslo_concurrency.lockutils [req-8e066348-c75f-48e5-b3ec-942456058433 req-90042d11-b0ef-455e-9d1f-bf4df28b4e96 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.247 256736 DEBUG oslo_concurrency.lockutils [req-8e066348-c75f-48e5-b3ec-942456058433 req-90042d11-b0ef-455e-9d1f-bf4df28b4e96 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.248 256736 DEBUG nova.compute.manager [req-8e066348-c75f-48e5-b3ec-942456058433 req-90042d11-b0ef-455e-9d1f-bf4df28b4e96 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] No waiting events found dispatching network-vif-unplugged-94ee6a33-75bb-43b8-952b-43a160169df4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.248 256736 DEBUG nova.compute.manager [req-8e066348-c75f-48e5-b3ec-942456058433 req-90042d11-b0ef-455e-9d1f-bf4df28b4e96 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-vif-unplugged-94ee6a33-75bb-43b8-952b-43a160169df4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.258 256736 INFO nova.virt.libvirt.driver [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Deleting instance files /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb_del
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.259 256736 INFO nova.virt.libvirt.driver [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Deletion of /var/lib/nova/instances/7b834f92-a941-48d4-830a-98e70067cabb_del complete
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.317 256736 INFO nova.compute.manager [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Took 0.61 seconds to destroy the instance on the hypervisor.
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.318 256736 DEBUG oslo.service.loopingcall [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.318 256736 DEBUG nova.compute.manager [-] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:04:44 compute-0 nova_compute[256729]: 2025-11-29 08:04:44.318 256736 DEBUG nova.network.neutron [-] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:04:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Nov 29 08:04:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 08:04:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Nov 29 08:04:44 compute-0 ceph-mon[75050]: pgmap v1948: 305 pgs: 305 active+clean; 297 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 5.5 KiB/s wr, 138 op/s
Nov 29 08:04:44 compute-0 ceph-mon[75050]: osdmap e380: 3 total, 3 up, 3 in
Nov 29 08:04:44 compute-0 ceph-mon[75050]: osdmap e381: 3 total, 3 up, 3 in
Nov 29 08:04:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 270 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 4.3 KiB/s wr, 141 op/s
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.188 256736 DEBUG nova.network.neutron [-] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.220 256736 INFO nova.compute.manager [-] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Took 0.90 seconds to deallocate network for instance.
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.274 256736 DEBUG nova.compute.manager [req-34b61697-df9f-4c2a-a548-2d358eac92ee req-5f40f4e7-3d19-427d-92d2-b28d112701c3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-vif-deleted-94ee6a33-75bb-43b8-952b-43a160169df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.397 256736 INFO nova.compute.manager [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Took 0.18 seconds to detach 1 volumes for instance.
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.448 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.448 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.489 256736 DEBUG oslo_concurrency.processutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:04:45 compute-0 sshd-session[293050]: Connection closed by authenticating user root 143.14.121.41 port 44884 [preauth]
Nov 29 08:04:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Nov 29 08:04:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 08:04:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Nov 29 08:04:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:04:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513634453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.912 256736 DEBUG oslo_concurrency.processutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.922 256736 DEBUG nova.compute.provider_tree [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.942 256736 DEBUG nova.scheduler.client.report [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:04:45 compute-0 nova_compute[256729]: 2025-11-29 08:04:45.988 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.009 256736 INFO nova.scheduler.client.report [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Deleted allocations for instance 7b834f92-a941-48d4-830a-98e70067cabb
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.076 256736 DEBUG oslo_concurrency.lockutils [None req-bbf766f9-3433-480b-bae5-8ed80cfde762 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.339 256736 DEBUG nova.compute.manager [req-747f5541-fa72-488c-b537-6615cb65ad99 req-9767f17c-8a91-4df9-bda5-95f2f0f44881 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received event network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.340 256736 DEBUG oslo_concurrency.lockutils [req-747f5541-fa72-488c-b537-6615cb65ad99 req-9767f17c-8a91-4df9-bda5-95f2f0f44881 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b834f92-a941-48d4-830a-98e70067cabb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.340 256736 DEBUG oslo_concurrency.lockutils [req-747f5541-fa72-488c-b537-6615cb65ad99 req-9767f17c-8a91-4df9-bda5-95f2f0f44881 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.340 256736 DEBUG oslo_concurrency.lockutils [req-747f5541-fa72-488c-b537-6615cb65ad99 req-9767f17c-8a91-4df9-bda5-95f2f0f44881 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b834f92-a941-48d4-830a-98e70067cabb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.341 256736 DEBUG nova.compute.manager [req-747f5541-fa72-488c-b537-6615cb65ad99 req-9767f17c-8a91-4df9-bda5-95f2f0f44881 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] No waiting events found dispatching network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:04:46 compute-0 nova_compute[256729]: 2025-11-29 08:04:46.341 256736 WARNING nova.compute.manager [req-747f5541-fa72-488c-b537-6615cb65ad99 req-9767f17c-8a91-4df9-bda5-95f2f0f44881 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Received unexpected event network-vif-plugged-94ee6a33-75bb-43b8-952b-43a160169df4 for instance with vm_state deleted and task_state None.
Nov 29 08:04:46 compute-0 ceph-mon[75050]: pgmap v1951: 305 pgs: 305 active+clean; 270 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 4.3 KiB/s wr, 141 op/s
Nov 29 08:04:46 compute-0 ceph-mon[75050]: osdmap e382: 3 total, 3 up, 3 in
Nov 29 08:04:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1513634453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:04:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Nov 29 08:04:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 08:04:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Nov 29 08:04:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 270 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 7.5 KiB/s wr, 274 op/s
Nov 29 08:04:47 compute-0 nova_compute[256729]: 2025-11-29 08:04:47.551 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:47 compute-0 ceph-mon[75050]: osdmap e383: 3 total, 3 up, 3 in
Nov 29 08:04:47 compute-0 sshd-session[293169]: Connection closed by authenticating user root 143.14.121.41 port 44892 [preauth]
Nov 29 08:04:48 compute-0 nova_compute[256729]: 2025-11-29 08:04:48.230 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403473.2295237, f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:48 compute-0 nova_compute[256729]: 2025-11-29 08:04:48.230 256736 INFO nova.compute.manager [-] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] VM Stopped (Lifecycle Event)
Nov 29 08:04:48 compute-0 nova_compute[256729]: 2025-11-29 08:04:48.268 256736 DEBUG nova.compute.manager [None req-e10f98cc-a743-4776-bf73-063e7a9833ef - - - - - -] [instance: f09ec4d5-69ed-4bcc-9fda-1b9d4e3bc11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Nov 29 08:04:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 08:04:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Nov 29 08:04:48 compute-0 ceph-mon[75050]: pgmap v1954: 305 pgs: 305 active+clean; 270 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 688 KiB/s rd, 7.5 KiB/s wr, 274 op/s
Nov 29 08:04:49 compute-0 nova_compute[256729]: 2025-11-29 08:04:49.030 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 270 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 8.3 KiB/s wr, 263 op/s
Nov 29 08:04:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300103125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300103125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Nov 29 08:04:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 08:04:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Nov 29 08:04:49 compute-0 ceph-mon[75050]: osdmap e384: 3 total, 3 up, 3 in
Nov 29 08:04:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1300103125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1300103125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:49 compute-0 ceph-mon[75050]: osdmap e385: 3 total, 3 up, 3 in
Nov 29 08:04:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:04:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4209599821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:04:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4209599821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:50 compute-0 ceph-mon[75050]: pgmap v1956: 305 pgs: 305 active+clean; 270 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 738 KiB/s rd, 8.3 KiB/s wr, 263 op/s
Nov 29 08:04:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4209599821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:04:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4209599821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:04:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 270 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 637 KiB/s rd, 7.1 KiB/s wr, 227 op/s
Nov 29 08:04:51 compute-0 ceph-mon[75050]: pgmap v1958: 305 pgs: 305 active+clean; 270 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 637 KiB/s rd, 7.1 KiB/s wr, 227 op/s
Nov 29 08:04:52 compute-0 nova_compute[256729]: 2025-11-29 08:04:52.578 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:52 compute-0 sshd-session[293173]: Connection closed by authenticating user root 143.14.121.41 port 44902 [preauth]
Nov 29 08:04:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 155 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 5.7 KiB/s wr, 158 op/s
Nov 29 08:04:53 compute-0 nova_compute[256729]: 2025-11-29 08:04:53.912 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403478.9104233, a3133710-8c54-433d-9263-c081a69bf339 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:53 compute-0 nova_compute[256729]: 2025-11-29 08:04:53.912 256736 INFO nova.compute.manager [-] [instance: a3133710-8c54-433d-9263-c081a69bf339] VM Stopped (Lifecycle Event)
Nov 29 08:04:54 compute-0 nova_compute[256729]: 2025-11-29 08:04:54.031 256736 DEBUG nova.compute.manager [None req-ae364189-bf14-45f2-86aa-49ea3d5083ce - - - - - -] [instance: a3133710-8c54-433d-9263-c081a69bf339] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:54 compute-0 nova_compute[256729]: 2025-11-29 08:04:54.033 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:54 compute-0 ceph-mon[75050]: pgmap v1959: 305 pgs: 305 active+clean; 155 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 5.7 KiB/s wr, 158 op/s
Nov 29 08:04:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Nov 29 08:04:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 08:04:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Nov 29 08:04:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 KiB/s wr, 89 op/s
Nov 29 08:04:55 compute-0 ceph-mon[75050]: osdmap e386: 3 total, 3 up, 3 in
Nov 29 08:04:56 compute-0 nova_compute[256729]: 2025-11-29 08:04:56.154 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:56 compute-0 ceph-mon[75050]: pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 KiB/s wr, 89 op/s
Nov 29 08:04:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:04:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2378921486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 88 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 KiB/s wr, 79 op/s
Nov 29 08:04:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Nov 29 08:04:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2378921486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:04:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 08:04:57 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Nov 29 08:04:57 compute-0 nova_compute[256729]: 2025-11-29 08:04:57.581 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:58 compute-0 nova_compute[256729]: 2025-11-29 08:04:58.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:04:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Nov 29 08:04:58 compute-0 ceph-mon[75050]: pgmap v1962: 305 pgs: 305 active+clean; 88 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 KiB/s wr, 79 op/s
Nov 29 08:04:58 compute-0 ceph-mon[75050]: osdmap e387: 3 total, 3 up, 3 in
Nov 29 08:04:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 08:04:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Nov 29 08:04:58 compute-0 nova_compute[256729]: 2025-11-29 08:04:58.950 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403483.9484413, 7b834f92-a941-48d4-830a-98e70067cabb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:04:58 compute-0 nova_compute[256729]: 2025-11-29 08:04:58.950 256736 INFO nova.compute.manager [-] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] VM Stopped (Lifecycle Event)
Nov 29 08:04:58 compute-0 nova_compute[256729]: 2025-11-29 08:04:58.972 256736 DEBUG nova.compute.manager [None req-eabdb87c-7d6b-444e-b7a9-342c6aa1c312 - - - - - -] [instance: 7b834f92-a941-48d4-830a-98e70067cabb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:04:59 compute-0 nova_compute[256729]: 2025-11-29 08:04:59.036 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:04:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 108 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Nov 29 08:04:59 compute-0 sshd-session[293175]: Connection closed by authenticating user root 143.14.121.41 port 55202 [preauth]
Nov 29 08:04:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:04:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Nov 29 08:04:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 08:04:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Nov 29 08:04:59 compute-0 ceph-mon[75050]: osdmap e388: 3 total, 3 up, 3 in
Nov 29 08:04:59 compute-0 ceph-mon[75050]: osdmap e389: 3 total, 3 up, 3 in
Nov 29 08:04:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:59.783 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:04:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:59.784 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:04:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:04:59.784 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:00 compute-0 nova_compute[256729]: 2025-11-29 08:05:00.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:00 compute-0 ceph-mon[75050]: pgmap v1965: 305 pgs: 305 active+clean; 108 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Nov 29 08:05:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 108 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.4 MiB/s wr, 69 op/s
Nov 29 08:05:01 compute-0 nova_compute[256729]: 2025-11-29 08:05:01.482 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:01 compute-0 nova_compute[256729]: 2025-11-29 08:05:01.482 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:01 compute-0 nova_compute[256729]: 2025-11-29 08:05:01.506 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:05:01 compute-0 nova_compute[256729]: 2025-11-29 08:05:01.636 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:01 compute-0 nova_compute[256729]: 2025-11-29 08:05:01.637 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:01 compute-0 nova_compute[256729]: 2025-11-29 08:05:01.647 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:05:01 compute-0 nova_compute[256729]: 2025-11-29 08:05:01.647 256736 INFO nova.compute.claims [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:05:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2488992702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.007 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.177 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906679978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.434 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.440 256736 DEBUG nova.compute.provider_tree [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.455 256736 DEBUG nova.scheduler.client.report [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.473 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.474 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.476 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.477 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.477 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.477 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.548 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.549 256736 DEBUG nova.network.neutron [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.569 256736 INFO nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.582 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.590 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.660 256736 INFO nova.virt.block_device [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Booting with volume 1614a07d-d62c-4dae-8875-9c623d26ae7c at /dev/vda
Nov 29 08:05:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Nov 29 08:05:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 08:05:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Nov 29 08:05:02 compute-0 ceph-mon[75050]: pgmap v1967: 305 pgs: 305 active+clean; 108 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.4 MiB/s wr, 69 op/s
Nov 29 08:05:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2488992702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/906679978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.749 256736 DEBUG nova.policy [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9664e420085d412aae898a6ec021b24f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb6854e99614af5b8df420841fde0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.818 256736 DEBUG os_brick.utils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.820 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.832 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.833 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bcf1c8-4d9e-41c0-983e-a4bd5f7150e7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.834 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.848 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.849 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[30f8c8b8-4469-4736-bcd7-f75f24feac68]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.851 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.861 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.861 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9b9567-1ffb-4fe6-9002-058f88584ca4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.863 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3e8513-4c72-49d5-b985-49a4135eabcd]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.863 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.896 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.899 256736 DEBUG os_brick.initiator.connectors.lightos [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.899 256736 DEBUG os_brick.initiator.connectors.lightos [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.899 256736 DEBUG os_brick.initiator.connectors.lightos [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.900 256736 DEBUG os_brick.utils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.900 256736 DEBUG nova.virt.block_device [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Updating existing volume attachment record: 293a23a9-d4a3-4642-929f-93775371d183 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:05:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441449037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:02 compute-0 nova_compute[256729]: 2025-11-29 08:05:02.931 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.108 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.110 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4427MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.110 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.110 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.185 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 7b518c20-fd37-4e46-af6a-11524b767485 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.186 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.186 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.227 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/499457154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.560 256736 DEBUG nova.network.neutron [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Successfully created port: 2ce38729-f90c-40bf-aeca-bbe09b973bbb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:05:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171550628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.653 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.660 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.679 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Nov 29 08:05:03 compute-0 ceph-mon[75050]: osdmap e390: 3 total, 3 up, 3 in
Nov 29 08:05:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1441449037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/499457154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/171550628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 08:05:03 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.722 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.723 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.975 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.978 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.979 256736 INFO nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Creating image(s)
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.980 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.981 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Ensure instance console log exists: /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.982 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.982 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:03 compute-0 nova_compute[256729]: 2025-11-29 08:05:03.983 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.037 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.400 256736 DEBUG nova.network.neutron [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Successfully updated port: 2ce38729-f90c-40bf-aeca-bbe09b973bbb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.424 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.424 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquired lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.424 256736 DEBUG nova.network.neutron [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:05:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.513 256736 DEBUG nova.compute.manager [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-changed-2ce38729-f90c-40bf-aeca-bbe09b973bbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.514 256736 DEBUG nova.compute.manager [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Refreshing instance network info cache due to event network-changed-2ce38729-f90c-40bf-aeca-bbe09b973bbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.514 256736 DEBUG oslo_concurrency.lockutils [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.604 256736 DEBUG nova.network.neutron [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:05:04 compute-0 ceph-mon[75050]: pgmap v1969: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Nov 29 08:05:04 compute-0 ceph-mon[75050]: osdmap e391: 3 total, 3 up, 3 in
Nov 29 08:05:04 compute-0 nova_compute[256729]: 2025-11-29 08:05:04.725 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:04 compute-0 sshd-session[293178]: Connection closed by authenticating user root 143.14.121.41 port 55208 [preauth]
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 54 op/s
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.167 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.168 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.169 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.169 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.170 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.253 256736 DEBUG nova.network.neutron [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Updating instance_info_cache with network_info: [{"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.276 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Releasing lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.277 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Instance network_info: |[{"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
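[editor's sketch] The network_info blob Nova logs here is plain JSON, so extracting the fixed IP and MTU takes only a few lines of standard-library code. A sketch against the exact field names shown above, with the literal trimmed to the parts being read:

    import json

    # Stand-in for the network_info logged above, reduced to the fields used.
    network_info_text = '''[{"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb",
      "network": {"meta": {"mtu": 1442},
                  "subnets": [{"ips": [{"address": "10.100.0.4"}]}]}}]'''

    for vif in json.loads(network_info_text):
        mtu = vif["network"]["meta"]["mtu"]
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"], "mtu", mtu)  # 10.100.0.4 mtu 1442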
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.278 256736 DEBUG oslo_concurrency.lockutils [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.278 256736 DEBUG nova.network.neutron [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Refreshing network info cache for port 2ce38729-f90c-40bf-aeca-bbe09b973bbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.284 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Start _get_guest_xml network_info=[{"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1614a07d-d62c-4dae-8875-9c623d26ae7c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1614a07d-d62c-4dae-8875-9c623d26ae7c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '7b518c20-fd37-4e46-af6a-11524b767485', 'attached_at': '', 'detached_at': '', 'volume_id': '1614a07d-d62c-4dae-8875-9c623d26ae7c', 'serial': '1614a07d-d62c-4dae-8875-9c623d26ae7c'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '293a23a9-d4a3-4642-929f-93775371d183', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.291 256736 WARNING nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.297 256736 DEBUG nova.virt.libvirt.host [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.298 256736 DEBUG nova.virt.libvirt.host [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.309 256736 DEBUG nova.virt.libvirt.host [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.310 256736 DEBUG nova.virt.libvirt.host [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.311 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.312 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.313 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.313 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.314 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.314 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.315 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.316 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.316 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.317 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.317 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.318 256736 DEBUG nova.virt.hardware [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
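[editor's sketch] The hardware.py lines above enumerate every sockets:cores:threads split that can represent the flavor's vCPU count within the 65536 per-dimension limits, then sort by preference; with one vCPU the only candidate is 1:1:1. A toy version of that enumeration (a simplification of what _get_possible_cpu_topologies does, not Nova's actual code):

    # All (sockets, cores, threads) triples whose product equals the vCPU count,
    # capped by the per-dimension limits the log reports (65536 each by default).
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_s) + 1)
                for c in range(1, min(vcpus, max_c) + 1)
                for t in range(1, min(vcpus, max_t) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"
    print(possible_topologies(4))  # (1, 1, 4), (1, 2, 2), (2, 2, 1), ...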
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.355 256736 DEBUG nova.storage.rbd_utils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image 7b518c20-fd37-4e46-af6a-11524b767485_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.359 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:05:05
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', 'default.rgw.log', 'backups', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 29 08:05:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:05:05 compute-0 podman[293294]: 2025-11-29 08:05:05.691942992 +0000 UTC m=+0.057270419 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:05:05 compute-0 podman[293293]: 2025-11-29 08:05:05.706956024 +0000 UTC m=+0.068095670 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:05:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Nov 29 08:05:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 08:05:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Nov 29 08:05:05 compute-0 podman[293292]: 2025-11-29 08:05:05.734303723 +0000 UTC m=+0.097705759 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:05:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644520742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.820 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
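[editor's sketch] The "Running cmd"/"returned: 0 in 0.460s" pair above comes from oslo.concurrency's processutils, which logs the argv before forking and the exit code plus wall time afterwards. The same call, sketched with the exact command from the log:

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on a non-zero
    # exit, and emits the same DEBUG lines seen above.
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')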
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.844 256736 DEBUG nova.virt.libvirt.vif [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-205175967',display_name='tempest-TestVolumeBootPattern-server-205175967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-205175967',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-ez9i1qjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:02Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=7b518c20-fd37-4e46-af6a-11524b767485,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.845 256736 DEBUG nova.network.os_vif_util [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.846 256736 DEBUG nova.network.os_vif_util [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:43:d9,bridge_name='br-int',has_traffic_filtering=True,id=2ce38729-f90c-40bf-aeca-bbe09b973bbb,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce38729-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.848 256736 DEBUG nova.objects.instance [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b518c20-fd37-4e46-af6a-11524b767485 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.864 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <uuid>7b518c20-fd37-4e46-af6a-11524b767485</uuid>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <name>instance-00000015</name>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBootPattern-server-205175967</nova:name>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:05:05</nova:creationTime>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:user uuid="9664e420085d412aae898a6ec021b24f">tempest-TestVolumeBootPattern-776329285-project-member</nova:user>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:project uuid="dfb6854e99614af5b8df420841fde0db">tempest-TestVolumeBootPattern-776329285</nova:project>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <nova:port uuid="2ce38729-f90c-40bf-aeca-bbe09b973bbb">
Nov 29 08:05:05 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <system>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <entry name="serial">7b518c20-fd37-4e46-af6a-11524b767485</entry>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <entry name="uuid">7b518c20-fd37-4e46-af6a-11524b767485</entry>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </system>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <os>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   </os>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <features>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   </features>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/7b518c20-fd37-4e46-af6a-11524b767485_disk.config">
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       </source>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-1614a07d-d62c-4dae-8875-9c623d26ae7c">
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       </source>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:05:05 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <serial>1614a07d-d62c-4dae-8875-9c623d26ae7c</serial>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:6e:43:d9"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <target dev="tap2ce38729-f9"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/console.log" append="off"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <video>
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </video>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:05:05 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:05:05 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:05:05 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:05:05 compute-0 nova_compute[256729]: </domain>
Nov 29 08:05:05 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
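[editor's sketch] The domain XML dumped above is easy to inspect programmatically; a short standard-library sketch that lists the RBD disks and the tap interface, with xml_text reduced to the relevant elements of the dump:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the <domain> logged by _get_guest_xml above.
    xml_text = """<domain type="kvm"><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="volumes/volume-1614a07d-d62c-4dae-8875-9c623d26ae7c"/>
        <target dev="vda" bus="virtio"/>
      </disk>
      <interface type="ethernet">
        <mac address="fa:16:3e:6e:43:d9"/>
        <target dev="tap2ce38729-f9"/>
      </interface>
    </devices></domain>"""

    dom = ET.fromstring(xml_text)
    for disk in dom.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        print(disk.get('device'), src.get('protocol'), src.get('name'), '->', tgt.get('dev'))
    for iface in dom.findall('./devices/interface'):
        print('nic', iface.find('mac').get('address'), '->', iface.find('target').get('dev'))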
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.865 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Preparing to wait for external event network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.866 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.866 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.866 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.867 256736 DEBUG nova.virt.libvirt.vif [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-205175967',display_name='tempest-TestVolumeBootPattern-server-205175967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-205175967',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-ez9i1qjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:02Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=7b518c20-fd37-4e46-af6a-11524b767485,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.867 256736 DEBUG nova.network.os_vif_util [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.867 256736 DEBUG nova.network.os_vif_util [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:43:d9,bridge_name='br-int',has_traffic_filtering=True,id=2ce38729-f90c-40bf-aeca-bbe09b973bbb,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce38729-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.868 256736 DEBUG os_vif [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:43:d9,bridge_name='br-int',has_traffic_filtering=True,id=2ce38729-f90c-40bf-aeca-bbe09b973bbb,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce38729-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.868 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.868 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.869 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.873 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.873 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ce38729-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.874 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ce38729-f9, col_values=(('external_ids', {'iface-id': '2ce38729-f90c-40bf-aeca-bbe09b973bbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:43:d9', 'vm-uuid': '7b518c20-fd37-4e46-af6a-11524b767485'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
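[editor's sketch] The two IDL commands in that transaction (AddPortCommand, then DbSetCommand on the Interface row) amount to the same change a single ovs-vsctl call would make. A sketch of that equivalence using the values from the log; this is for illustration, not something to run by hand on a live compute node:

    import subprocess

    # Equivalent of the logged ovsdbapp transaction: create the port if needed,
    # then set the external_ids os-vif uses to hand the port over to OVN.
    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap2ce38729-f9',
        '--', 'set', 'Interface', 'tap2ce38729-f9',
        'external_ids:iface-id=2ce38729-f90c-40bf-aeca-bbe09b973bbb',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:6e:43:d9',
        'external_ids:vm-uuid=7b518c20-fd37-4e46-af6a-11524b767485',
    ], check=True)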
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.876 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:05 compute-0 NetworkManager[48962]: <info>  [1764403505.8772] manager: (tap2ce38729-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.878 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.886 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.888 256736 INFO os_vif [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:43:d9,bridge_name='br-int',has_traffic_filtering=True,id=2ce38729-f90c-40bf-aeca-bbe09b973bbb,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce38729-f9')
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.948 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.949 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.949 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No VIF found with MAC fa:16:3e:6e:43:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.949 256736 INFO nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Using config drive
Nov 29 08:05:05 compute-0 nova_compute[256729]: 2025-11-29 08:05:05.975 256736 DEBUG nova.storage.rbd_utils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image 7b518c20-fd37-4e46-af6a-11524b767485_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/62588294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/62588294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.344 256736 INFO nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Creating config drive at /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/disk.config
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.352 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplroepoj3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.502 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplroepoj3" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.532 256736 DEBUG nova.storage.rbd_utils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image 7b518c20-fd37-4e46-af6a-11524b767485_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.537 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/disk.config 7b518c20-fd37-4e46-af6a-11524b767485_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.721 256736 DEBUG oslo_concurrency.processutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/disk.config 7b518c20-fd37-4e46-af6a-11524b767485_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.722 256736 INFO nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Deleting local config drive /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485/disk.config because it was imported into RBD.
Nov 29 08:05:06 compute-0 ceph-mon[75050]: pgmap v1971: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 54 op/s
Nov 29 08:05:06 compute-0 ceph-mon[75050]: osdmap e392: 3 total, 3 up, 3 in
Nov 29 08:05:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1644520742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/62588294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/62588294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:06 compute-0 kernel: tap2ce38729-f9: entered promiscuous mode
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.791 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:06 compute-0 ovn_controller[153383]: 2025-11-29T08:05:06Z|00205|binding|INFO|Claiming lport 2ce38729-f90c-40bf-aeca-bbe09b973bbb for this chassis.
Nov 29 08:05:06 compute-0 ovn_controller[153383]: 2025-11-29T08:05:06Z|00206|binding|INFO|2ce38729-f90c-40bf-aeca-bbe09b973bbb: Claiming fa:16:3e:6e:43:d9 10.100.0.4
Nov 29 08:05:06 compute-0 NetworkManager[48962]: <info>  [1764403506.7933] manager: (tap2ce38729-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.802 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:43:d9 10.100.0.4'], port_security=['fa:16:3e:6e:43:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7b518c20-fd37-4e46-af6a-11524b767485', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '284fde66-e9d8-4738-b856-2e805436581e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=2ce38729-f90c-40bf-aeca-bbe09b973bbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.804 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce38729-f90c-40bf-aeca-bbe09b973bbb in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 bound to our chassis
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.807 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:05:06 compute-0 systemd-udevd[293431]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:06 compute-0 ovn_controller[153383]: 2025-11-29T08:05:06Z|00207|binding|INFO|Setting lport 2ce38729-f90c-40bf-aeca-bbe09b973bbb ovn-installed in OVS
Nov 29 08:05:06 compute-0 ovn_controller[153383]: 2025-11-29T08:05:06Z|00208|binding|INFO|Setting lport 2ce38729-f90c-40bf-aeca-bbe09b973bbb up in Southbound
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.827 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4e5467-560c-410a-b884-e1049c510cc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.828 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d9c390c-31 in ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.829 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.831 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d9c390c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.832 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8e89afe1-1a8d-49dd-95c0-d88f7c28524d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.833 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a6757885-193d-434d-8498-311b4d070069]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 nova_compute[256729]: 2025-11-29 08:05:06.835 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:06 compute-0 systemd-machined[217781]: New machine qemu-21-instance-00000015.
Nov 29 08:05:06 compute-0 NetworkManager[48962]: <info>  [1764403506.8590] device (tap2ce38729-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:05:06 compute-0 NetworkManager[48962]: <info>  [1764403506.8600] device (tap2ce38729-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.860 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[4812cc45-a646-42f8-ae7e-c5b93d7dbd40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.886 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[da68b9d5-6b12-4432-8ac4-eeb0a02bef40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.929 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[965727b9-c08f-4a97-b46e-dbd073f114c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.937 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ff10bb8d-d605-4861-9cc9-a68177fe6fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 NetworkManager[48962]: <info>  [1764403506.9383] manager: (tap2d9c390c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 08:05:06 compute-0 systemd-udevd[293435]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.982 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[c362d1d6-8f35-4b4c-848e-31245bbbfd98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:06 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:06.985 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[fab51f96-238e-4979-881f-ffa17c2f2a63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 NetworkManager[48962]: <info>  [1764403507.0161] device (tap2d9c390c-30): carrier: link connected
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.025 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[c401b47d-2b8d-4f09-a6d5-a2e7336259e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.056 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a536fb3b-c306-404c-aff2-78c5ab8ecfed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582208, 'reachable_time': 32027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293464, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 118 op/s
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:05:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.089 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[afbf9cfb-ffe8-42d5-af88-612167c04c07]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:2407'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582208, 'tstamp': 582208}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293465, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.111 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e951ff7b-ce83-4c26-a3fa-45472bd1c5b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582208, 'reachable_time': 32027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293466, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.162 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7d491a3c-afb0-407a-8868-bbc3647d9cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.248 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[127d687b-dc47-4ad5-918f-d881d25086e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.250 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.250 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.251 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.253 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:07 compute-0 NetworkManager[48962]: <info>  [1764403507.2547] manager: (tap2d9c390c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 08:05:07 compute-0 kernel: tap2d9c390c-30: entered promiscuous mode
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.258 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.260 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.262 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:07 compute-0 ovn_controller[153383]: 2025-11-29T08:05:07Z|00209|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.296 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.298 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.299 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[224d8f1f-5a54-4134-9b3f-5d80df445b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.300 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:05:07 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:07.302 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'env', 'PROCESS_TAG=haproxy-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d9c390c-362a-41a5-93b0-23344eb99ae5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.425 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403507.4247856, 7b518c20-fd37-4e46-af6a-11524b767485 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.425 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] VM Started (Lifecycle Event)
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.612 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.625 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403507.42506, 7b518c20-fd37-4e46-af6a-11524b767485 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.625 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] VM Paused (Lifecycle Event)
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.627 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.662 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.667 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.671 256736 DEBUG nova.network.neutron [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Updated VIF entry in instance network info cache for port 2ce38729-f90c-40bf-aeca-bbe09b973bbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.672 256736 DEBUG nova.network.neutron [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Updating instance_info_cache with network_info: [{"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.726 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.727 256736 DEBUG oslo_concurrency.lockutils [req-79d6c9d4-68bf-4428-bab3-069bc714ab0f req-ac13fda8-341a-4934-a505-0627158e1b15 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:07 compute-0 podman[293539]: 2025-11-29 08:05:07.746987462 +0000 UTC m=+0.068692675 container create 1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:05:07 compute-0 systemd[1]: Started libpod-conmon-1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936.scope.
Nov 29 08:05:07 compute-0 podman[293539]: 2025-11-29 08:05:07.717936976 +0000 UTC m=+0.039642149 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:05:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98684a15533a0935b1144221c4023c328a983ae226062ea5ab014c41948e456e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.847 256736 DEBUG nova.compute.manager [req-f2b56a27-b1ef-4676-bf42-7edc34d15cb2 req-74350397-9d70-4e17-be71-56c564f0c1b6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.848 256736 DEBUG oslo_concurrency.lockutils [req-f2b56a27-b1ef-4676-bf42-7edc34d15cb2 req-74350397-9d70-4e17-be71-56c564f0c1b6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.848 256736 DEBUG oslo_concurrency.lockutils [req-f2b56a27-b1ef-4676-bf42-7edc34d15cb2 req-74350397-9d70-4e17-be71-56c564f0c1b6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.848 256736 DEBUG oslo_concurrency.lockutils [req-f2b56a27-b1ef-4676-bf42-7edc34d15cb2 req-74350397-9d70-4e17-be71-56c564f0c1b6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.849 256736 DEBUG nova.compute.manager [req-f2b56a27-b1ef-4676-bf42-7edc34d15cb2 req-74350397-9d70-4e17-be71-56c564f0c1b6 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Processing event network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.850 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.856 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403507.85575, 7b518c20-fd37-4e46-af6a-11524b767485 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.856 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] VM Resumed (Lifecycle Event)
Nov 29 08:05:07 compute-0 podman[293539]: 2025-11-29 08:05:07.858239962 +0000 UTC m=+0.179945175 container init 1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.859 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.864 256736 INFO nova.virt.libvirt.driver [-] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Instance spawned successfully.
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.865 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:05:07 compute-0 podman[293539]: 2025-11-29 08:05:07.869668997 +0000 UTC m=+0.191374180 container start 1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.890 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.901 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:07 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[293554]: [NOTICE]   (293558) : New worker (293560) forked
Nov 29 08:05:07 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[293554]: [NOTICE]   (293558) : Loading success.
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.907 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.908 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.909 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.910 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.910 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.911 256736 DEBUG nova.virt.libvirt.driver [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.948 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.991 256736 INFO nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Took 4.01 seconds to spawn the instance on the hypervisor.
Nov 29 08:05:07 compute-0 nova_compute[256729]: 2025-11-29 08:05:07.991 256736 DEBUG nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:08 compute-0 nova_compute[256729]: 2025-11-29 08:05:08.062 256736 INFO nova.compute.manager [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Took 6.46 seconds to build instance.
Nov 29 08:05:08 compute-0 nova_compute[256729]: 2025-11-29 08:05:08.091 256736 DEBUG oslo_concurrency.lockutils [None req-b83ad0ea-6073-4e41-b011-1927ed7175a3 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:08 compute-0 sshd-session[293357]: Connection closed by authenticating user root 143.14.121.41 port 58798 [preauth]
Nov 29 08:05:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321270939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321270939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:08 compute-0 ceph-mon[75050]: pgmap v1973: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 118 op/s
Nov 29 08:05:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3321270939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3321270939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 67 KiB/s wr, 85 op/s
Nov 29 08:05:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1429084757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1429084757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:09 compute-0 nova_compute[256729]: 2025-11-29 08:05:09.935 256736 DEBUG nova.compute.manager [req-e133076e-4d13-4ee3-9d08-f4297d474476 req-c305bafd-8b6b-4e86-b601-0fb2ad94f3d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:09 compute-0 nova_compute[256729]: 2025-11-29 08:05:09.935 256736 DEBUG oslo_concurrency.lockutils [req-e133076e-4d13-4ee3-9d08-f4297d474476 req-c305bafd-8b6b-4e86-b601-0fb2ad94f3d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:09 compute-0 nova_compute[256729]: 2025-11-29 08:05:09.936 256736 DEBUG oslo_concurrency.lockutils [req-e133076e-4d13-4ee3-9d08-f4297d474476 req-c305bafd-8b6b-4e86-b601-0fb2ad94f3d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:09 compute-0 nova_compute[256729]: 2025-11-29 08:05:09.936 256736 DEBUG oslo_concurrency.lockutils [req-e133076e-4d13-4ee3-9d08-f4297d474476 req-c305bafd-8b6b-4e86-b601-0fb2ad94f3d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:09 compute-0 nova_compute[256729]: 2025-11-29 08:05:09.937 256736 DEBUG nova.compute.manager [req-e133076e-4d13-4ee3-9d08-f4297d474476 req-c305bafd-8b6b-4e86-b601-0fb2ad94f3d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] No waiting events found dispatching network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:09 compute-0 nova_compute[256729]: 2025-11-29 08:05:09.937 256736 WARNING nova.compute.manager [req-e133076e-4d13-4ee3-9d08-f4297d474476 req-c305bafd-8b6b-4e86-b601-0fb2ad94f3d9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received unexpected event network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb for instance with vm_state active and task_state None.
Nov 29 08:05:10 compute-0 nova_compute[256729]: 2025-11-29 08:05:10.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:05:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Nov 29 08:05:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 08:05:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Nov 29 08:05:10 compute-0 ceph-mon[75050]: pgmap v1974: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 67 KiB/s wr, 85 op/s
Nov 29 08:05:10 compute-0 nova_compute[256729]: 2025-11-29 08:05:10.821 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:10.820 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:10 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:10.823 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:05:10 compute-0 nova_compute[256729]: 2025-11-29 08:05:10.877 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 58 KiB/s wr, 71 op/s
Nov 29 08:05:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Nov 29 08:05:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 08:05:11 compute-0 ceph-mon[75050]: osdmap e393: 3 total, 3 up, 3 in
Nov 29 08:05:11 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Nov 29 08:05:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:11.825 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:12 compute-0 nova_compute[256729]: 2025-11-29 08:05:12.628 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:12 compute-0 nova_compute[256729]: 2025-11-29 08:05:12.709 256736 DEBUG nova.compute.manager [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-changed-2ce38729-f90c-40bf-aeca-bbe09b973bbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:12 compute-0 nova_compute[256729]: 2025-11-29 08:05:12.710 256736 DEBUG nova.compute.manager [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Refreshing instance network info cache due to event network-changed-2ce38729-f90c-40bf-aeca-bbe09b973bbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:12 compute-0 nova_compute[256729]: 2025-11-29 08:05:12.711 256736 DEBUG oslo_concurrency.lockutils [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:12 compute-0 nova_compute[256729]: 2025-11-29 08:05:12.711 256736 DEBUG oslo_concurrency.lockutils [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:12 compute-0 nova_compute[256729]: 2025-11-29 08:05:12.711 256736 DEBUG nova.network.neutron [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Refreshing network info cache for port 2ce38729-f90c-40bf-aeca-bbe09b973bbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:12 compute-0 ceph-mon[75050]: pgmap v1976: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 58 KiB/s wr, 71 op/s
Nov 29 08:05:12 compute-0 ceph-mon[75050]: osdmap e394: 3 total, 3 up, 3 in
Nov 29 08:05:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 155 op/s
Nov 29 08:05:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Nov 29 08:05:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 08:05:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Nov 29 08:05:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987350743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1987350743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-0 nova_compute[256729]: 2025-11-29 08:05:14.400 256736 DEBUG nova.network.neutron [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Updated VIF entry in instance network info cache for port 2ce38729-f90c-40bf-aeca-bbe09b973bbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:14 compute-0 nova_compute[256729]: 2025-11-29 08:05:14.400 256736 DEBUG nova.network.neutron [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Updating instance_info_cache with network_info: [{"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:14 compute-0 nova_compute[256729]: 2025-11-29 08:05:14.441 256736 DEBUG oslo_concurrency.lockutils [req-52f99339-2731-45c7-9db1-bf90ca4a56e4 req-67ec6f69-f1ae-4ae0-8f51-9310da4cc727 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-7b518c20-fd37-4e46-af6a-11524b767485" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Nov 29 08:05:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 08:05:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Nov 29 08:05:14 compute-0 ceph-mon[75050]: pgmap v1978: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 155 op/s
Nov 29 08:05:14 compute-0 ceph-mon[75050]: osdmap e395: 3 total, 3 up, 3 in
Nov 29 08:05:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1987350743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1987350743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:14 compute-0 ceph-mon[75050]: osdmap e396: 3 total, 3 up, 3 in
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 3.2 KiB/s wr, 221 op/s
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006973357019600182 of space, bias 1.0, pg target 0.20920071058800546 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 7.630884938464543e-07 of space, bias 1.0, pg target 0.00022892654815393631 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:05:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:05:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Nov 29 08:05:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 08:05:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Nov 29 08:05:15 compute-0 sshd-session[293569]: Connection closed by authenticating user root 143.14.121.41 port 58804 [preauth]
Nov 29 08:05:15 compute-0 nova_compute[256729]: 2025-11-29 08:05:15.881 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:16 compute-0 ceph-mon[75050]: pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 3.2 KiB/s wr, 221 op/s
Nov 29 08:05:16 compute-0 ceph-mon[75050]: osdmap e397: 3 total, 3 up, 3 in
Nov 29 08:05:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.0 KiB/s wr, 150 op/s
Nov 29 08:05:17 compute-0 nova_compute[256729]: 2025-11-29 08:05:17.681 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Nov 29 08:05:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 08:05:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Nov 29 08:05:17 compute-0 ceph-mon[75050]: pgmap v1983: 305 pgs: 305 active+clean; 134 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.0 KiB/s wr, 150 op/s
Nov 29 08:05:17 compute-0 ceph-mon[75050]: osdmap e398: 3 total, 3 up, 3 in
Nov 29 08:05:18 compute-0 sshd-session[293571]: Connection closed by authenticating user root 143.14.121.41 port 35140 [preauth]
Nov 29 08:05:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 154 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 3.5 MiB/s wr, 127 op/s
Nov 29 08:05:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Nov 29 08:05:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 08:05:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Nov 29 08:05:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Nov 29 08:05:20 compute-0 ceph-mon[75050]: pgmap v1985: 305 pgs: 305 active+clean; 154 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 3.5 MiB/s wr, 127 op/s
Nov 29 08:05:20 compute-0 ceph-mon[75050]: osdmap e399: 3 total, 3 up, 3 in
Nov 29 08:05:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 08:05:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Nov 29 08:05:20 compute-0 nova_compute[256729]: 2025-11-29 08:05:20.885 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 154 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 3.5 MiB/s wr, 127 op/s
Nov 29 08:05:21 compute-0 ovn_controller[153383]: 2025-11-29T08:05:21Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:43:d9 10.100.0.4
Nov 29 08:05:21 compute-0 ovn_controller[153383]: 2025-11-29T08:05:21Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:43:d9 10.100.0.4
Nov 29 08:05:21 compute-0 ceph-mon[75050]: osdmap e400: 3 total, 3 up, 3 in
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.538 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "d165576a-73f8-49f3-874e-2fe3aba30532" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.538 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.638 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.683 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.749 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.750 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.758 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:05:22 compute-0 nova_compute[256729]: 2025-11-29 08:05:22.758 256736 INFO nova.compute.claims [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:05:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Nov 29 08:05:22 compute-0 ceph-mon[75050]: pgmap v1988: 305 pgs: 305 active+clean; 154 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 3.5 MiB/s wr, 127 op/s
Nov 29 08:05:22 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 08:05:22 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.043 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 247 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 21 MiB/s wr, 216 op/s
Nov 29 08:05:23 compute-0 sshd-session[293573]: Connection closed by authenticating user root 143.14.121.41 port 35146 [preauth]
Nov 29 08:05:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4185913650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.485 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.493 256736 DEBUG nova.compute.provider_tree [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.513 256736 DEBUG nova.scheduler.client.report [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.545 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.547 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.748 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.748 256736 DEBUG nova.network.neutron [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.790 256736 INFO nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:05:23 compute-0 ceph-mon[75050]: osdmap e401: 3 total, 3 up, 3 in
Nov 29 08:05:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4185913650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.844 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:05:23 compute-0 nova_compute[256729]: 2025-11-29 08:05:23.989 256736 INFO nova.virt.block_device [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Booting with volume ed03ba2b-50f6-4b72-8e40-ced840493c2f at /dev/vda
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.050 256736 DEBUG nova.policy [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb2de7fb67042f89a025f1a3e872530', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:05:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:05:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4005318256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:05:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4005318256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.218 256736 DEBUG os_brick.utils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.220 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.239 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.240 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[d592a377-c9c8-4051-860a-4f5af8396937]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.241 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.252 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.252 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[ee91f2d2-b2c2-4c06-9598-4c1afbbdd2fc]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.254 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.269 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.269 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[f51e1031-72a8-4430-8dd8-c9de3e00dd7f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.272 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[05b26fde-edc2-4f34-ad82-43793a7a41e9]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.274 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.312 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.315 256736 DEBUG os_brick.initiator.connectors.lightos [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.316 256736 DEBUG os_brick.initiator.connectors.lightos [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.316 256736 DEBUG os_brick.initiator.connectors.lightos [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.317 256736 DEBUG os_brick.utils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] <== get_connector_properties: return (97ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.317 256736 DEBUG nova.virt.block_device [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Updating existing volume attachment record: 5855fa34-c23e-41c1-90da-63b5db729036 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:05:24 compute-0 sudo[293606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:24 compute-0 sudo[293606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:24 compute-0 sudo[293606]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Nov 29 08:05:24 compute-0 sudo[293631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:24 compute-0 sudo[293631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:24 compute-0 sudo[293631]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:24 compute-0 sudo[293656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:24 compute-0 sudo[293656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:24 compute-0 sudo[293656]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:24 compute-0 sudo[293681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:05:24 compute-0 sudo[293681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:24 compute-0 nova_compute[256729]: 2025-11-29 08:05:24.966 256736 DEBUG nova.network.neutron [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Successfully created port: 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:05:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 08:05:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Nov 29 08:05:25 compute-0 ceph-mon[75050]: pgmap v1990: 305 pgs: 305 active+clean; 247 MiB data, 515 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 21 MiB/s wr, 216 op/s
Nov 29 08:05:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4005318256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:05:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4005318256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:05:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 281 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 22 MiB/s wr, 287 op/s
Nov 29 08:05:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303274279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:25 compute-0 sudo[293681]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:05:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:05:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:05:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:05:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:05:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1ce091c2-3813-4cbe-a5c7-ac02c2404934 does not exist
Nov 29 08:05:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev bb744b52-d0bd-4e82-9a97-3167088ccc0b does not exist
Nov 29 08:05:25 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 258837b0-4a67-4ef1-b26b-456a76d9953d does not exist
Nov 29 08:05:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:05:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:05:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:05:25 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:05:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:05:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:25 compute-0 sudo[293736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:25 compute-0 sudo[293736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:25 compute-0 sudo[293736]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:25 compute-0 sudo[293761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:25 compute-0 sudo[293761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:25 compute-0 sudo[293761]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.705 256736 DEBUG nova.network.neutron [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Successfully updated port: 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.718 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.718 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquired lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.718 256736 DEBUG nova.network.neutron [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:05:25 compute-0 sudo[293786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:25 compute-0 sudo[293786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:25 compute-0 sudo[293786]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:25 compute-0 sudo[293811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:05:25 compute-0 sudo[293811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.870 256736 DEBUG nova.compute.manager [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Received event network-changed-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.871 256736 DEBUG nova.compute.manager [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Refreshing instance network info cache due to event network-changed-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.871 256736 DEBUG oslo_concurrency.lockutils [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.889 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.936 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.937 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.938 256736 INFO nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Creating image(s)
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.938 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.938 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Ensure instance console log exists: /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.939 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.939 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.939 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:25 compute-0 nova_compute[256729]: 2025-11-29 08:05:25.963 256736 DEBUG nova.network.neutron [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:05:26 compute-0 ceph-mon[75050]: osdmap e402: 3 total, 3 up, 3 in
Nov 29 08:05:26 compute-0 ceph-mon[75050]: pgmap v1992: 305 pgs: 305 active+clean; 281 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 814 KiB/s rd, 22 MiB/s wr, 287 op/s
Nov 29 08:05:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2303274279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:05:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:05:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:05:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:05:26 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:05:26 compute-0 podman[293877]: 2025-11-29 08:05:26.171313044 +0000 UTC m=+0.052070408 container create cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:05:26 compute-0 systemd[1]: Started libpod-conmon-cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7.scope.
Nov 29 08:05:26 compute-0 podman[293877]: 2025-11-29 08:05:26.155189989 +0000 UTC m=+0.035947333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:26 compute-0 podman[293877]: 2025-11-29 08:05:26.275103238 +0000 UTC m=+0.155860582 container init cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:05:26 compute-0 podman[293877]: 2025-11-29 08:05:26.288357093 +0000 UTC m=+0.169114447 container start cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:05:26 compute-0 podman[293877]: 2025-11-29 08:05:26.292972331 +0000 UTC m=+0.173729685 container attach cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:26 compute-0 brave_wozniak[293893]: 167 167
Nov 29 08:05:26 compute-0 systemd[1]: libpod-cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7.scope: Deactivated successfully.
Nov 29 08:05:26 compute-0 conmon[293893]: conmon cce8f588489c0d97e19f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7.scope/container/memory.events
Nov 29 08:05:26 compute-0 podman[293877]: 2025-11-29 08:05:26.300055917 +0000 UTC m=+0.180813251 container died cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 08:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-66507cc4d2eb11c82ed2d568e8dc798bcf8ac44e9cf68af858a2d60272f8598c-merged.mount: Deactivated successfully.
Nov 29 08:05:26 compute-0 podman[293877]: 2025-11-29 08:05:26.350207341 +0000 UTC m=+0.230964705 container remove cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 08:05:26 compute-0 systemd[1]: libpod-conmon-cce8f588489c0d97e19f53994e3660ed1e39ba05485331f4e4eae3da9335e1d7.scope: Deactivated successfully.
Nov 29 08:05:26 compute-0 podman[293915]: 2025-11-29 08:05:26.607720377 +0000 UTC m=+0.062045153 container create 13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:05:26 compute-0 systemd[1]: Started libpod-conmon-13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae.scope.
Nov 29 08:05:26 compute-0 podman[293915]: 2025-11-29 08:05:26.576248608 +0000 UTC m=+0.030573434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91d9c7088ca178c63dcdb77c7962e66bf62e1cad40e46945b3297798a6fc46b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91d9c7088ca178c63dcdb77c7962e66bf62e1cad40e46945b3297798a6fc46b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91d9c7088ca178c63dcdb77c7962e66bf62e1cad40e46945b3297798a6fc46b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91d9c7088ca178c63dcdb77c7962e66bf62e1cad40e46945b3297798a6fc46b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91d9c7088ca178c63dcdb77c7962e66bf62e1cad40e46945b3297798a6fc46b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:26 compute-0 podman[293915]: 2025-11-29 08:05:26.719670406 +0000 UTC m=+0.173995182 container init 13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 08:05:26 compute-0 podman[293915]: 2025-11-29 08:05:26.733824397 +0000 UTC m=+0.188149163 container start 13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:26 compute-0 podman[293915]: 2025-11-29 08:05:26.738454205 +0000 UTC m=+0.192778951 container attach 13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.912 256736 DEBUG nova.network.neutron [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Updating instance_info_cache with network_info: [{"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.937 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Releasing lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.938 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Instance network_info: |[{"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.938 256736 DEBUG oslo_concurrency.lockutils [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.939 256736 DEBUG nova.network.neutron [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Refreshing network info cache for port 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.944 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Start _get_guest_xml network_info=[{"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd165576a-73f8-49f3-874e-2fe3aba30532', 'attached_at': '', 'detached_at': '', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'serial': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '5855fa34-c23e-41c1-90da-63b5db729036', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:05:26 compute-0 sshd-session[293575]: Connection closed by authenticating user root 143.14.121.41 port 41954 [preauth]
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.955 256736 WARNING nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.962 256736 DEBUG nova.virt.libvirt.host [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.963 256736 DEBUG nova.virt.libvirt.host [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.967 256736 DEBUG nova.virt.libvirt.host [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.967 256736 DEBUG nova.virt.libvirt.host [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.968 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.968 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.968 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.969 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.969 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.969 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.969 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.970 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.970 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.970 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.970 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:05:26 compute-0 nova_compute[256729]: 2025-11-29 08:05:26.970 256736 DEBUG nova.virt.hardware [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.002 256736 DEBUG nova.storage.rbd_utils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image d165576a-73f8-49f3-874e-2fe3aba30532_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.006 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 720 KiB/s rd, 19 MiB/s wr, 272 op/s
Nov 29 08:05:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715562847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.519 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.686 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.722 256736 DEBUG os_brick.encryptors [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Using volume encryption metadata '{'encryption_key_id': 'b808f10b-209c-4f93-9da8-5ab82415340d', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd165576a-73f8-49f3-874e-2fe3aba30532', 'attached_at': '', 'detached_at': '', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'serial': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.725 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.742 256736 DEBUG barbicanclient.v1.secrets [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/b808f10b-209c-4f93-9da8-5ab82415340d get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.743 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.766 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.766 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.787 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.787 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.811 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.812 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 naughty_ritchie[293931]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:05:27 compute-0 naughty_ritchie[293931]: --> relative data size: 1.0
Nov 29 08:05:27 compute-0 naughty_ritchie[293931]: --> All data devices are unavailable
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.838 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.839 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 systemd[1]: libpod-13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae.scope: Deactivated successfully.
Nov 29 08:05:27 compute-0 systemd[1]: libpod-13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae.scope: Consumed 1.066s CPU time.
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.880 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.881 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.900 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.901 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 podman[294002]: 2025-11-29 08:05:27.906361586 +0000 UTC m=+0.030620576 container died 13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.935 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.936 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.998 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:27 compute-0 nova_compute[256729]: 2025-11-29 08:05:27.999 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.018 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.019 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.044 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.044 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a91d9c7088ca178c63dcdb77c7962e66bf62e1cad40e46945b3297798a6fc46b-merged.mount: Deactivated successfully.
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.066 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.066 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.093 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.094 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.116 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.117 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.150 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.150 256736 INFO barbicanclient.base [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/b808f10b-209c-4f93-9da8-5ab82415340d
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.173 256736 DEBUG barbicanclient.client [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.174 256736 DEBUG nova.virt.libvirt.host [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <volume>ed03ba2b-50f6-4b72-8e40-ced840493c2f</volume>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:05:28 compute-0 nova_compute[256729]: </secret>
Nov 29 08:05:28 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.316 256736 DEBUG nova.network.neutron [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Updated VIF entry in instance network info cache for port 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.317 256736 DEBUG nova.network.neutron [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Updating instance_info_cache with network_info: [{"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.344 256736 DEBUG oslo_concurrency.lockutils [req-195fbdaa-9a7b-4150-9d41-eabd6b29cf27 req-0cbd8dec-26b4-4508-8717-cade487d4ef8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:28 compute-0 podman[294002]: 2025-11-29 08:05:28.47972175 +0000 UTC m=+0.603980720 container remove 13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:05:28 compute-0 systemd[1]: libpod-conmon-13247996a19f9227794faf380a9479ec20e5688173cf5cfcf4caeff0604b1bae.scope: Deactivated successfully.
Nov 29 08:05:28 compute-0 sudo[293811]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:28 compute-0 ceph-mon[75050]: pgmap v1993: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 720 KiB/s rd, 19 MiB/s wr, 272 op/s
Nov 29 08:05:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/715562847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.622 256736 DEBUG nova.virt.libvirt.vif [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1091562499',display_name='tempest-TransferEncryptedVolumeTest-server-1091562499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1091562499',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF/ioI52WgdhEBAbeM3RhpZNbUdNn18Xja5uOnO3NOZPUzsKxYrvXBByAxA/Dl5IK3nSUHQ9foFVWH8Ax4rgF1bIpX1xDfETzCAV2xOlgY9UnrjEKcSJoT+wgO+gA9frAA==',key_name='tempest-TransferEncryptedVolumeTest-1552823458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-skzb7ppx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:23Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=d165576a-73f8-49f3-874e-2fe3aba30532,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.622 256736 DEBUG nova.network.os_vif_util [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.623 256736 DEBUG nova.network.os_vif_util [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:20:e8,bridge_name='br-int',has_traffic_filtering=True,id=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0aa65aa8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.625 256736 DEBUG nova.objects.instance [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'pci_devices' on Instance uuid d165576a-73f8-49f3-874e-2fe3aba30532 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.638 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <uuid>d165576a-73f8-49f3-874e-2fe3aba30532</uuid>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <name>instance-00000016</name>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1091562499</nova:name>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:05:26</nova:creationTime>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:user uuid="2cb2de7fb67042f89a025f1a3e872530">tempest-TransferEncryptedVolumeTest-2049180676-project-member</nova:user>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:project uuid="00f4c1f7964a4e5fbe3db5be46b9676e">tempest-TransferEncryptedVolumeTest-2049180676</nova:project>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <nova:port uuid="0aa65aa8-efb0-46a2-88e7-a95ca258d9e3">
Nov 29 08:05:28 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <system>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <entry name="serial">d165576a-73f8-49f3-874e-2fe3aba30532</entry>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <entry name="uuid">d165576a-73f8-49f3-874e-2fe3aba30532</entry>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </system>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <os>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </os>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <features>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </features>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/d165576a-73f8-49f3-874e-2fe3aba30532_disk.config">
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </source>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-ed03ba2b-50f6-4b72-8e40-ced840493c2f">
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </source>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <serial>ed03ba2b-50f6-4b72-8e40-ced840493c2f</serial>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <encryption format="luks">
Nov 29 08:05:28 compute-0 nova_compute[256729]:         <secret type="passphrase" uuid="80b03e12-af85-4370-8fe0-b1e4719e5cbe"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       </encryption>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:06:20:e8"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <target dev="tap0aa65aa8-ef"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/console.log" append="off"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <video>
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </video>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:05:28 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:05:28 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:05:28 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:05:28 compute-0 nova_compute[256729]: </domain>
Nov 29 08:05:28 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.639 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Preparing to wait for external event network-vif-plugged-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.639 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.639 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.639 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.640 256736 DEBUG nova.virt.libvirt.vif [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1091562499',display_name='tempest-TransferEncryptedVolumeTest-server-1091562499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1091562499',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF/ioI52WgdhEBAbeM3RhpZNbUdNn18Xja5uOnO3NOZPUzsKxYrvXBByAxA/Dl5IK3nSUHQ9foFVWH8Ax4rgF1bIpX1xDfETzCAV2xOlgY9UnrjEKcSJoT+wgO+gA9frAA==',key_name='tempest-TransferEncryptedVolumeTest-1552823458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-skzb7ppx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:23Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=d165576a-73f8-49f3-874e-2fe3aba30532,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.640 256736 DEBUG nova.network.os_vif_util [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.641 256736 DEBUG nova.network.os_vif_util [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:20:e8,bridge_name='br-int',has_traffic_filtering=True,id=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0aa65aa8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.642 256736 DEBUG os_vif [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:20:e8,bridge_name='br-int',has_traffic_filtering=True,id=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0aa65aa8-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.642 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.642 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.643 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.646 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.646 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0aa65aa8-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.647 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0aa65aa8-ef, col_values=(('external_ids', {'iface-id': '0aa65aa8-efb0-46a2-88e7-a95ca258d9e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:20:e8', 'vm-uuid': 'd165576a-73f8-49f3-874e-2fe3aba30532'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.650 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.651 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:28 compute-0 NetworkManager[48962]: <info>  [1764403528.6519] manager: (tap0aa65aa8-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 08:05:28 compute-0 sudo[294017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.657 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.659 256736 INFO os_vif [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:20:e8,bridge_name='br-int',has_traffic_filtering=True,id=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0aa65aa8-ef')
Nov 29 08:05:28 compute-0 sudo[294017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:28 compute-0 sudo[294017]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:28 compute-0 sudo[294045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:28 compute-0 sudo[294045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:28 compute-0 sudo[294045]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:28 compute-0 sudo[294070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:28 compute-0 sudo[294070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:28 compute-0 sudo[294070]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:28 compute-0 sudo[294095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:05:28 compute-0 sudo[294095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.898 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.900 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.900 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.901 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.901 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.904 256736 INFO nova.compute.manager [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Terminating instance
Nov 29 08:05:28 compute-0 nova_compute[256729]: 2025-11-29 08:05:28.906 256736 DEBUG nova.compute.manager [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.042 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.042 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.043 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No VIF found with MAC fa:16:3e:06:20:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.043 256736 INFO nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Using config drive
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.070 256736 DEBUG nova.storage.rbd_utils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image d165576a-73f8-49f3-874e-2fe3aba30532_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 15 MiB/s wr, 224 op/s
Nov 29 08:05:29 compute-0 kernel: tap2ce38729-f9 (unregistering): left promiscuous mode
Nov 29 08:05:29 compute-0 NetworkManager[48962]: <info>  [1764403529.1001] device (tap2ce38729-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.112 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.124 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:29 compute-0 ovn_controller[153383]: 2025-11-29T08:05:29Z|00210|binding|INFO|Releasing lport 2ce38729-f90c-40bf-aeca-bbe09b973bbb from this chassis (sb_readonly=0)
Nov 29 08:05:29 compute-0 ovn_controller[153383]: 2025-11-29T08:05:29Z|00211|binding|INFO|Setting lport 2ce38729-f90c-40bf-aeca-bbe09b973bbb down in Southbound
Nov 29 08:05:29 compute-0 ovn_controller[153383]: 2025-11-29T08:05:29Z|00212|binding|INFO|Removing iface tap2ce38729-f9 ovn-installed in OVS
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.127 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:29.133 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:43:d9 10.100.0.4'], port_security=['fa:16:3e:6e:43:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7b518c20-fd37-4e46-af6a-11524b767485', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '284fde66-e9d8-4738-b856-2e805436581e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=2ce38729-f90c-40bf-aeca-bbe09b973bbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:29.136 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 2ce38729-f90c-40bf-aeca-bbe09b973bbb in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
Nov 29 08:05:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:29.138 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d9c390c-362a-41a5-93b0-23344eb99ae5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:05:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:29.142 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ff4c63-3645-4c0e-a177-572f76cbbf6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:29.143 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace which is not needed anymore
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.152 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:29 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 29 08:05:29 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 13.982s CPU time.
Nov 29 08:05:29 compute-0 systemd-machined[217781]: Machine qemu-21-instance-00000015 terminated.
Nov 29 08:05:29 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[293554]: [NOTICE]   (293558) : haproxy version is 2.8.14-c23fe91
Nov 29 08:05:29 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[293554]: [NOTICE]   (293558) : path to executable is /usr/sbin/haproxy
Nov 29 08:05:29 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[293554]: [WARNING]  (293558) : Exiting Master process...
Nov 29 08:05:29 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[293554]: [ALERT]    (293558) : Current worker (293560) exited with code 143 (Terminated)
Nov 29 08:05:29 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[293554]: [WARNING]  (293558) : All workers exited. Exiting... (0)
Nov 29 08:05:29 compute-0 systemd[1]: libpod-1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936.scope: Deactivated successfully.
Nov 29 08:05:29 compute-0 podman[294199]: 2025-11-29 08:05:29.343302923 +0000 UTC m=+0.103669562 container died 1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.348 256736 INFO nova.virt.libvirt.driver [-] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Instance destroyed successfully.
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.348 256736 DEBUG nova.objects.instance [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'resources' on Instance uuid 7b518c20-fd37-4e46-af6a-11524b767485 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:29 compute-0 podman[294196]: 2025-11-29 08:05:29.25660497 +0000 UTC m=+0.023031397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.364 256736 DEBUG nova.virt.libvirt.vif [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-205175967',display_name='tempest-TestVolumeBootPattern-server-205175967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-205175967',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-ez9i1qjk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:08Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=7b518c20-fd37-4e46-af6a-11524b767485,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.364 256736 DEBUG nova.network.os_vif_util [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "address": "fa:16:3e:6e:43:d9", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ce38729-f9", "ovs_interfaceid": "2ce38729-f90c-40bf-aeca-bbe09b973bbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.365 256736 DEBUG nova.network.os_vif_util [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:43:d9,bridge_name='br-int',has_traffic_filtering=True,id=2ce38729-f90c-40bf-aeca-bbe09b973bbb,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce38729-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.366 256736 DEBUG os_vif [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:43:d9,bridge_name='br-int',has_traffic_filtering=True,id=2ce38729-f90c-40bf-aeca-bbe09b973bbb,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce38729-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.368 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.369 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ce38729-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.372 256736 DEBUG nova.compute.manager [req-d6e250cd-08bf-4c4c-86f7-3d50535b9101 req-b9c3ae1a-a1b2-47db-b551-7414f398df48 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-vif-unplugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.373 256736 DEBUG oslo_concurrency.lockutils [req-d6e250cd-08bf-4c4c-86f7-3d50535b9101 req-b9c3ae1a-a1b2-47db-b551-7414f398df48 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.373 256736 DEBUG oslo_concurrency.lockutils [req-d6e250cd-08bf-4c4c-86f7-3d50535b9101 req-b9c3ae1a-a1b2-47db-b551-7414f398df48 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.374 256736 DEBUG oslo_concurrency.lockutils [req-d6e250cd-08bf-4c4c-86f7-3d50535b9101 req-b9c3ae1a-a1b2-47db-b551-7414f398df48 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.374 256736 DEBUG nova.compute.manager [req-d6e250cd-08bf-4c4c-86f7-3d50535b9101 req-b9c3ae1a-a1b2-47db-b551-7414f398df48 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] No waiting events found dispatching network-vif-unplugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.374 256736 DEBUG nova.compute.manager [req-d6e250cd-08bf-4c4c-86f7-3d50535b9101 req-b9c3ae1a-a1b2-47db-b551-7414f398df48 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-vif-unplugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.375 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.379 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.381 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.384 256736 INFO os_vif [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:43:d9,bridge_name='br-int',has_traffic_filtering=True,id=2ce38729-f90c-40bf-aeca-bbe09b973bbb,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ce38729-f9')
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.405 256736 INFO nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Creating config drive at /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/disk.config
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.412 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbjkf8pl4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.546 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbjkf8pl4" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:29 compute-0 podman[294196]: 2025-11-29 08:05:29.622238001 +0000 UTC m=+0.388664418 container create 126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jones, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.638 256736 DEBUG nova.storage.rbd_utils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image d165576a-73f8-49f3-874e-2fe3aba30532_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:29 compute-0 nova_compute[256729]: 2025-11-29 08:05:29.643 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/disk.config d165576a-73f8-49f3-874e-2fe3aba30532_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:29 compute-0 sshd-session[293974]: Connection closed by authenticating user root 143.14.121.41 port 41960 [preauth]
Nov 29 08:05:29 compute-0 systemd[1]: Started libpod-conmon-126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5.scope.
Nov 29 08:05:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Nov 29 08:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936-userdata-shm.mount: Deactivated successfully.
Nov 29 08:05:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 08:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-98684a15533a0935b1144221c4023c328a983ae226062ea5ab014c41948e456e-merged.mount: Deactivated successfully.
Nov 29 08:05:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Nov 29 08:05:30 compute-0 podman[294196]: 2025-11-29 08:05:30.282309297 +0000 UTC m=+1.048735804 container init 126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:05:30 compute-0 podman[294196]: 2025-11-29 08:05:30.293916217 +0000 UTC m=+1.060342624 container start 126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:30 compute-0 great_jones[294307]: 167 167
Nov 29 08:05:30 compute-0 systemd[1]: libpod-126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5.scope: Deactivated successfully.
Nov 29 08:05:30 compute-0 podman[294199]: 2025-11-29 08:05:30.314208127 +0000 UTC m=+1.074574796 container cleanup 1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:05:30 compute-0 systemd[1]: libpod-conmon-1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936.scope: Deactivated successfully.
Nov 29 08:05:30 compute-0 podman[294196]: 2025-11-29 08:05:30.533285543 +0000 UTC m=+1.299712050 container attach 126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jones, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:05:30 compute-0 podman[294196]: 2025-11-29 08:05:30.534720182 +0000 UTC m=+1.301146619 container died 126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 08:05:30 compute-0 ceph-mon[75050]: pgmap v1994: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 582 KiB/s rd, 15 MiB/s wr, 224 op/s
Nov 29 08:05:30 compute-0 ceph-mon[75050]: osdmap e403: 3 total, 3 up, 3 in
Nov 29 08:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-134dd9ab3e0f2ad3458707e292b6b2e6632420a8f066159a619610439e1d1b5d-merged.mount: Deactivated successfully.
Nov 29 08:05:30 compute-0 nova_compute[256729]: 2025-11-29 08:05:30.744 256736 DEBUG oslo_concurrency.processutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/disk.config d165576a-73f8-49f3-874e-2fe3aba30532_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:30 compute-0 nova_compute[256729]: 2025-11-29 08:05:30.746 256736 INFO nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Deleting local config drive /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532/disk.config because it was imported into RBD.
Nov 29 08:05:30 compute-0 kernel: tap0aa65aa8-ef: entered promiscuous mode
Nov 29 08:05:30 compute-0 NetworkManager[48962]: <info>  [1764403530.8193] manager: (tap0aa65aa8-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 08:05:30 compute-0 systemd-udevd[294163]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:30 compute-0 NetworkManager[48962]: <info>  [1764403530.8403] device (tap0aa65aa8-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:05:30 compute-0 NetworkManager[48962]: <info>  [1764403530.8417] device (tap0aa65aa8-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:05:30 compute-0 ovn_controller[153383]: 2025-11-29T08:05:30Z|00213|binding|INFO|Claiming lport 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 for this chassis.
Nov 29 08:05:30 compute-0 ovn_controller[153383]: 2025-11-29T08:05:30Z|00214|binding|INFO|0aa65aa8-efb0-46a2-88e7-a95ca258d9e3: Claiming fa:16:3e:06:20:e8 10.100.0.13
Nov 29 08:05:30 compute-0 nova_compute[256729]: 2025-11-29 08:05:30.873 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:30 compute-0 systemd-machined[217781]: New machine qemu-22-instance-00000016.
Nov 29 08:05:30 compute-0 ovn_controller[153383]: 2025-11-29T08:05:30Z|00215|binding|INFO|Setting lport 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 ovn-installed in OVS
Nov 29 08:05:30 compute-0 nova_compute[256729]: 2025-11-29 08:05:30.899 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:30 compute-0 nova_compute[256729]: 2025-11-29 08:05:30.901 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:30 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Nov 29 08:05:30 compute-0 ovn_controller[153383]: 2025-11-29T08:05:30Z|00216|binding|INFO|Setting lport 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 up in Southbound
Nov 29 08:05:30 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:30.975 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:20:e8 10.100.0.13'], port_security=['fa:16:3e:06:20:e8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd165576a-73f8-49f3-874e-2fe3aba30532', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '377cba5c-a444-4939-9e65-f24eadd0abbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.7 MiB/s wr, 108 op/s
Nov 29 08:05:31 compute-0 podman[294196]: 2025-11-29 08:05:31.109378481 +0000 UTC m=+1.875804928 container remove 126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_jones, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.472 256736 DEBUG nova.compute.manager [req-0b7ca80a-d1c1-4ec1-bc50-540099be09a7 req-72efe01f-158d-4621-90dc-ed1631a0d138 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Received event network-vif-plugged-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.473 256736 DEBUG oslo_concurrency.lockutils [req-0b7ca80a-d1c1-4ec1-bc50-540099be09a7 req-72efe01f-158d-4621-90dc-ed1631a0d138 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.474 256736 DEBUG oslo_concurrency.lockutils [req-0b7ca80a-d1c1-4ec1-bc50-540099be09a7 req-72efe01f-158d-4621-90dc-ed1631a0d138 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.474 256736 DEBUG oslo_concurrency.lockutils [req-0b7ca80a-d1c1-4ec1-bc50-540099be09a7 req-72efe01f-158d-4621-90dc-ed1631a0d138 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.475 256736 DEBUG nova.compute.manager [req-0b7ca80a-d1c1-4ec1-bc50-540099be09a7 req-72efe01f-158d-4621-90dc-ed1631a0d138 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Processing event network-vif-plugged-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
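The acquire/release pair above is oslo.concurrency's lockutils guarding Nova's per-instance event queue. A minimal sketch of the same pattern with the public context-manager API, using the "<instance-uuid>-events" lock-name convention visible in the log:

```python
# Sketch of the locking pattern in the lines above, via oslo.concurrency's
# public API; the body stands in for Nova's _pop_event.
from oslo_concurrency import lockutils

instance_uuid = 'd165576a-73f8-49f3-874e-2fe3aba30532'  # from the log above

with lockutils.lock('%s-events' % instance_uuid):
    # Critical section: Nova pops any waiter registered for
    # network-vif-plugged here. The log shows it held for ~1 ms.
    pass
```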
Nov 29 08:05:31 compute-0 podman[294325]: 2025-11-29 08:05:31.516368204 +0000 UTC m=+1.159083339 container remove 1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.528 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bd35e795-fda3-4b5b-b697-9b52ad79b849]: (4, ('Sat Nov 29 08:05:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936)\n1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936\nSat Nov 29 08:05:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936)\n1ebf0c2b48e689e86c089447a2f8deb52acd0f8c9b545b1d746321f10df2a936\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 systemd[1]: libpod-conmon-126a66448ffb53d40c9a2c15418e8b1d69640fc7e4d21472dbbef4b56b6e34e5.scope: Deactivated successfully.
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.532 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[274f108c-3efb-421c-b597-58af244127f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.534 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:31 compute-0 kernel: tap2d9c390c-30: left promiscuous mode
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.541 256736 DEBUG nova.compute.manager [req-fe7e8608-4bf5-47bc-8b3e-c8177e732aa1 req-d0e6dbdf-ffc5-4f28-ac7f-d4b24e7184f3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.541 256736 DEBUG oslo_concurrency.lockutils [req-fe7e8608-4bf5-47bc-8b3e-c8177e732aa1 req-d0e6dbdf-ffc5-4f28-ac7f-d4b24e7184f3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "7b518c20-fd37-4e46-af6a-11524b767485-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.541 256736 DEBUG oslo_concurrency.lockutils [req-fe7e8608-4bf5-47bc-8b3e-c8177e732aa1 req-d0e6dbdf-ffc5-4f28-ac7f-d4b24e7184f3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.542 256736 DEBUG oslo_concurrency.lockutils [req-fe7e8608-4bf5-47bc-8b3e-c8177e732aa1 req-d0e6dbdf-ffc5-4f28-ac7f-d4b24e7184f3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.542 256736 DEBUG nova.compute.manager [req-fe7e8608-4bf5-47bc-8b3e-c8177e732aa1 req-d0e6dbdf-ffc5-4f28-ac7f-d4b24e7184f3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] No waiting events found dispatching network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.542 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c5ccfa-e703-4343-9122-6010581aaf6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.542 256736 WARNING nova.compute.manager [req-fe7e8608-4bf5-47bc-8b3e-c8177e732aa1 req-d0e6dbdf-ffc5-4f28-ac7f-d4b24e7184f3 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received unexpected event network-vif-plugged-2ce38729-f90c-40bf-aeca-bbe09b973bbb for instance with vm_state active and task_state deleting.
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.543 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:31 compute-0 podman[294371]: 2025-11-29 08:05:31.557456917 +0000 UTC m=+0.280439290 container create 488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khorana, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.567 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.568 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[54056978-5639-466d-b905-4ee0d052e02e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.570 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f5016790-3b15-4da1-a0f5-539f5ae8ac32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.594 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e2903a46-1862-4bde-aac5-f4187ee3ba63]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582198, 'reachable_time': 43174, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294424, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
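The privsep reply above is a netlink RTM_NEWLINK dump for the namespace's loopback device, in the attribute-list form pyroute2 returns (neutron's privileged ip_lib wraps pyroute2). A sketch of producing the same kind of structure, assuming pyroute2 is installed:

```python
# Sketch: the IFLA_* dump above is pyroute2's representation of an
# RTM_NEWLINK message. This yields the same attribute lists for links in
# the current namespace; neutron runs the equivalent inside ovnmeta-*
# namespaces via its privsep daemon.
from pyroute2 import IPRoute

with IPRoute() as ipr:
    for link in ipr.get_links():
        print(link.get_attr('IFLA_IFNAME'),     # e.g. 'lo'
              link.get_attr('IFLA_OPERSTATE'))  # e.g. 'UNKNOWN', as above
```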
Nov 29 08:05:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d9c390c\x2d362a\x2d41a5\x2d93b0\x2d23344eb99ae5.mount: Deactivated successfully.
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.599 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.599 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[b6904870-ab7d-4648-852a-fd27a996d368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.601 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c unbound from our chassis
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.603 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:05:31 compute-0 podman[294371]: 2025-11-29 08:05:31.520879468 +0000 UTC m=+0.243861911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:31 compute-0 systemd[1]: Started libpod-conmon-488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf.scope.
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.620 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[06e00411-e9f2-4683-bed4-c1fea96b54be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.622 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45f1bbc0-c1 in ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.624 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45f1bbc0-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.624 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4a5d36-8e45-4b6c-ab7f-eeb8200d64c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.625 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3f66db-892a-4077-a32b-3ae152b8707b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.641 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[d9823cfe-3733-461a-9264-9af4d7f76887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff02e0f6ad8464ea80fa6837ebd1a393b4b66628ed194599d92caa29ef04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff02e0f6ad8464ea80fa6837ebd1a393b4b66628ed194599d92caa29ef04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff02e0f6ad8464ea80fa6837ebd1a393b4b66628ed194599d92caa29ef04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff02e0f6ad8464ea80fa6837ebd1a393b4b66628ed194599d92caa29ef04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.675 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[db017597-1ae7-4b69-bf8e-cd76fc9f06e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
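The reply payload ('net.ipv4.conf.all.promote_secondaries = 1') is consistent with the agent applying that sysctl inside the namespace it is provisioning. A sketch of the equivalent call, with the namespace name taken from this log sequence:

```python
# Sketch of the sysctl invocation whose stdout appears in the privsep
# reply above, run against the namespace being provisioned here.
import subprocess

ns = 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c'
res = subprocess.run(
    ['ip', 'netns', 'exec', ns,
     'sysctl', '-w', 'net.ipv4.conf.all.promote_secondaries=1'],
    capture_output=True, text=True, check=True)
print(res.stdout, end='')  # net.ipv4.conf.all.promote_secondaries = 1
```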
Nov 29 08:05:31 compute-0 podman[294371]: 2025-11-29 08:05:31.720186418 +0000 UTC m=+0.443168831 container init 488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khorana, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.720 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd8f7d9-8ec0-494b-a8e8-0b02e66d8b41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 podman[294371]: 2025-11-29 08:05:31.728531128 +0000 UTC m=+0.451513501 container start 488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khorana, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.729 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[51028f5f-58fc-4cbb-afb2-1a37d6dffd2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 NetworkManager[48962]: <info>  [1764403531.7328] manager: (tap45f1bbc0-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 08:05:31 compute-0 podman[294371]: 2025-11-29 08:05:31.756152331 +0000 UTC m=+0.479134764 container attach 488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khorana, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.771 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[249fd620-2f58-42a4-9ca0-97fcf3d3d9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.778 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[f9658b6c-3f4a-4998-bb3d-1260c44813b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 NetworkManager[48962]: <info>  [1764403531.8021] device (tap45f1bbc0-c0): carrier: link connected
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.812 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[f7081458-fbf2-41c4-8181-ac9a34ed881f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.831 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[aa135974-3645-4cc0-8f72-75cb76e0825f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584686, 'reachable_time': 24100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294457, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.848 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bf65a4-8dbd-4b27-bbe3-2f1c86563577]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:b9ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584686, 'tstamp': 584686}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294458, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.865 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8228ce-2415-4a75-8664-a55c1b8d4001]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584686, 'reachable_time': 24100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294459, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.900 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fd9935-7d5d-497e-9a02-97fda4587b85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.926 256736 INFO nova.virt.libvirt.driver [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Deleting instance files /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485_del
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.928 256736 INFO nova.virt.libvirt.driver [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Deletion of /var/lib/nova/instances/7b518c20-fd37-4e46-af6a-11524b767485_del complete
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.985 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc906d1-898b-41c8-a896-ee5cb3df8e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.988 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.989 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.989 256736 INFO nova.compute.manager [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Took 3.08 seconds to destroy the instance on the hypervisor.
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.990 256736 DEBUG oslo.service.loopingcall [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.990 256736 DEBUG nova.compute.manager [-] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:05:31 compute-0 nova_compute[256729]: 2025-11-29 08:05:31.990 256736 DEBUG nova.network.neutron [-] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:05:31 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:31.989 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45f1bbc0-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:32 compute-0 NetworkManager[48962]: <info>  [1764403532.0094] manager: (tap45f1bbc0-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 08:05:32 compute-0 nova_compute[256729]: 2025-11-29 08:05:32.008 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:32 compute-0 kernel: tap45f1bbc0-c0: entered promiscuous mode
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:32.017 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45f1bbc0-c0, col_values=(('external_ids', {'iface-id': '1506b576-854d-4118-b808-0e5e32d85d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
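The DbSetCommand above is what ties the agent-side veth to its OVN logical port: ovn-controller binds an OVS interface once its external_ids:iface-id matches a logical switch port. The same write expressed with the standard ovs-vsctl CLI (values copied from the log line):

```python
# Equivalent of the DbSetCommand above via ovs-vsctl: tag the OVS Interface
# with the OVN logical port id so ovn-controller can claim it.
import subprocess

subprocess.run(
    ['ovs-vsctl', 'set', 'Interface', 'tap45f1bbc0-c0',
     'external_ids:iface-id=1506b576-854d-4118-b808-0e5e32d85d28'],
    check=True)
```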
Nov 29 08:05:32 compute-0 ovn_controller[153383]: 2025-11-29T08:05:32Z|00217|binding|INFO|Releasing lport 1506b576-854d-4118-b808-0e5e32d85d28 from this chassis (sb_readonly=0)
Nov 29 08:05:32 compute-0 nova_compute[256729]: 2025-11-29 08:05:32.018 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:32 compute-0 nova_compute[256729]: 2025-11-29 08:05:32.020 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:32.022 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
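The "Unable to access ... .pid.haproxy" debug line is the expected first-provisioning case: the agent probes for an existing proxy pid file and treats a missing file as "no proxy running". A simplified sketch of that probe (the real helper is the get_value_from_file the log cites; this version is illustrative):

```python
# Sketch of the pid-file probe behind the debug line above: a missing file
# simply means no haproxy instance exists yet for this network.
def get_value_from_file(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None  # logged as "Unable to access ..." and treated as absent

pid = get_value_from_file(
    '/var/lib/neutron/external/pids/'
    '45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy')
```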
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:32.038 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[54d4fb03-4913-449e-b1ad-6ce7a79dba36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:32.039 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:05:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:32.040 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'env', 'PROCESS_TAG=haproxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
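Before a launch like the rootwrap command above, a rendered config such as the haproxy_cfg dumped earlier can be syntax-checked with haproxy's -c mode, which parses the file and exits non-zero on errors without starting the daemon. A sketch (config path copied from the command line above):

```python
# Sketch: validate the generated metadata-proxy config before launching it
# the way the rootwrap command above does.
import subprocess

conf = ('/var/lib/neutron/ovn-metadata-proxy/'
        '45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.conf')
res = subprocess.run(['haproxy', '-c', '-f', conf],
                     capture_output=True, text=True)
print(res.returncode, res.stderr.strip())
```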
Nov 29 08:05:32 compute-0 nova_compute[256729]: 2025-11-29 08:05:32.045 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:32 compute-0 clever_khorana[294428]: {
Nov 29 08:05:32 compute-0 clever_khorana[294428]:     "0": [
Nov 29 08:05:32 compute-0 clever_khorana[294428]:         {
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "devices": [
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "/dev/loop3"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             ],
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_name": "ceph_lv0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_size": "21470642176",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "name": "ceph_lv0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "tags": {
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cluster_name": "ceph",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.crush_device_class": "",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.encrypted": "0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osd_id": "0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.type": "block",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.vdo": "0"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             },
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "type": "block",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "vg_name": "ceph_vg0"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:         }
Nov 29 08:05:32 compute-0 clever_khorana[294428]:     ],
Nov 29 08:05:32 compute-0 clever_khorana[294428]:     "1": [
Nov 29 08:05:32 compute-0 clever_khorana[294428]:         {
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "devices": [
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "/dev/loop4"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             ],
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_name": "ceph_lv1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_size": "21470642176",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "name": "ceph_lv1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "tags": {
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cluster_name": "ceph",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.crush_device_class": "",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.encrypted": "0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osd_id": "1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.type": "block",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.vdo": "0"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             },
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "type": "block",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "vg_name": "ceph_vg1"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:         }
Nov 29 08:05:32 compute-0 clever_khorana[294428]:     ],
Nov 29 08:05:32 compute-0 clever_khorana[294428]:     "2": [
Nov 29 08:05:32 compute-0 clever_khorana[294428]:         {
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "devices": [
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "/dev/loop5"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             ],
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_name": "ceph_lv2",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_size": "21470642176",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "name": "ceph_lv2",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "tags": {
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.cluster_name": "ceph",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.crush_device_class": "",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.encrypted": "0",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osd_id": "2",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.type": "block",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:                 "ceph.vdo": "0"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             },
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "type": "block",
Nov 29 08:05:32 compute-0 clever_khorana[294428]:             "vg_name": "ceph_vg2"
Nov 29 08:05:32 compute-0 clever_khorana[294428]:         }
Nov 29 08:05:32 compute-0 clever_khorana[294428]:     ]
Nov 29 08:05:32 compute-0 clever_khorana[294428]: }
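The JSON the clever_khorana container printed above has the shape of ceph-volume's JSON list output: a map from OSD id to the logical volumes backing it. A sketch of consuming it, assuming the container's stdout has been captured to a file (the filename here is hypothetical):

```python
# Sketch: map OSDs to their backing LVs/devices from the ceph-volume JSON
# printed above. 'ceph_volume_list.json' is a hypothetical capture of that
# container's stdout.
import json

with open('ceph_volume_list.json') as f:
    osds = json.load(f)

for osd_id in sorted(osds, key=int):
    for lv in osds[osd_id]:
        print('osd.%s -> %s on %s' % (
            osd_id, lv['lv_path'], ', '.join(lv['devices'])))
# e.g. osd.0 -> /dev/ceph_vg0/ceph_lv0 on /dev/loop3
```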
Nov 29 08:05:32 compute-0 systemd[1]: libpod-488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf.scope: Deactivated successfully.
Nov 29 08:05:32 compute-0 podman[294371]: 2025-11-29 08:05:32.532144516 +0000 UTC m=+1.255126919 container died 488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:05:32 compute-0 podman[294491]: 2025-11-29 08:05:32.455520002 +0000 UTC m=+0.042208996 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:05:32 compute-0 podman[294491]: 2025-11-29 08:05:32.648292732 +0000 UTC m=+0.234981636 container create a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0baaff02e0f6ad8464ea80fa6837ebd1a393b4b66628ed194599d92caa29ef04-merged.mount: Deactivated successfully.
Nov 29 08:05:32 compute-0 nova_compute[256729]: 2025-11-29 08:05:32.688 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:32 compute-0 systemd[1]: Started libpod-conmon-a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4.scope.
Nov 29 08:05:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a53a2f18dd5642584eab60066f4e6b3d30a8cdd4454d9eb5fb8d5746b188d4c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:32 compute-0 ceph-mon[75050]: pgmap v1996: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.7 MiB/s wr, 108 op/s
Nov 29 08:05:32 compute-0 podman[294491]: 2025-11-29 08:05:32.810617892 +0000 UTC m=+0.397306846 container init a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:05:32 compute-0 podman[294491]: 2025-11-29 08:05:32.821534283 +0000 UTC m=+0.408223197 container start a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 08:05:32 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[294520]: [NOTICE]   (294524) : New worker (294526) forked
Nov 29 08:05:32 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[294520]: [NOTICE]   (294524) : Loading success.
Nov 29 08:05:32 compute-0 podman[294371]: 2025-11-29 08:05:32.866168655 +0000 UTC m=+1.589151028 container remove 488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khorana, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 08:05:32 compute-0 sudo[294095]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:32 compute-0 systemd[1]: libpod-conmon-488208cfb15ce6be0cffbd92ea481835b7f5e605ea8f92f7039938b188c93ddf.scope: Deactivated successfully.
Nov 29 08:05:32 compute-0 sudo[294535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:32 compute-0 sudo[294535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:32 compute-0 sudo[294535]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:33 compute-0 sudo[294560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:05:33 compute-0 sudo[294560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:33 compute-0 sudo[294560]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 47 KiB/s wr, 47 op/s
Nov 29 08:05:33 compute-0 sudo[294585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:33 compute-0 sudo[294585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:33 compute-0 sudo[294585]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.157 256736 DEBUG nova.network.neutron [-] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.189 256736 INFO nova.compute.manager [-] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Took 1.20 seconds to deallocate network for instance.
Nov 29 08:05:33 compute-0 sudo[294610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:05:33 compute-0 sudo[294610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:33 compute-0 podman[294676]: 2025-11-29 08:05:33.570439691 +0000 UTC m=+0.043884453 container create 1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.591 256736 INFO nova.compute.manager [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Took 0.40 seconds to detach 1 volumes for instance.
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.605 256736 DEBUG nova.compute.manager [req-f1da18b6-cc20-4665-b4eb-5b0aae81a4bc req-17db9559-f740-4c6e-9648-6a2dc1cf1505 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Received event network-vif-plugged-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.605 256736 DEBUG oslo_concurrency.lockutils [req-f1da18b6-cc20-4665-b4eb-5b0aae81a4bc req-17db9559-f740-4c6e-9648-6a2dc1cf1505 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.606 256736 DEBUG oslo_concurrency.lockutils [req-f1da18b6-cc20-4665-b4eb-5b0aae81a4bc req-17db9559-f740-4c6e-9648-6a2dc1cf1505 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.606 256736 DEBUG oslo_concurrency.lockutils [req-f1da18b6-cc20-4665-b4eb-5b0aae81a4bc req-17db9559-f740-4c6e-9648-6a2dc1cf1505 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.606 256736 DEBUG nova.compute.manager [req-f1da18b6-cc20-4665-b4eb-5b0aae81a4bc req-17db9559-f740-4c6e-9648-6a2dc1cf1505 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] No waiting events found dispatching network-vif-plugged-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.606 256736 WARNING nova.compute.manager [req-f1da18b6-cc20-4665-b4eb-5b0aae81a4bc req-17db9559-f740-4c6e-9648-6a2dc1cf1505 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Received unexpected event network-vif-plugged-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 for instance with vm_state building and task_state spawning.
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.607 256736 DEBUG nova.compute.manager [req-f1da18b6-cc20-4665-b4eb-5b0aae81a4bc req-17db9559-f740-4c6e-9648-6a2dc1cf1505 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Received event network-vif-deleted-2ce38729-f90c-40bf-aeca-bbe09b973bbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
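
The "Acquiring lock / acquired / released" trio above is oslo.concurrency's named-lock pattern guarding Nova's per-instance event queue. A minimal sketch of the same primitive, assuming oslo.concurrency is installed; the lock name and the pending dict are illustrative, not Nova's actual structures:

    from oslo_concurrency import lockutils

    pending = {"inst-1": {"network-vif-plugged": "event"}}

    @lockutils.synchronized("instance-events")
    def pop_instance_event(instance_id, event_name):
        # Runs with the named lock held, mirroring the acquired/released
        # pair logged by the inner() wrapper in lockutils above.
        return pending.get(instance_id, {}).pop(event_name, None)

    print(pop_instance_event("inst-1", "network-vif-plugged"))  # -> event
    print(pop_instance_event("inst-1", "network-vif-plugged"))  # -> None

When no waiter has registered for the event, Nova takes the "No waiting events found" branch and logs the WARNING seen above, since the instance is still building.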
Nov 29 08:05:33 compute-0 systemd[1]: Started libpod-conmon-1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7.scope.
Nov 29 08:05:33 compute-0 podman[294676]: 2025-11-29 08:05:33.556692761 +0000 UTC m=+0.030137543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.659 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.659 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:33 compute-0 podman[294676]: 2025-11-29 08:05:33.671814608 +0000 UTC m=+0.145259370 container init 1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:05:33 compute-0 podman[294676]: 2025-11-29 08:05:33.685244809 +0000 UTC m=+0.158689561 container start 1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chandrasekhar, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:05:33 compute-0 podman[294676]: 2025-11-29 08:05:33.688094848 +0000 UTC m=+0.161539630 container attach 1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:05:33 compute-0 blissful_chandrasekhar[294692]: 167 167
Nov 29 08:05:33 compute-0 systemd[1]: libpod-1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7.scope: Deactivated successfully.
Nov 29 08:05:33 compute-0 podman[294676]: 2025-11-29 08:05:33.690631038 +0000 UTC m=+0.164075810 container died 1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:05:33 compute-0 sshd-session[294310]: Connection closed by authenticating user root 143.14.121.41 port 41974 [preauth]
Nov 29 08:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9ab9ccafae7b8402cb53e9a89d01b9fc85023a095c604ed67fec76370307361-merged.mount: Deactivated successfully.
Nov 29 08:05:33 compute-0 podman[294676]: 2025-11-29 08:05:33.727829634 +0000 UTC m=+0.201274396 container remove 1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 08:05:33 compute-0 systemd[1]: libpod-conmon-1b2b6cc79267c898ebfdf9bc8c856769566456b726d5a8d5d8efc4fedc58b5a7.scope: Deactivated successfully.
Nov 29 08:05:33 compute-0 nova_compute[256729]: 2025-11-29 08:05:33.751 256736 DEBUG oslo_concurrency.processutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:33 compute-0 podman[294719]: 2025-11-29 08:05:33.880297662 +0000 UTC m=+0.036528549 container create 99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:05:33 compute-0 systemd[1]: Started libpod-conmon-99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0.scope.
Nov 29 08:05:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77d906ce2b8758093aa489c9d6fefcca892d96b53ef5f5867029d8dc727553a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77d906ce2b8758093aa489c9d6fefcca892d96b53ef5f5867029d8dc727553a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77d906ce2b8758093aa489c9d6fefcca892d96b53ef5f5867029d8dc727553a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77d906ce2b8758093aa489c9d6fefcca892d96b53ef5f5867029d8dc727553a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
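
The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings mean these overlay bind mounts sit on an XFS filesystem formatted without the bigtime feature, so inode timestamps top out at the signed 32-bit epoch limit. The date behind 0x7fffffff:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds, max signed 32-bit value
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00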
Nov 29 08:05:33 compute-0 podman[294719]: 2025-11-29 08:05:33.865909604 +0000 UTC m=+0.022140511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:05:33 compute-0 podman[294719]: 2025-11-29 08:05:33.972486886 +0000 UTC m=+0.128717843 container init 99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:05:33 compute-0 podman[294719]: 2025-11-29 08:05:33.988379445 +0000 UTC m=+0.144610332 container start 99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 08:05:34 compute-0 podman[294719]: 2025-11-29 08:05:34.001687412 +0000 UTC m=+0.157918339 container attach 99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.076 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.078 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403534.0757263, d165576a-73f8-49f3-874e-2fe3aba30532 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.078 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] VM Started (Lifecycle Event)
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.082 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.086 256736 INFO nova.virt.libvirt.driver [-] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Instance spawned successfully.
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.086 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.105 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.111 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.114 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.114 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.114 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.115 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.115 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.116 256736 DEBUG nova.virt.libvirt.driver [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.152 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.152 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403534.0771344, d165576a-73f8-49f3-874e-2fe3aba30532 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.152 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] VM Paused (Lifecycle Event)
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.183 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.189 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403534.0809429, d165576a-73f8-49f3-874e-2fe3aba30532 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.190 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] VM Resumed (Lifecycle Event)
Nov 29 08:05:34 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:34 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189746133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.197 256736 INFO nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Took 8.26 seconds to spawn the instance on the hypervisor.
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.198 256736 DEBUG nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.213 256736 DEBUG oslo_concurrency.processutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
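
The 0.462s round-trip above is nova's periodic pool-capacity probe against the Ceph cluster. A sketch of the same call, with subprocess standing in for oslo_concurrency.processutils.execute; it assumes the client.openstack keyring and /etc/ceph/ceph.conf are present, and the "stats" key layout matches recent Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    free = stats["total_avail_bytes"] / stats["total_bytes"]
    print(f"cluster {free:.1%} free")  # cf. the 59 GiB / 60 GiB pgmap lines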
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.219 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.225 256736 DEBUG nova.compute.provider_tree [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.230 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.270 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.273 256736 DEBUG nova.scheduler.client.report [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
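
The inventory dict reported to placement encodes the node's schedulable capacity: placement treats usable capacity as roughly (total - reserved) * allocation_ratio per resource class. Checking the figures above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 52.2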
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.307 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.315 256736 INFO nova.compute.manager [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Took 11.60 seconds to build instance.
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.338 256736 DEBUG oslo_concurrency.lockutils [None req-78b1e51d-635d-4b5e-9b3b-f5b10830353e 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.342 256736 INFO nova.scheduler.client.report [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Deleted allocations for instance 7b518c20-fd37-4e46-af6a-11524b767485
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.376 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:34 compute-0 nova_compute[256729]: 2025-11-29 08:05:34.447 256736 DEBUG oslo_concurrency.lockutils [None req-c2f44890-5b1e-405d-88ba-8906ce6cb290 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "7b518c20-fd37-4e46-af6a-11524b767485" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:34 compute-0 ceph-mon[75050]: pgmap v1997: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 47 KiB/s wr, 47 op/s
Nov 29 08:05:34 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4189746133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]: {
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "osd_id": 2,
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "type": "bluestore"
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:     },
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "osd_id": 1,
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "type": "bluestore"
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:     },
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "osd_id": 0,
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:         "type": "bluestore"
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]:     }
Nov 29 08:05:34 compute-0 serene_mcnulty[294755]: }
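
The JSON emitted by the serene_mcnulty container is the output of the "ceph-volume ... raw list --format json" run that cephadm launched at 08:05:33: a map of osd_uuid to device metadata for the three bluestore OSDs on this host. A minimal sketch that summarizes it, with one entry trimmed from the blob above:

    import json

    raw_list = json.loads("""{
      "8cd0a453-4c8d-429b-b547-2404357db43c": {
        "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
        "type": "bluestore"
      }
    }""")
    for osd in sorted(raw_list.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")
    # osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)

cephadm then stores this snapshot via the "config-key set ... host.compute-0.devices.0" mon command seen a few lines below.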
Nov 29 08:05:35 compute-0 systemd[1]: libpod-99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0.scope: Deactivated successfully.
Nov 29 08:05:35 compute-0 podman[294719]: 2025-11-29 08:05:35.015491671 +0000 UTC m=+1.171722558 container died 99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:05:35 compute-0 systemd[1]: libpod-99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0.scope: Consumed 1.016s CPU time.
Nov 29 08:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a77d906ce2b8758093aa489c9d6fefcca892d96b53ef5f5867029d8dc727553a-merged.mount: Deactivated successfully.
Nov 29 08:05:35 compute-0 podman[294719]: 2025-11-29 08:05:35.07998744 +0000 UTC m=+1.236218337 container remove 99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 53 KiB/s wr, 52 op/s
Nov 29 08:05:35 compute-0 systemd[1]: libpod-conmon-99c7830c01960666249c8d16d6fe9deabe0c0956bb5f67e2aca01b09cc385cb0.scope: Deactivated successfully.
Nov 29 08:05:35 compute-0 sudo[294610]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:05:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:05:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:05:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c2615884-2748-4bf9-8bf1-28808c14aba7 does not exist
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e43a95a3-3d1f-4a06-a20a-d5947702447c does not exist
Nov 29 08:05:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:35 compute-0 sudo[294808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:05:35 compute-0 sudo[294808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:35 compute-0 sudo[294808]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:35 compute-0 sudo[294833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:05:35 compute-0 sudo[294833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:05:35 compute-0 sudo[294833]: pam_unix(sudo:session): session closed for user root
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:05:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:05:36 compute-0 ceph-mon[75050]: pgmap v1998: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 53 KiB/s wr, 52 op/s
Nov 29 08:05:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:05:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:05:36 compute-0 sshd-session[294655]: Connection closed by authenticating user root 143.14.121.41 port 43518 [preauth]
Nov 29 08:05:36 compute-0 podman[294860]: 2025-11-29 08:05:36.706038315 +0000 UTC m=+0.059119633 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:05:36 compute-0 podman[294859]: 2025-11-29 08:05:36.716178915 +0000 UTC m=+0.071939517 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:05:36 compute-0 podman[294858]: 2025-11-29 08:05:36.733934525 +0000 UTC m=+0.102359646 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
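
The three health_status=healthy events above are podman's periodic healthchecks firing for ovn_metadata_agent, multipathd, and ovn_controller; the configured test is the /openstack/healthcheck script bind-mounted into each container. The same check can be driven by hand with standard podman subcommands, sketched here via subprocess:

    import subprocess

    # Trigger one healthcheck run, then read back the recorded status.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"],
                   check=True)
    status = subprocess.check_output(
        ["podman", "inspect", "--format",
         "{{.State.Health.Status}}", "ovn_controller"],
        text=True).strip()
    print(status)  # expected: healthy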
Nov 29 08:05:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 52 KiB/s wr, 90 op/s
Nov 29 08:05:37 compute-0 nova_compute[256729]: 2025-11-29 08:05:37.723 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:38 compute-0 ceph-mon[75050]: pgmap v1999: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 52 KiB/s wr, 90 op/s
Nov 29 08:05:38 compute-0 nova_compute[256729]: 2025-11-29 08:05:38.767 256736 DEBUG nova.compute.manager [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Received event network-changed-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:38 compute-0 nova_compute[256729]: 2025-11-29 08:05:38.767 256736 DEBUG nova.compute.manager [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Refreshing instance network info cache due to event network-changed-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:38 compute-0 nova_compute[256729]: 2025-11-29 08:05:38.767 256736 DEBUG oslo_concurrency.lockutils [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:38 compute-0 nova_compute[256729]: 2025-11-29 08:05:38.767 256736 DEBUG oslo_concurrency.lockutils [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:38 compute-0 nova_compute[256729]: 2025-11-29 08:05:38.768 256736 DEBUG nova.network.neutron [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Refreshing network info cache for port 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 40 KiB/s wr, 105 op/s
Nov 29 08:05:39 compute-0 nova_compute[256729]: 2025-11-29 08:05:39.380 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:40 compute-0 ceph-mon[75050]: pgmap v2000: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 40 KiB/s wr, 105 op/s
Nov 29 08:05:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:40 compute-0 nova_compute[256729]: 2025-11-29 08:05:40.944 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:40 compute-0 nova_compute[256729]: 2025-11-29 08:05:40.947 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:40 compute-0 nova_compute[256729]: 2025-11-29 08:05:40.974 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:05:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 37 KiB/s wr, 97 op/s
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.097 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.098 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.112 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.112 256736 INFO nova.compute.claims [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.131 256736 DEBUG nova.network.neutron [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Updated VIF entry in instance network info cache for port 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.132 256736 DEBUG nova.network.neutron [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Updating instance_info_cache with network_info: [{"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:41 compute-0 sshd-session[294918]: Connection closed by authenticating user root 143.14.121.41 port 43534 [preauth]
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.165 256736 DEBUG oslo_concurrency.lockutils [req-4d545e01-78c9-498a-9a23-34e32ac37611 req-f2adde3b-f9d5-49aa-acc8-e9a11199ef9c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-d165576a-73f8-49f3-874e-2fe3aba30532" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
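
The instance_info_cache payload written two lines above is Nova's serialized network_info model for the freshly plugged port. A minimal walk over one VIF entry, trimmed to the fields shown in the log (fixed IP, floating IP, tap device):

    import json

    vif = json.loads("""{
      "id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.13",
        "floating_ips": [{"address": "192.168.122.208"}]}]}]},
      "devname": "tap0aa65aa8-ef"
    }""")
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["devname"], ip["address"], "->", floats)
    # tap0aa65aa8-ef 10.100.0.13 -> ['192.168.122.208']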
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.294 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633439302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.691 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.699 256736 DEBUG nova.compute.provider_tree [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.805 256736 DEBUG nova.scheduler.client.report [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.945 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:41 compute-0 nova_compute[256729]: 2025-11-29 08:05:41.947 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.000 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.001 256736 DEBUG nova.network.neutron [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.158 256736 DEBUG nova.policy [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9664e420085d412aae898a6ec021b24f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb6854e99614af5b8df420841fde0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.165 256736 INFO nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.218 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:05:42 compute-0 ceph-mon[75050]: pgmap v2001: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 37 KiB/s wr, 97 op/s
Nov 29 08:05:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/633439302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.269 256736 INFO nova.virt.block_device [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Booting with volume 1614a07d-d62c-4dae-8875-9c623d26ae7c at /dev/vda
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.504 256736 DEBUG os_brick.utils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.507 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.528 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.529 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[07103284-c80d-47da-b2c6-6fca791c35fe]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.530 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.545 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.546 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[08449883-2a30-4bca-a86c-5adbada78fc8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.547 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.563 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.564 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[fa115d92-d870-44a7-bd70-198f76f45d4d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.565 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf934d4-90d0-4f59-b936-f09315d8ba98]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.566 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.599 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.601 256736 DEBUG os_brick.initiator.connectors.lightos [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.602 256736 DEBUG os_brick.initiator.connectors.lightos [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.602 256736 DEBUG os_brick.initiator.connectors.lightos [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.602 256736 DEBUG os_brick.utils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.603 256736 DEBUG nova.virt.block_device [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating existing volume attachment record: dedc88d4-c336-4da9-9291-443ac57db43d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:05:42 compute-0 nova_compute[256729]: 2025-11-29 08:05:42.724 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 88 op/s
Nov 29 08:05:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3374169918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3374169918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:43 compute-0 sshd-session[294921]: Connection closed by authenticating user root 143.14.121.41 port 43548 [preauth]
Nov 29 08:05:44 compute-0 nova_compute[256729]: 2025-11-29 08:05:44.347 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403529.3450427, 7b518c20-fd37-4e46-af6a-11524b767485 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:44 compute-0 nova_compute[256729]: 2025-11-29 08:05:44.348 256736 INFO nova.compute.manager [-] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] VM Stopped (Lifecycle Event)
Nov 29 08:05:44 compute-0 ceph-mon[75050]: pgmap v2002: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 88 op/s
Nov 29 08:05:44 compute-0 nova_compute[256729]: 2025-11-29 08:05:44.384 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 08:05:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.521 256736 DEBUG nova.compute.manager [None req-a9ea4213-9ee5-4a9e-80d4-eb9cca883d14 - - - - - -] [instance: 7b518c20-fd37-4e46-af6a-11524b767485] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.584 256736 DEBUG nova.network.neutron [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Successfully created port: 55b6aa9b-29fc-4f6b-9ae5-885c514941fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.783 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.785 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.786 256736 INFO nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Creating image(s)
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.786 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.787 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Ensure instance console log exists: /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.788 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.788 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:45 compute-0 nova_compute[256729]: 2025-11-29 08:05:45.788 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:46 compute-0 ceph-mon[75050]: pgmap v2003: 305 pgs: 305 active+clean; 281 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 08:05:46 compute-0 ovn_controller[153383]: 2025-11-29T08:05:46Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:20:e8 10.100.0.13
Nov 29 08:05:46 compute-0 ovn_controller[153383]: 2025-11-29T08:05:46Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:20:e8 10.100.0.13
Nov 29 08:05:46 compute-0 nova_compute[256729]: 2025-11-29 08:05:46.906 256736 DEBUG nova.network.neutron [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Successfully updated port: 55b6aa9b-29fc-4f6b-9ae5-885c514941fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:05:46 compute-0 nova_compute[256729]: 2025-11-29 08:05:46.956 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:46 compute-0 nova_compute[256729]: 2025-11-29 08:05:46.957 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquired lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:46 compute-0 nova_compute[256729]: 2025-11-29 08:05:46.957 256736 DEBUG nova.network.neutron [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:05:47 compute-0 nova_compute[256729]: 2025-11-29 08:05:47.075 256736 DEBUG nova.compute.manager [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-changed-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:47 compute-0 nova_compute[256729]: 2025-11-29 08:05:47.075 256736 DEBUG nova.compute.manager [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Refreshing instance network info cache due to event network-changed-55b6aa9b-29fc-4f6b-9ae5-885c514941fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:47 compute-0 nova_compute[256729]: 2025-11-29 08:05:47.076 256736 DEBUG oslo_concurrency.lockutils [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 285 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 92 op/s
Nov 29 08:05:47 compute-0 nova_compute[256729]: 2025-11-29 08:05:47.103 256736 DEBUG nova.network.neutron [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:05:47 compute-0 nova_compute[256729]: 2025-11-29 08:05:47.726 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:47 compute-0 sshd-session[294951]: Connection closed by authenticating user root 143.14.121.41 port 39248 [preauth]
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.378 256736 DEBUG nova.network.neutron [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating instance_info_cache with network_info: [{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:48 compute-0 ceph-mon[75050]: pgmap v2004: 305 pgs: 305 active+clean; 285 MiB data, 589 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 92 op/s
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.647 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Releasing lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.648 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Instance network_info: |[{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.649 256736 DEBUG oslo_concurrency.lockutils [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.649 256736 DEBUG nova.network.neutron [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Refreshing network info cache for port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.655 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Start _get_guest_xml network_info=[{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1614a07d-d62c-4dae-8875-9c623d26ae7c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1614a07d-d62c-4dae-8875-9c623d26ae7c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '10a1a099-bf1a-4195-9186-8f440437a1ce', 'attached_at': '', 'detached_at': '', 'volume_id': '1614a07d-d62c-4dae-8875-9c623d26ae7c', 'serial': '1614a07d-d62c-4dae-8875-9c623d26ae7c'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': 'dedc88d4-c336-4da9-9291-443ac57db43d', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.662 256736 WARNING nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.667 256736 DEBUG nova.virt.libvirt.host [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.668 256736 DEBUG nova.virt.libvirt.host [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.672 256736 DEBUG nova.virt.libvirt.host [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.673 256736 DEBUG nova.virt.libvirt.host [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.674 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.674 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.675 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.675 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.676 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.676 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.676 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.677 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.677 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.677 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.678 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.678 256736 DEBUG nova.virt.hardware [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.716 256736 DEBUG nova.storage.rbd_utils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image 10a1a099-bf1a-4195-9186-8f440437a1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:48 compute-0 nova_compute[256729]: 2025-11-29 08:05:48.720 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 306 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.3 MiB/s wr, 68 op/s
Nov 29 08:05:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:05:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140318556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.134 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.233 256736 DEBUG nova.virt.libvirt.vif [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-870934529',display_name='tempest-TestVolumeBootPattern-server-870934529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-870934529',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-emhf3y7m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:42Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=10a1a099-bf1a-4195-9186-8f440437a1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.234 256736 DEBUG nova.network.os_vif_util [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.236 256736 DEBUG nova.network.os_vif_util [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=55b6aa9b-29fc-4f6b-9ae5-885c514941fa,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55b6aa9b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.238 256736 DEBUG nova.objects.instance [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'pci_devices' on Instance uuid 10a1a099-bf1a-4195-9186-8f440437a1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.255 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <uuid>10a1a099-bf1a-4195-9186-8f440437a1ce</uuid>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <name>instance-00000017</name>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBootPattern-server-870934529</nova:name>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:05:48</nova:creationTime>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:user uuid="9664e420085d412aae898a6ec021b24f">tempest-TestVolumeBootPattern-776329285-project-member</nova:user>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:project uuid="dfb6854e99614af5b8df420841fde0db">tempest-TestVolumeBootPattern-776329285</nova:project>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <nova:port uuid="55b6aa9b-29fc-4f6b-9ae5-885c514941fa">
Nov 29 08:05:49 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <system>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <entry name="serial">10a1a099-bf1a-4195-9186-8f440437a1ce</entry>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <entry name="uuid">10a1a099-bf1a-4195-9186-8f440437a1ce</entry>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </system>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <os>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   </os>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <features>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   </features>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/10a1a099-bf1a-4195-9186-8f440437a1ce_disk.config">
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       </source>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-1614a07d-d62c-4dae-8875-9c623d26ae7c">
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       </source>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:05:49 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <serial>1614a07d-d62c-4dae-8875-9c623d26ae7c</serial>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:c8:3a:8c"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <target dev="tap55b6aa9b-29"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/console.log" append="off"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <video>
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </video>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:05:49 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:05:49 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:05:49 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:05:49 compute-0 nova_compute[256729]: </domain>
Nov 29 08:05:49 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.257 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Preparing to wait for external event network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.257 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.257 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.258 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.258 256736 DEBUG nova.virt.libvirt.vif [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-870934529',display_name='tempest-TestVolumeBootPattern-server-870934529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-870934529',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-emhf3y7m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:42Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=10a1a099-bf1a-4195-9186-8f440437a1ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.259 256736 DEBUG nova.network.os_vif_util [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.259 256736 DEBUG nova.network.os_vif_util [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=55b6aa9b-29fc-4f6b-9ae5-885c514941fa,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55b6aa9b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.259 256736 DEBUG os_vif [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=55b6aa9b-29fc-4f6b-9ae5-885c514941fa,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55b6aa9b-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.260 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.260 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.261 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.265 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.266 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55b6aa9b-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.266 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55b6aa9b-29, col_values=(('external_ids', {'iface-id': '55b6aa9b-29fc-4f6b-9ae5-885c514941fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:3a:8c', 'vm-uuid': '10a1a099-bf1a-4195-9186-8f440437a1ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.302 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:49 compute-0 NetworkManager[48962]: <info>  [1764403549.3042] manager: (tap55b6aa9b-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.305 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.311 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.312 256736 INFO os_vif [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=55b6aa9b-29fc-4f6b-9ae5-885c514941fa,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55b6aa9b-29')
Nov 29 08:05:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2140318556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.686 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.687 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.687 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No VIF found with MAC fa:16:3e:c8:3a:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.688 256736 INFO nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Using config drive
Nov 29 08:05:49 compute-0 nova_compute[256729]: 2025-11-29 08:05:49.724 256736 DEBUG nova.storage.rbd_utils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image 10a1a099-bf1a-4195-9186-8f440437a1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.151 256736 INFO nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Creating config drive at /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/disk.config
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.155 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbe59nne execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.299 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbe59nne" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.331 256736 DEBUG nova.storage.rbd_utils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image 10a1a099-bf1a-4195-9186-8f440437a1ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.335 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/disk.config 10a1a099-bf1a-4195-9186-8f440437a1ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:50 compute-0 ceph-mon[75050]: pgmap v2005: 305 pgs: 305 active+clean; 306 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.3 MiB/s wr, 68 op/s
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.508 256736 DEBUG oslo_concurrency.processutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/disk.config 10a1a099-bf1a-4195-9186-8f440437a1ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.509 256736 INFO nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Deleting local config drive /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce/disk.config because it was imported into RBD.
Nov 29 08:05:50 compute-0 kernel: tap55b6aa9b-29: entered promiscuous mode
Nov 29 08:05:50 compute-0 NetworkManager[48962]: <info>  [1764403550.5655] manager: (tap55b6aa9b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 08:05:50 compute-0 ovn_controller[153383]: 2025-11-29T08:05:50Z|00218|binding|INFO|Claiming lport 55b6aa9b-29fc-4f6b-9ae5-885c514941fa for this chassis.
Nov 29 08:05:50 compute-0 ovn_controller[153383]: 2025-11-29T08:05:50Z|00219|binding|INFO|55b6aa9b-29fc-4f6b-9ae5-885c514941fa: Claiming fa:16:3e:c8:3a:8c 10.100.0.10
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.567 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.577 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:3a:8c 10.100.0.10'], port_security=['fa:16:3e:c8:3a:8c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '10a1a099-bf1a-4195-9186-8f440437a1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '284fde66-e9d8-4738-b856-2e805436581e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=55b6aa9b-29fc-4f6b-9ae5-885c514941fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.579 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 bound to our chassis
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.581 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.594 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[05c5dfb8-4fc0-4eda-99ba-fe029fd930a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.595 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d9c390c-31 in ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.597 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d9c390c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.598 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab7691d-e7f5-4c7b-b29c-9273d7daccc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.598 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d756c89d-35b2-4f05-bf75-f0ee1cb4c6ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_controller[153383]: 2025-11-29T08:05:50Z|00220|binding|INFO|Setting lport 55b6aa9b-29fc-4f6b-9ae5-885c514941fa ovn-installed in OVS
Nov 29 08:05:50 compute-0 ovn_controller[153383]: 2025-11-29T08:05:50Z|00221|binding|INFO|Setting lport 55b6aa9b-29fc-4f6b-9ae5-885c514941fa up in Southbound
Nov 29 08:05:50 compute-0 systemd-machined[217781]: New machine qemu-23-instance-00000017.
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.609 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.615 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[8bed6f5a-aa94-460b-8001-9a2eb6a9b7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.617 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.630 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[af5d74c8-f145-403c-8fa9-dedc195e45de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Nov 29 08:05:50 compute-0 systemd-udevd[295074]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:50 compute-0 NetworkManager[48962]: <info>  [1764403550.6693] device (tap55b6aa9b-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:05:50 compute-0 NetworkManager[48962]: <info>  [1764403550.6721] device (tap55b6aa9b-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.674 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7282834a-abc0-4338-b3a6-4455b12afe8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.680 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[deba37ca-143a-48e9-a828-af6a4e496006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 systemd-udevd[295076]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:05:50 compute-0 NetworkManager[48962]: <info>  [1764403550.6831] manager: (tap2d9c390c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.718 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[ca27b5b9-c9aa-4bd3-a1b4-733a64ce23b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.723 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc54fd6-bfb5-4db3-8792-e945fd6ad54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 NetworkManager[48962]: <info>  [1764403550.7466] device (tap2d9c390c-30): carrier: link connected
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.754 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[6fee0bfa-0bb9-45cc-97e7-3793fa419c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.774 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[db95de3f-629c-4905-bf84-3a3589127630]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586581, 'reachable_time': 18771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295102, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.795 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[487973de-ab0a-48f4-b367-df020a5db9bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:2407'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586581, 'tstamp': 586581}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295103, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.810 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6079a3-5852-40c9-8f61-845043b93493]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586581, 'reachable_time': 18771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295104, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.850 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[60eca98d-1b49-4862-90bd-11d7cd712ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.919 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[804a3ec2-127b-4539-8d4c-6617b4b6acd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.921 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.921 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.922 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.924 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 kernel: tap2d9c390c-30: entered promiscuous mode
Nov 29 08:05:50 compute-0 NetworkManager[48962]: <info>  [1764403550.9269] manager: (tap2d9c390c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.929 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.930 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.931 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 ovn_controller[153383]: 2025-11-29T08:05:50Z|00222|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:05:50 compute-0 nova_compute[256729]: 2025-11-29 08:05:50.958 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.959 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.960 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2b40cf-a36d-43cc-85fe-782ca92d9b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.961 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/2d9c390c-362a-41a5-93b0-23344eb99ae5.pid.haproxy
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:05:50 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:50.962 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'env', 'PROCESS_TAG=haproxy-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d9c390c-362a-41a5-93b0-23344eb99ae5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:05:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 306 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 3.3 MiB/s wr, 52 op/s
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.225 256736 DEBUG nova.network.neutron [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updated VIF entry in instance network info cache for port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.226 256736 DEBUG nova.network.neutron [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating instance_info_cache with network_info: [{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.265 256736 DEBUG oslo_concurrency.lockutils [req-00f8663f-eb64-4709-ab91-16b77ac84264 req-69870b31-5811-43a0-bd83-c3d11497f27c ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.330 256736 DEBUG nova.compute.manager [req-0bdce91b-57ef-4e35-b089-d21380b55776 req-b5c1cff0-998b-4067-9721-699473f6a3a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.331 256736 DEBUG oslo_concurrency.lockutils [req-0bdce91b-57ef-4e35-b089-d21380b55776 req-b5c1cff0-998b-4067-9721-699473f6a3a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.332 256736 DEBUG oslo_concurrency.lockutils [req-0bdce91b-57ef-4e35-b089-d21380b55776 req-b5c1cff0-998b-4067-9721-699473f6a3a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.332 256736 DEBUG oslo_concurrency.lockutils [req-0bdce91b-57ef-4e35-b089-d21380b55776 req-b5c1cff0-998b-4067-9721-699473f6a3a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.333 256736 DEBUG nova.compute.manager [req-0bdce91b-57ef-4e35-b089-d21380b55776 req-b5c1cff0-998b-4067-9721-699473f6a3a9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Processing event network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.429 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:51 compute-0 podman[295154]: 2025-11-29 08:05:51.364820498 +0000 UTC m=+0.037183897 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.675 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.676 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403551.6745014, 10a1a099-bf1a-4195-9186-8f440437a1ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.677 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] VM Started (Lifecycle Event)
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.681 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.686 256736 INFO nova.virt.libvirt.driver [-] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Instance spawned successfully.
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.686 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.735 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.749 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.754 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.755 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.755 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.755 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.756 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.756 256736 DEBUG nova.virt.libvirt.driver [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:05:51 compute-0 podman[295154]: 2025-11-29 08:05:51.761295359 +0000 UTC m=+0.433658748 container create 13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.798 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.799 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403551.675567, 10a1a099-bf1a-4195-9186-8f440437a1ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.799 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] VM Paused (Lifecycle Event)
Nov 29 08:05:51 compute-0 systemd[1]: Started libpod-conmon-13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509.scope.
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.864 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.869 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403551.6808653, 10a1a099-bf1a-4195-9186-8f440437a1ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.870 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] VM Resumed (Lifecycle Event)
Nov 29 08:05:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:05:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dff588f7cb94da208f5f14329a78d99111fd289969d68ff77d484b0d9dc891/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.883 256736 INFO nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Took 6.10 seconds to spawn the instance on the hypervisor.
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.883 256736 DEBUG nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.894 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:05:51 compute-0 nova_compute[256729]: 2025-11-29 08:05:51.898 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:05:52 compute-0 podman[295154]: 2025-11-29 08:05:52.007278118 +0000 UTC m=+0.679641547 container init 13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:05:52 compute-0 nova_compute[256729]: 2025-11-29 08:05:52.007 256736 INFO nova.compute.manager [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Took 10.94 seconds to build instance.
Nov 29 08:05:52 compute-0 podman[295154]: 2025-11-29 08:05:52.014462646 +0000 UTC m=+0.686826035 container start 13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:05:52 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[295194]: [NOTICE]   (295198) : New worker (295200) forked
Nov 29 08:05:52 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[295194]: [NOTICE]   (295198) : Loading success.
Nov 29 08:05:52 compute-0 nova_compute[256729]: 2025-11-29 08:05:52.111 256736 DEBUG oslo_concurrency.lockutils [None req-80e7932d-ae94-4fad-bbc4-e7dc1badacd0 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
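The Acquiring/acquired/released triplets that bracket the build (held 11.165s above) come from oslo.concurrency's lockutils, which Nova uses to serialize operations per instance UUID and per shared resource. A minimal sketch of both usage forms (lock names taken from the log; the body is a placeholder):

    from oslo_concurrency import lockutils

    def critical_section():
        print("serialized work")  # placeholder for the guarded operation

    # Context-manager form; with oslo debug logging enabled this emits the
    # same "acquired ... waited" / "released ... held" lines seen above.
    with lockutils.lock("10a1a099-bf1a-4195-9186-8f440437a1ce"):
        critical_section()

    # Decorator form, equivalent serialization keyed by the lock name.
    @lockutils.synchronized("compute_resources")
    def update_usage():
        critical_section()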
Nov 29 08:05:52 compute-0 ceph-mon[75050]: pgmap v2006: 305 pgs: 305 active+clean; 306 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, 3.3 MiB/s wr, 52 op/s
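The recurring ceph-mon/ceph-mgr pgmap lines are the cluster heartbeat: placement-group states, raw capacity, and instantaneous client throughput. A small parser sketch written against the exact line shape seen here (the field order is an assumption based on these samples):

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    sample = ("pgmap v2006: 305 pgs: 305 active+clean; 306 MiB data, "
              "606 MiB used, 59 GiB / 60 GiB avail; 501 KiB/s rd, "
              "3.3 MiB/s wr, 52 op/s")
    match = PGMAP.search(sample)
    print(match.groupdict())
    # {'ver': '2006', 'pgs': '305', 'data': '306 MiB', 'used': '606 MiB',
    #  'avail': '59 GiB', 'total': '60 GiB'}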
Nov 29 08:05:52 compute-0 nova_compute[256729]: 2025-11-29 08:05:52.729 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 350 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 5.8 MiB/s wr, 84 op/s
Nov 29 08:05:53 compute-0 nova_compute[256729]: 2025-11-29 08:05:53.523 256736 DEBUG nova.compute.manager [req-95fe9578-b686-40b6-be04-dc44ab2bc7ad req-7344d34c-b0fa-49bd-bea3-a32dc372c933 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:53 compute-0 nova_compute[256729]: 2025-11-29 08:05:53.524 256736 DEBUG oslo_concurrency.lockutils [req-95fe9578-b686-40b6-be04-dc44ab2bc7ad req-7344d34c-b0fa-49bd-bea3-a32dc372c933 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:53 compute-0 nova_compute[256729]: 2025-11-29 08:05:53.524 256736 DEBUG oslo_concurrency.lockutils [req-95fe9578-b686-40b6-be04-dc44ab2bc7ad req-7344d34c-b0fa-49bd-bea3-a32dc372c933 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:53 compute-0 nova_compute[256729]: 2025-11-29 08:05:53.524 256736 DEBUG oslo_concurrency.lockutils [req-95fe9578-b686-40b6-be04-dc44ab2bc7ad req-7344d34c-b0fa-49bd-bea3-a32dc372c933 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:53 compute-0 nova_compute[256729]: 2025-11-29 08:05:53.524 256736 DEBUG nova.compute.manager [req-95fe9578-b686-40b6-be04-dc44ab2bc7ad req-7344d34c-b0fa-49bd-bea3-a32dc372c933 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] No waiting events found dispatching network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:05:53 compute-0 nova_compute[256729]: 2025-11-29 08:05:53.525 256736 WARNING nova.compute.manager [req-95fe9578-b686-40b6-be04-dc44ab2bc7ad req-7344d34c-b0fa-49bd-bea3-a32dc372c933 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received unexpected event network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa for instance with vm_state active and task_state None.
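The WARNING above is the tail end of Nova's wait-for-plug handshake. During spawn the compute manager registers a waiter for network-vif-plugged and Neutron's notification releases it; here a second notification arrived after the instance already reached vm_state active, so no waiter was left to pop. A toy illustration of the latch pattern (schematic only, not Nova's actual event registry):

    import threading

    class EventLatch:
        """Per-instance event registry: spawn registers a waiter, the
        external notification pops it; a pop with no registered waiter
        is the 'unexpected event' case logged above."""
        def __init__(self):
            self._waiters = {}

        def prepare(self, tag):
            self._waiters[tag] = threading.Event()
            return self._waiters[tag]

        def pop(self, tag):
            waiter = self._waiters.pop(tag, None)
            if waiter is None:
                print(f"WARNING: received unexpected event {tag}")
            else:
                waiter.set()

    latch = EventLatch()
    latch.pop("network-vif-plugged-55b6aa9b")  # late arrival -> warning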
Nov 29 08:05:53 compute-0 sshd-session[294953]: Connection closed by authenticating user root 143.14.121.41 port 39264 [preauth]
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.303 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 ceph-mon[75050]: pgmap v2007: 305 pgs: 305 active+clean; 350 MiB data, 626 MiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 5.8 MiB/s wr, 84 op/s
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.839 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "d165576a-73f8-49f3-874e-2fe3aba30532" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.840 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.841 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.841 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.842 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.843 256736 INFO nova.compute.manager [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Terminating instance
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.846 256736 DEBUG nova.compute.manager [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:05:54 compute-0 kernel: tap0aa65aa8-ef (unregistering): left promiscuous mode
Nov 29 08:05:54 compute-0 NetworkManager[48962]: <info>  [1764403554.8976] device (tap0aa65aa8-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:05:54 compute-0 ovn_controller[153383]: 2025-11-29T08:05:54Z|00223|binding|INFO|Releasing lport 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 from this chassis (sb_readonly=0)
Nov 29 08:05:54 compute-0 ovn_controller[153383]: 2025-11-29T08:05:54Z|00224|binding|INFO|Setting lport 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 down in Southbound
Nov 29 08:05:54 compute-0 ovn_controller[153383]: 2025-11-29T08:05:54Z|00225|binding|INFO|Removing iface tap0aa65aa8-ef ovn-installed in OVS
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.924 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 nova_compute[256729]: 2025-11-29 08:05:54.939 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:54 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 29 08:05:54 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 16.546s CPU time.
Nov 29 08:05:54 compute-0 systemd-machined[217781]: Machine qemu-22-instance-00000016 terminated.
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:54.964 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:20:e8 10.100.0.13'], port_security=['fa:16:3e:06:20:e8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd165576a-73f8-49f3-874e-2fe3aba30532', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '377cba5c-a444-4939-9e65-f24eadd0abbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:54.965 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c unbound from our chassis
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:54.967 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:54.968 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddae000-3413-464b-b669-17c681e7d667]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:54.968 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace which is not needed anymore
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.088 256736 INFO nova.virt.libvirt.driver [-] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Instance destroyed successfully.
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.089 256736 DEBUG nova.objects.instance [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'resources' on Instance uuid d165576a-73f8-49f3-874e-2fe3aba30532 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:05:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 350 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.110 256736 DEBUG nova.virt.libvirt.vif [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1091562499',display_name='tempest-TransferEncryptedVolumeTest-server-1091562499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1091562499',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF/ioI52WgdhEBAbeM3RhpZNbUdNn18Xja5uOnO3NOZPUzsKxYrvXBByAxA/Dl5IK3nSUHQ9foFVWH8Ax4rgF1bIpX1xDfETzCAV2xOlgY9UnrjEKcSJoT+wgO+gA9frAA==',key_name='tempest-TransferEncryptedVolumeTest-1552823458',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-skzb7ppx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:34Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=d165576a-73f8-49f3-874e-2fe3aba30532,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.110 256736 DEBUG nova.network.os_vif_util [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "address": "fa:16:3e:06:20:e8", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aa65aa8-ef", "ovs_interfaceid": "0aa65aa8-efb0-46a2-88e7-a95ca258d9e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.112 256736 DEBUG nova.network.os_vif_util [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:20:e8,bridge_name='br-int',has_traffic_filtering=True,id=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0aa65aa8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.112 256736 DEBUG os_vif [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:20:e8,bridge_name='br-int',has_traffic_filtering=True,id=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0aa65aa8-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.115 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.115 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0aa65aa8-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
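The DelPortCommand above is os-vif removing the tap from br-int through ovsdbapp, the same operation as `ovs-vsctl --if-exists del-port br-int tap0aa65aa8-ef`. A minimal sketch of issuing that command through ovsdbapp's Open_vSwitch API (the socket path and timeout are assumptions; a running ovsdb-server is required):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Typical local ovsdb-server endpoint (an assumption for this sketch).
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same semantics as the logged command: if_exists=True makes the
    # delete a no-op when the port is already gone.
    api.del_port("tap0aa65aa8-ef", bridge="br-int",
                 if_exists=True).execute(check_error=True)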
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.120 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.124 256736 INFO os_vif [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:20:e8,bridge_name='br-int',has_traffic_filtering=True,id=0aa65aa8-efb0-46a2-88e7-a95ca258d9e3,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0aa65aa8-ef')
Nov 29 08:05:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:05:55 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[294520]: [NOTICE]   (294524) : haproxy version is 2.8.14-c23fe91
Nov 29 08:05:55 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[294520]: [NOTICE]   (294524) : path to executable is /usr/sbin/haproxy
Nov 29 08:05:55 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[294520]: [WARNING]  (294524) : Exiting Master process...
Nov 29 08:05:55 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[294520]: [ALERT]    (294524) : Current worker (294526) exited with code 143 (Terminated)
Nov 29 08:05:55 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[294520]: [WARNING]  (294524) : All workers exited. Exiting... (0)
Nov 29 08:05:55 compute-0 systemd[1]: libpod-a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4.scope: Deactivated successfully.
Nov 29 08:05:55 compute-0 podman[295239]: 2025-11-29 08:05:55.173219649 +0000 UTC m=+0.070382563 container died a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4-userdata-shm.mount: Deactivated successfully.
Nov 29 08:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a53a2f18dd5642584eab60066f4e6b3d30a8cdd4454d9eb5fb8d5746b188d4c-merged.mount: Deactivated successfully.
Nov 29 08:05:55 compute-0 podman[295239]: 2025-11-29 08:05:55.223765814 +0000 UTC m=+0.120928698 container cleanup a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:05:55 compute-0 systemd[1]: libpod-conmon-a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4.scope: Deactivated successfully.
Nov 29 08:05:55 compute-0 podman[295292]: 2025-11-29 08:05:55.305774868 +0000 UTC m=+0.055081111 container remove a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.311 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b3844a8e-2e53-43ef-9f36-0607d776b440]: (4, ('Sat Nov 29 08:05:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4)\na1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4\nSat Nov 29 08:05:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (a1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4)\na1ac7759720224d5f861613474506a4cb382b4028c7a7dd7bdba6d91714b6af4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.315 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[51cd3f8f-9c78-45c2-9065-55296c34cb01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.316 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.319 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:55 compute-0 kernel: tap45f1bbc0-c0: left promiscuous mode
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.336 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.338 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c813f6f4-a26a-49ab-b42f-d32950041a2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.374 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a0a927-e154-45c2-be59-cf3b127b8340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.376 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e83c08-ef6d-4c62-adb9-f664d0da5b3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.397 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c26d13d8-505f-4ba3-aa5d-cce5832f6728]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584677, 'reachable_time': 38541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295308, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:05:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d45f1bbc0\x2dc06e\x2d4a64\x2d9d82\x2d3a4cbaa9482c.mount: Deactivated successfully.
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.403 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:05:55 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:55.403 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2fdc7a-b958-4006-9af7-6b911a520100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
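The namespace teardown above runs across a privilege boundary: the unprivileged metadata agent sends a call to its oslo.privsep daemon, the privileged side deletes ovnmeta-<network-id>, and the (4, None) reply travels back. A schematic of that split using oslo.privsep with pyroute2 (the context name and capability set are illustrative; neutron defines its own in neutron.privileged):

    from oslo_privsep import capabilities, priv_context
    from pyroute2 import netns

    # Illustrative privsep context; real deployments configure this via
    # the service's privsep-helper and config section.
    ctx = priv_context.PrivContext(
        __name__,
        cfg_section="privsep",
        pypath=__name__ + ".ctx",
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN])

    @ctx.entrypoint
    def remove_netns(name):
        # Executes inside the privileged daemon process; the caller only
        # sees the serialized reply, as in the log lines above.
        netns.remove(name)

    # remove_netns("ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c")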
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.606 256736 INFO nova.virt.libvirt.driver [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Deleting instance files /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532_del
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.607 256736 INFO nova.virt.libvirt.driver [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Deletion of /var/lib/nova/instances/d165576a-73f8-49f3-874e-2fe3aba30532_del complete
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.755 256736 INFO nova.compute.manager [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Took 0.91 seconds to destroy the instance on the hypervisor.
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.756 256736 DEBUG oslo.service.loopingcall [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.756 256736 DEBUG nova.compute.manager [-] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:05:55 compute-0 nova_compute[256729]: 2025-11-29 08:05:55.757 256736 DEBUG nova.network.neutron [-] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:05:56 compute-0 ceph-mon[75050]: pgmap v2008: 305 pgs: 305 active+clean; 350 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Nov 29 08:05:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 350 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.8 MiB/s wr, 160 op/s
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.457 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.592 256736 DEBUG nova.network.neutron [-] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.630 256736 INFO nova.compute.manager [-] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Took 1.87 seconds to deallocate network for instance.
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.680 256736 DEBUG nova.compute.manager [req-4009bd23-ba92-430d-9d93-bcf79b249df2 req-8135a1a3-daae-4d22-9878-580b23f7f18e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Received event network-vif-deleted-0aa65aa8-efb0-46a2-88e7-a95ca258d9e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:57 compute-0 sshd-session[295209]: Connection closed by authenticating user root 143.14.121.41 port 43094 [preauth]
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.731 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.896 256736 INFO nova.compute.manager [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Took 0.26 seconds to detach 1 volumes for instance.
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.996 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:57 compute-0 nova_compute[256729]: 2025-11-29 08:05:57.997 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:58 compute-0 nova_compute[256729]: 2025-11-29 08:05:58.067 256736 DEBUG oslo_concurrency.processutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:05:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:05:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/36761390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:58 compute-0 nova_compute[256729]: 2025-11-29 08:05:58.561 256736 DEBUG oslo_concurrency.processutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
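For disk accounting the resource tracker shells out to the ceph CLI rather than linking librados, exactly as logged above (a 0.493s round trip). A minimal reproduction with oslo.concurrency's processutils, parsing the standard `ceph df --format=json` layout:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)

    # "stats" holds cluster-wide totals in bytes in the standard layout.
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.0f} GiB free of {total / 2**30:.0f} GiB")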
Nov 29 08:05:58 compute-0 nova_compute[256729]: 2025-11-29 08:05:58.569 256736 DEBUG nova.compute.provider_tree [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:05:58 compute-0 nova_compute[256729]: 2025-11-29 08:05:58.589 256736 DEBUG nova.scheduler.client.report [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
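The inventory dict above is what Placement actually schedules against; the usable amount per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values (a quick consistency check, not new data):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 -- 8 cores oversubscribed 4x
    # MEMORY_MB 7168.0 -- 512 MB held back for the host
    # DISK_GB 52.2 -- 58 GB usable, under-subscribed at 0.9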
Nov 29 08:05:58 compute-0 ceph-mon[75050]: pgmap v2009: 305 pgs: 305 active+clean; 350 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.8 MiB/s wr, 160 op/s
Nov 29 08:05:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/36761390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:05:58 compute-0 nova_compute[256729]: 2025-11-29 08:05:58.642 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:58 compute-0 nova_compute[256729]: 2025-11-29 08:05:58.681 256736 INFO nova.scheduler.client.report [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Deleted allocations for instance d165576a-73f8-49f3-874e-2fe3aba30532
Nov 29 08:05:58 compute-0 nova_compute[256729]: 2025-11-29 08:05:58.758 256736 DEBUG oslo_concurrency.lockutils [None req-c13d7b63-58c3-437d-b16b-a00490150093 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "d165576a-73f8-49f3-874e-2fe3aba30532" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 136 op/s
Nov 29 08:05:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:59.784 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:05:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:59.785 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:05:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:05:59.786 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:05:59 compute-0 nova_compute[256729]: 2025-11-29 08:05:59.826 256736 DEBUG nova.compute.manager [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-changed-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:05:59 compute-0 nova_compute[256729]: 2025-11-29 08:05:59.827 256736 DEBUG nova.compute.manager [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Refreshing instance network info cache due to event network-changed-55b6aa9b-29fc-4f6b-9ae5-885c514941fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:05:59 compute-0 nova_compute[256729]: 2025-11-29 08:05:59.828 256736 DEBUG oslo_concurrency.lockutils [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:05:59 compute-0 nova_compute[256729]: 2025-11-29 08:05:59.828 256736 DEBUG oslo_concurrency.lockutils [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:05:59 compute-0 nova_compute[256729]: 2025-11-29 08:05:59.829 256736 DEBUG nova.network.neutron [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Refreshing network info cache for port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:00 compute-0 sshd-session[295309]: Connection closed by authenticating user root 143.14.121.41 port 43098 [preauth]
Nov 29 08:06:00 compute-0 nova_compute[256729]: 2025-11-29 08:06:00.119 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:00 compute-0 nova_compute[256729]: 2025-11-29 08:06:00.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:00 compute-0 ceph-mon[75050]: pgmap v2010: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 136 op/s
Nov 29 08:06:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Nov 29 08:06:01 compute-0 nova_compute[256729]: 2025-11-29 08:06:01.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:01 compute-0 nova_compute[256729]: 2025-11-29 08:06:01.530 256736 DEBUG nova.network.neutron [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updated VIF entry in instance network info cache for port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:01 compute-0 nova_compute[256729]: 2025-11-29 08:06:01.531 256736 DEBUG nova.network.neutron [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating instance_info_cache with network_info: [{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
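The instance_info_cache payload above is ordinary nested JSON; extracting the fixed/floating address pairs is a three-level walk. A short sketch against the structure exactly as logged (trimmed to the fields used):

    network_info = [{
        "id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.10", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.249",
                                       "type": "floating"}]}]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # 55b6aa9b-... 10.100.0.10 -> ['192.168.122.249']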
Nov 29 08:06:01 compute-0 nova_compute[256729]: 2025-11-29 08:06:01.767 256736 DEBUG oslo_concurrency.lockutils [req-19015ec8-9e28-4297-88d1-cb1202d899dc req-fbd6c17e-9fc5-4b83-bcaa-7db1cfb46444 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:02 compute-0 nova_compute[256729]: 2025-11-29 08:06:02.734 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:02 compute-0 ceph-mon[75050]: pgmap v2011: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Nov 29 08:06:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Nov 29 08:06:03 compute-0 nova_compute[256729]: 2025-11-29 08:06:03.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:03 compute-0 ceph-mon[75050]: pgmap v2012: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Nov 29 08:06:03 compute-0 ovn_controller[153383]: 2025-11-29T08:06:03Z|00048|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.10
Nov 29 08:06:03 compute-0 ovn_controller[153383]: 2025-11-29T08:06:03Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c8:3a:8c 10.100.0.10
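The WARN/INFO pair above is OVN's built-in DHCP responder rejecting a stale lease: the guest behind fa:16:3e:c8:3a:8c asked to renew 10.100.0.4, presumably remembered from an earlier boot, but this port's binding is pinned to 10.100.0.10, so OVN answers DHCPNAK and the client must restart from DISCOVER. The decision reduces to a single comparison; schematically (an illustration, not OVN's C implementation):

    # Per-port static binding, as OVN derives it from the Neutron port.
    OFFERS = {"fa:16:3e:c8:3a:8c": "10.100.0.10"}

    def handle_dhcprequest(mac: str, requested_ip: str) -> str:
        offered = OFFERS.get(mac)
        if offered is None or requested_ip != offered:
            return "DHCPNAK"   # forces the client back to DISCOVER
        return "DHCPACK"

    print(handle_dhcprequest("fa:16:3e:c8:3a:8c", "10.100.0.4"))  # DHCPNAK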
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.063 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:04 compute-0 sshd-session[295333]: Connection closed by authenticating user root 143.14.121.41 port 43114 [preauth]
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.301 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.301 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.301 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.301 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.302 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:04 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322841108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.768 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:04 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1322841108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.898 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:06:04 compute-0 nova_compute[256729]: 2025-11-29 08:06:04.899 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 26 KiB/s wr, 94 op/s
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.114 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.115 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4165MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.116 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.116 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
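The paired "Acquiring lock" / "Lock ... acquired" lines are oslo.concurrency instrumentation around the resource tracker's critical section; the waited/held times it logs (waited 0.000s here, held 1.320s at the release further down) are where lock contention would show up. A minimal sketch of the same pattern, with a hypothetical function body:

    # Minimal sketch of the oslo.concurrency pattern behind these log
    # lines; the lock name matches the log, the body is hypothetical.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Only one periodic task may rebuild the resource view at a
        # time; acquire/release (and wait/hold times) are logged.
        pass

    update_available_resource()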
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.122 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.481 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 10a1a099-bf1a-4195-9186-8f440437a1ce actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.482 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.482 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.522 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:06:05
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'backups']
Nov 29 08:06:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
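These five balancer lines are the mgr balancer module computing an upmap plan: with all 305 PGs active+clean and evenly placed, it was allowed up to 10 PG remaps this round (subject to the 5% max-misplaced throttle) and prepared none. Its state can be checked from the CLI; a minimal sketch:

    # Sketch: query the mgr balancer module these log lines come from.
    import subprocess

    print(subprocess.check_output(['ceph', 'balancer', 'status'], text=True))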
Nov 29 08:06:05 compute-0 ceph-mon[75050]: pgmap v2013: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 26 KiB/s wr, 94 op/s
Nov 29 08:06:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139892156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.968 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
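The Running/returned pair above is nova shelling out to `ceph df` to size its RBD-backed disk inventory. A minimal sketch of the same call, assuming the standard `ceph df --format=json` layout (a top-level "stats" object with byte counters):

    # Sketch: run the command from the log and read cluster capacity.
    # Assumes the usual `ceph df --format=json` layout ("stats" with
    # total_bytes / total_avail_bytes counters).
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print(f'{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB')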
Nov 29 08:06:05 compute-0 nova_compute[256729]: 2025-11-29 08:06:05.977 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:06 compute-0 nova_compute[256729]: 2025-11-29 08:06:06.175 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
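The inventory dict above fixes what placement will actually schedule: per resource class, effective capacity is (total - reserved) * allocation_ratio. Worked through with the logged values:

    # Effective schedulable capacity implied by the logged inventory:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2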
Nov 29 08:06:06 compute-0 nova_compute[256729]: 2025-11-29 08:06:06.435 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:06:06 compute-0 nova_compute[256729]: 2025-11-29 08:06:06.436 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1139892156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:07 compute-0 ovn_controller[153383]: 2025-11-29T08:06:07Z|00050|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.10
Nov 29 08:06:07 compute-0 ovn_controller[153383]: 2025-11-29T08:06:07Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c8:3a:8c 10.100.0.10
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:06:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 27 KiB/s wr, 90 op/s
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.438 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.439 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.439 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.704 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.704 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.704 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.704 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 10a1a099-bf1a-4195-9186-8f440437a1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:07 compute-0 podman[295384]: 2025-11-29 08:06:07.729281842 +0000 UTC m=+0.083551756 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:06:07 compute-0 podman[295383]: 2025-11-29 08:06:07.733986172 +0000 UTC m=+0.092195435 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 08:06:07 compute-0 nova_compute[256729]: 2025-11-29 08:06:07.751 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:07 compute-0 podman[295382]: 2025-11-29 08:06:07.784127346 +0000 UTC m=+0.144650963 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
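The three podman health_status records are periodic container healthchecks; each config_data blob above wires a healthcheck mount plus a '/openstack/healthcheck' test command into the container. The same check can be run on demand; a sketch using the container names from the log:

    # Sketch: trigger the healthcheck podman runs on its timer.
    # `podman healthcheck run NAME` exits 0 when the configured test
    # command ("/openstack/healthcheck" per config_data) passes.
    import subprocess

    for name in ('ovn_metadata_agent', 'multipathd', 'ovn_controller'):
        rc = subprocess.call(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')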
Nov 29 08:06:07 compute-0 sshd-session[295355]: Connection closed by authenticating user root 143.14.121.41 port 38638 [preauth]
Nov 29 08:06:07 compute-0 ceph-mon[75050]: pgmap v2014: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 27 KiB/s wr, 90 op/s
Nov 29 08:06:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855778550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855778550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:08 compute-0 nova_compute[256729]: 2025-11-29 08:06:08.769 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:08 compute-0 nova_compute[256729]: 2025-11-29 08:06:08.770 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3855778550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3855778550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:08 compute-0 ovn_controller[153383]: 2025-11-29T08:06:08Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:3a:8c 10.100.0.10
Nov 29 08:06:08 compute-0 ovn_controller[153383]: 2025-11-29T08:06:08Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:3a:8c 10.100.0.10
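Together with the two pinctrl lines at 08:06:07 this is a standard RFC 2131 recovery: the guest REQUESTed a stale address (10.100.0.4), OVN's DHCP responder NAKed it because its offer was 10.100.0.10, and the client fell back to DISCOVER and received OFFER/ACK for 10.100.0.10 a second later. A hypothetical sketch of just the server-side decision, not OVN's actual code:

    # Hypothetical sketch of the RFC 2131 exchange seen in the log;
    # illustrates the protocol flow, not OVN's implementation.
    def server_reply(msg_type, requested_ip, offered_ip='10.100.0.10'):
        if msg_type == 'DHCPDISCOVER':
            return 'DHCPOFFER'
        if msg_type == 'DHCPREQUEST':
            # NAK a request for an address we did not offer, e.g. the
            # stale 10.100.0.4 lease in the 08:06:07 log lines.
            return 'DHCPACK' if requested_ip == offered_ip else 'DHCPNAK'
        return None

    assert server_reply('DHCPREQUEST', '10.100.0.4') == 'DHCPNAK'
    assert server_reply('DHCPDISCOVER', None) == 'DHCPOFFER'
    assert server_reply('DHCPREQUEST', '10.100.0.10') == 'DHCPACK'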
Nov 29 08:06:08 compute-0 nova_compute[256729]: 2025-11-29 08:06:08.947 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:06:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 14 KiB/s wr, 50 op/s
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.265 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.266 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.274 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.275 256736 INFO nova.compute.claims [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.549 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.655 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating instance_info_cache with network_info: [{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
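The network_info cache entry above is a JSON list of VIFs with the addresses nested a few levels down. A minimal extraction sketch over a trimmed copy of the logged structure:

    # Sketch: pull fixed and floating IPs out of the network_info
    # structure logged above (keys as they appear in the cache entry;
    # the sample below is trimmed to the relevant nesting).
    network_info = [{
        'network': {'subnets': [{
            'ips': [{'address': '10.100.0.10',
                     'floating_ips': [{'address': '192.168.122.249'}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed:', ip['address'])
                for fip in ip.get('floating_ips', []):
                    print('floating:', fip['address'])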
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.678 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.679 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.680 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.681 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.681 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:09 compute-0 nova_compute[256729]: 2025-11-29 08:06:09.682 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:06:09 compute-0 ceph-mon[75050]: pgmap v2015: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 14 KiB/s wr, 50 op/s
Nov 29 08:06:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445610136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.046 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.056 256736 DEBUG nova.compute.provider_tree [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.072 256736 DEBUG nova.scheduler.client.report [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.087 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403555.0864146, d165576a-73f8-49f3-874e-2fe3aba30532 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.088 256736 INFO nova.compute.manager [-] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] VM Stopped (Lifecycle Event)
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.096 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.097 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.126 256736 DEBUG nova.compute.manager [None req-6fa436f8-a44c-4b53-8d8f-e49f15c38ec3 - - - - - -] [instance: d165576a-73f8-49f3-874e-2fe3aba30532] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.127 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.160 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.160 256736 DEBUG nova.network.neutron [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.183 256736 INFO nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.203 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.250 256736 INFO nova.virt.block_device [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Booting with volume ed03ba2b-50f6-4b72-8e40-ced840493c2f at /dev/vda
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.377 256736 DEBUG nova.policy [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb2de7fb67042f89a025f1a3e872530', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
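The "Policy check ... failed" line is oslo.policy evaluating nova's network:attach_external_network rule against the request's reader/member credentials; the failure is non-fatal and simply means this tenant may not attach to external networks. A standalone sketch of an equivalent check, with the rule string as an assumption rather than nova's shipped default:

    # Sketch of an oslo.policy check equivalent to the logged one; the
    # 'role:admin' rule string is an assumption for illustration.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': '00f4c1f7964a4e5fbe3db5be46b9676e'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the failed check in the log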
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.400 256736 DEBUG os_brick.utils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.401 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.419 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.419 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9e0c04-c2ff-4eaf-a45b-437c4f8f07cc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.420 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.429 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.430 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5544f3-d68e-4727-a383-6fff97b1505f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.431 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.446 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.446 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[d467780c-e43a-4c74-9fd9-4c2222db40ad]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.448 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3829cf-feb7-4f93-b8b2-5bbf6be8bc89]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.448 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.478 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.482 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.484 256736 DEBUG os_brick.initiator.connectors.lightos [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.485 256736 DEBUG os_brick.initiator.connectors.lightos [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.485 256736 DEBUG os_brick.initiator.connectors.lightos [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.485 256736 DEBUG os_brick.utils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
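The trace from 08:06:10.400 to here is os_brick assembling the connector properties (multipath state, iSCSI initiator, system UUID, NVMe host NQN) that nova hands to cinder when updating the volume attachment. The traced call can be reproduced directly; the arguments mirror the logged input:

    # Sketch of the os_brick call traced above; arguments mirror the
    # logged trace_logging_wrapper input (requires the rootwrap setup
    # from the log to actually probe the host).
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    print(props['initiator'], props['nqn'])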
Nov 29 08:06:10 compute-0 nova_compute[256729]: 2025-11-29 08:06:10.486 256736 DEBUG nova.virt.block_device [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Updating existing volume attachment record: 00bbb70b-9a8d-44bb-80a7-ed6ec0c4c8f0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:06:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2445610136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 13 KiB/s wr, 46 op/s
Nov 29 08:06:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3259138742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:11.447 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:11 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:11.449 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.449 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.521 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.524 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.525 256736 INFO nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Creating image(s)
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.526 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.526 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Ensure instance console log exists: /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.527 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.528 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:11 compute-0 nova_compute[256729]: 2025-11-29 08:06:11.528 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:11 compute-0 sshd-session[295444]: Invalid user otsmanager from 143.14.121.41 port 38650
Nov 29 08:06:11 compute-0 sshd-session[295444]: Connection closed by invalid user otsmanager 143.14.121.41 port 38650 [preauth]
Nov 29 08:06:11 compute-0 ceph-mon[75050]: pgmap v2016: 305 pgs: 305 active+clean; 350 MiB data, 628 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 13 KiB/s wr, 46 op/s
Nov 29 08:06:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3259138742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:12 compute-0 nova_compute[256729]: 2025-11-29 08:06:12.067 256736 DEBUG nova.network.neutron [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Successfully created port: 906ad477-03aa-4cfd-9485-d0308f5ce2f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:06:12 compute-0 nova_compute[256729]: 2025-11-29 08:06:12.753 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 17 KiB/s wr, 46 op/s
Nov 29 08:06:13 compute-0 sshd-session[295475]: Invalid user oracle from 143.14.121.41 port 51948
Nov 29 08:06:13 compute-0 sshd-session[295475]: Connection closed by invalid user oracle 143.14.121.41 port 51948 [preauth]
Nov 29 08:06:14 compute-0 ceph-mon[75050]: pgmap v2017: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 17 KiB/s wr, 46 op/s
Nov 29 08:06:14 compute-0 nova_compute[256729]: 2025-11-29 08:06:14.951 256736 DEBUG nova.network.neutron [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Successfully updated port: 906ad477-03aa-4cfd-9485-d0308f5ce2f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:06:14 compute-0 nova_compute[256729]: 2025-11-29 08:06:14.972 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:14 compute-0 nova_compute[256729]: 2025-11-29 08:06:14.972 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquired lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:14 compute-0 nova_compute[256729]: 2025-11-29 08:06:14.973 256736 DEBUG nova.network.neutron [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 26 KiB/s wr, 47 op/s
Nov 29 08:06:15 compute-0 nova_compute[256729]: 2025-11-29 08:06:15.129 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:15 compute-0 nova_compute[256729]: 2025-11-29 08:06:15.134 256736 DEBUG nova.network.neutron [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:15 compute-0 nova_compute[256729]: 2025-11-29 08:06:15.366 256736 DEBUG nova.compute.manager [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-changed-906ad477-03aa-4cfd-9485-d0308f5ce2f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:15 compute-0 nova_compute[256729]: 2025-11-29 08:06:15.367 256736 DEBUG nova.compute.manager [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Refreshing instance network info cache due to event network-changed-906ad477-03aa-4cfd-9485-d0308f5ce2f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:15 compute-0 nova_compute[256729]: 2025-11-29 08:06:15.368 256736 DEBUG oslo_concurrency.lockutils [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0036482624983720778 of space, bias 1.0, pg target 1.0944787495116233 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.9013621638340822e-05 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19918670028325844 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:06:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
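The pg_autoscaler block is reproducible arithmetic: each pool's raw pg target is its usage fraction times its bias times the cluster PG budget. Against the logged 64411926528-byte subtree, a budget of 300 (assuming three OSDs at the default mon_target_pg_per_osd=100) reproduces the 'volumes' and '.mgr' figures exactly; the module applies further small adjustments to some pools before quantizing, hence "quantized to 32":

    # Reproducing the pg_autoscaler arithmetic from the logged inputs.
    # Budget of 300 assumes 3 OSDs at the default
    # mon_target_pg_per_osd = 100.
    def pg_target(usage_fraction, bias, budget=300):
        return usage_fraction * bias * budget

    print(pg_target(0.0036482624983720778, 1.0))  # volumes -> 1.0944787...
    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr    -> 0.0021557...
    # Raw targets are then quantized (power of two, pool minimums),
    # which is why every data pool lands back on its current pg_num.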
Nov 29 08:06:16 compute-0 ceph-mon[75050]: pgmap v2018: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 26 KiB/s wr, 47 op/s
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.593 256736 DEBUG nova.network.neutron [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Updating instance_info_cache with network_info: [{"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.621 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Releasing lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.622 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Instance network_info: |[{"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.622 256736 DEBUG oslo_concurrency.lockutils [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.623 256736 DEBUG nova.network.neutron [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Refreshing network info cache for port 906ad477-03aa-4cfd-9485-d0308f5ce2f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.630 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Start _get_guest_xml network_info=[{"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c2e8da51-3b05-4a1c-a872-9b977bf7cdcd', 'attached_at': '', 'detached_at': '', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'serial': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '00bbb70b-9a8d-44bb-80a7-ed6ec0c4c8f0', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.636 256736 WARNING nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.642 256736 DEBUG nova.virt.libvirt.host [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.643 256736 DEBUG nova.virt.libvirt.host [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.647 256736 DEBUG nova.virt.libvirt.host [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.648 256736 DEBUG nova.virt.libvirt.host [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.649 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.649 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.650 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.651 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.651 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.652 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.652 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.653 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.653 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.654 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.654 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.655 256736 DEBUG nova.virt.hardware [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.690 256736 DEBUG nova.storage.rbd_utils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:16 compute-0 nova_compute[256729]: 2025-11-29 08:06:16.694 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 15 KiB/s wr, 33 op/s
Nov 29 08:06:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374700188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.144 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/374700188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.276 256736 DEBUG os_brick.encryptors [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Using volume encryption metadata '{'encryption_key_id': '2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'c2e8da51-3b05-4a1c-a872-9b977bf7cdcd', 'attached_at': '', 'detached_at': '', 'volume_id': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f', 'serial': 'ed03ba2b-50f6-4b72-8e40-ced840493c2f'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.281 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.301 256736 DEBUG barbicanclient.v1.secrets [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.302 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.325 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.326 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.346 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.347 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.369 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.370 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.394 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.394 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.423 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.424 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.446 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.446 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.469 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.470 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.494 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.495 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.516 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.516 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.538 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.538 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.564 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.564 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.593 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.594 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.617 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.618 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.641 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.642 256736 INFO barbicanclient.base [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Calculated Secrets uuid ref: secrets/2b81d34d-d7d8-4d29-b6ab-bec2c1da4a67
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.666 256736 DEBUG barbicanclient.client [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.667 256736 DEBUG nova.virt.libvirt.host [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <volume>ed03ba2b-50f6-4b72-8e40-ced840493c2f</volume>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:06:17 compute-0 nova_compute[256729]: </secret>
Nov 29 08:06:17 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.702 256736 DEBUG nova.virt.libvirt.vif [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1719257085',display_name='tempest-TransferEncryptedVolumeTest-server-1719257085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1719257085',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF/ioI52WgdhEBAbeM3RhpZNbUdNn18Xja5uOnO3NOZPUzsKxYrvXBByAxA/Dl5IK3nSUHQ9foFVWH8Ax4rgF1bIpX1xDfETzCAV2xOlgY9UnrjEKcSJoT+wgO+gA9frAA==',key_name='tempest-TransferEncryptedVolumeTest-1552823458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-wdsp1zqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:10Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=c2e8da51-3b05-4a1c-a872-9b977bf7cdcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm 
Nov 29 08:06:17 compute-0 nova_compute[256729]:  get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.703 256736 DEBUG nova.network.os_vif_util [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.704 256736 DEBUG nova.network.os_vif_util [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:70:fa,bridge_name='br-int',has_traffic_filtering=True,id=906ad477-03aa-4cfd-9485-d0308f5ce2f1,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906ad477-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.707 256736 DEBUG nova.objects.instance [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'pci_devices' on Instance uuid c2e8da51-3b05-4a1c-a872-9b977bf7cdcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.721 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <uuid>c2e8da51-3b05-4a1c-a872-9b977bf7cdcd</uuid>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <name>instance-00000018</name>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1719257085</nova:name>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:06:16</nova:creationTime>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:user uuid="2cb2de7fb67042f89a025f1a3e872530">tempest-TransferEncryptedVolumeTest-2049180676-project-member</nova:user>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:project uuid="00f4c1f7964a4e5fbe3db5be46b9676e">tempest-TransferEncryptedVolumeTest-2049180676</nova:project>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <nova:port uuid="906ad477-03aa-4cfd-9485-d0308f5ce2f1">
Nov 29 08:06:17 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <system>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <entry name="serial">c2e8da51-3b05-4a1c-a872-9b977bf7cdcd</entry>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <entry name="uuid">c2e8da51-3b05-4a1c-a872-9b977bf7cdcd</entry>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </system>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <os>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </os>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <features>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </features>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_disk.config">
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </source>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-ed03ba2b-50f6-4b72-8e40-ced840493c2f">
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </source>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <serial>ed03ba2b-50f6-4b72-8e40-ced840493c2f</serial>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <encryption format="luks">
Nov 29 08:06:17 compute-0 nova_compute[256729]:         <secret type="passphrase" uuid="58ad5883-187d-4528-85b0-96f2464dcf0e"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       </encryption>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:bd:70:fa"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <target dev="tap906ad477-03"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/console.log" append="off"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <video>
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </video>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:06:17 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:06:17 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:06:17 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:06:17 compute-0 nova_compute[256729]: </domain>
Nov 29 08:06:17 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.723 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Preparing to wait for external event network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.724 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.725 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.725 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.727 256736 DEBUG nova.virt.libvirt.vif [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1719257085',display_name='tempest-TransferEncryptedVolumeTest-server-1719257085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1719257085',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF/ioI52WgdhEBAbeM3RhpZNbUdNn18Xja5uOnO3NOZPUzsKxYrvXBByAxA/Dl5IK3nSUHQ9foFVWH8Ax4rgF1bIpX1xDfETzCAV2xOlgY9UnrjEKcSJoT+wgO+gA9frAA==',key_name='tempest-TransferEncryptedVolumeTest-1552823458',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-wdsp1zqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:10Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=c2e8da51-3b05-4a1c-a872-9b977bf7cdcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
Nov 29 08:06:17 compute-0 nova_compute[256729]:  plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.727 256736 DEBUG nova.network.os_vif_util [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.728 256736 DEBUG nova.network.os_vif_util [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:70:fa,bridge_name='br-int',has_traffic_filtering=True,id=906ad477-03aa-4cfd-9485-d0308f5ce2f1,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906ad477-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.730 256736 DEBUG os_vif [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:70:fa,bridge_name='br-int',has_traffic_filtering=True,id=906ad477-03aa-4cfd-9485-d0308f5ce2f1,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906ad477-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.731 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.731 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.732 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.737 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.738 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap906ad477-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.738 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap906ad477-03, col_values=(('external_ids', {'iface-id': '906ad477-03aa-4cfd-9485-d0308f5ce2f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:70:fa', 'vm-uuid': 'c2e8da51-3b05-4a1c-a872-9b977bf7cdcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.740 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:17 compute-0 NetworkManager[48962]: <info>  [1764403577.7421] manager: (tap906ad477-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.744 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.750 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.751 256736 INFO os_vif [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:70:fa,bridge_name='br-int',has_traffic_filtering=True,id=906ad477-03aa-4cfd-9485-d0308f5ce2f1,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906ad477-03')
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.754 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.818 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.819 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.819 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] No VIF found with MAC fa:16:3e:bd:70:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.819 256736 INFO nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Using config drive
Nov 29 08:06:17 compute-0 nova_compute[256729]: 2025-11-29 08:06:17.838 256736 DEBUG nova.storage.rbd_utils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:18 compute-0 ceph-mon[75050]: pgmap v2019: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 15 KiB/s wr, 33 op/s
Nov 29 08:06:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 21 KiB/s wr, 9 op/s
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.202 256736 INFO nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Creating config drive at /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/disk.config
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.212 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjliy2k5q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.352 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjliy2k5q" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.385 256736 DEBUG nova.storage.rbd_utils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] rbd image c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.388 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/disk.config c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.556 256736 DEBUG oslo_concurrency.processutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/disk.config c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.557 256736 INFO nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Deleting local config drive /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd/disk.config because it was imported into RBD.
Nov 29 08:06:19 compute-0 kernel: tap906ad477-03: entered promiscuous mode
Nov 29 08:06:19 compute-0 NetworkManager[48962]: <info>  [1764403579.6322] manager: (tap906ad477-03): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 08:06:19 compute-0 ovn_controller[153383]: 2025-11-29T08:06:19Z|00226|binding|INFO|Claiming lport 906ad477-03aa-4cfd-9485-d0308f5ce2f1 for this chassis.
Nov 29 08:06:19 compute-0 ovn_controller[153383]: 2025-11-29T08:06:19Z|00227|binding|INFO|906ad477-03aa-4cfd-9485-d0308f5ce2f1: Claiming fa:16:3e:bd:70:fa 10.100.0.5
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.635 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.645 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:70:fa 10.100.0.5'], port_security=['fa:16:3e:bd:70:fa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c2e8da51-3b05-4a1c-a872-9b977bf7cdcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '377cba5c-a444-4939-9e65-f24eadd0abbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=906ad477-03aa-4cfd-9485-d0308f5ce2f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.646 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 906ad477-03aa-4cfd-9485-d0308f5ce2f1 in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c bound to our chassis
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.648 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:06:19 compute-0 ovn_controller[153383]: 2025-11-29T08:06:19Z|00228|binding|INFO|Setting lport 906ad477-03aa-4cfd-9485-d0308f5ce2f1 ovn-installed in OVS
Nov 29 08:06:19 compute-0 ovn_controller[153383]: 2025-11-29T08:06:19Z|00229|binding|INFO|Setting lport 906ad477-03aa-4cfd-9485-d0308f5ce2f1 up in Southbound
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.660 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.663 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.669 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[3150746f-e50c-4cf2-8357-e37dbb0400b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.669 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45f1bbc0-c1 in ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.672 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45f1bbc0-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.672 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8683df21-992b-4d41-8b82-20449126dc68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.673 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e668e208-a098-44e5-96f5-522d873ef27f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 systemd-udevd[295593]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:19 compute-0 systemd-machined[217781]: New machine qemu-24-instance-00000018.
Nov 29 08:06:19 compute-0 NetworkManager[48962]: <info>  [1764403579.6909] device (tap906ad477-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:19 compute-0 NetworkManager[48962]: <info>  [1764403579.6919] device (tap906ad477-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:19 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.693 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[efd34045-9784-446b-9c76-44f2031304c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.719 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[28d1394f-4d15-453f-9074-83cd68ee0fef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 sshd-session[295477]: Invalid user oracle from 143.14.121.41 port 51960
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.758 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[ce85a763-837a-4731-a9dc-9c1e7217bd5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 NetworkManager[48962]: <info>  [1764403579.7652] manager: (tap45f1bbc0-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.765 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b37ad630-ecb2-49b3-85c9-42b784dff09e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 systemd-udevd[295596]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.804 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[4e4fa51f-111f-4bc4-bf48-d4b354f8a4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.808 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6958d5-6194-4256-80f7-2471a537a762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 NetworkManager[48962]: <info>  [1764403579.8364] device (tap45f1bbc0-c0): carrier: link connected
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.844 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[efa55e8a-b9be-48be-aec6-72a3babb1713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.864 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[405f199b-75bb-4416-9705-7674e6262549]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589490, 'reachable_time': 20504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295625, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.881 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[63602905-970d-46fd-b896-df7da862045f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:b9ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 589490, 'tstamp': 589490}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295626, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.900 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[159d89b2-a3ac-4010-9ffe-cfc96161d1d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45f1bbc0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:b9:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589490, 'reachable_time': 20504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295627, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.924 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[9a50334c-c265-4b18-bfb6-90a668757078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.979 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cb862d00-cfe2-422a-a24e-f0aaf21eff0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.981 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.981 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.982 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45f1bbc0-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:19 compute-0 kernel: tap45f1bbc0-c0: entered promiscuous mode
Nov 29 08:06:19 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.984 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:19 compute-0 NetworkManager[48962]: <info>  [1764403579.9852] manager: (tap45f1bbc0-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.993 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45f1bbc0-c0, col_values=(('external_ids', {'iface-id': '1506b576-854d-4118-b808-0e5e32d85d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.995 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:19 compute-0 ovn_controller[153383]: 2025-11-29T08:06:19Z|00230|binding|INFO|Releasing lport 1506b576-854d-4118-b808-0e5e32d85d28 from this chassis (sb_readonly=0)
Nov 29 08:06:19 compute-0 nova_compute[256729]: 2025-11-29 08:06:19.996 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.996 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.997 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[00203758-a623-40c8-a0d3-04bf54456d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.998 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.pid.haproxy
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c
Nov 29 08:06:19 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:06:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:19.999 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'env', 'PROCESS_TAG=haproxy-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:06:20 compute-0 nova_compute[256729]: 2025-11-29 08:06:20.015 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:20 compute-0 sshd-session[295477]: Connection closed by invalid user oracle 143.14.121.41 port 51960 [preauth]
Nov 29 08:06:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:20 compute-0 ceph-mon[75050]: pgmap v2020: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 21 KiB/s wr, 9 op/s
Nov 29 08:06:20 compute-0 podman[295678]: 2025-11-29 08:06:20.432115077 +0000 UTC m=+0.079522185 container create 1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 08:06:20 compute-0 podman[295678]: 2025-11-29 08:06:20.385383097 +0000 UTC m=+0.032790205 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:06:20 compute-0 systemd[1]: Started libpod-conmon-1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f.scope.
Nov 29 08:06:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2285de84fca3028f035e1a007468ada3ed7b394938137b08fbfe7b7dbd1f363/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:20 compute-0 podman[295678]: 2025-11-29 08:06:20.663079471 +0000 UTC m=+0.310486619 container init 1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:06:20 compute-0 podman[295678]: 2025-11-29 08:06:20.67066733 +0000 UTC m=+0.318074438 container start 1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 08:06:20 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[295711]: [NOTICE]   (295715) : New worker (295717) forked
Nov 29 08:06:20 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[295711]: [NOTICE]   (295715) : Loading success.
Nov 29 08:06:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Nov 29 08:06:21 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:21.452 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.534 256736 DEBUG nova.network.neutron [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Updated VIF entry in instance network info cache for port 906ad477-03aa-4cfd-9485-d0308f5ce2f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.535 256736 DEBUG nova.network.neutron [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Updating instance_info_cache with network_info: [{"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.720 256736 DEBUG oslo_concurrency.lockutils [req-d1dcb52c-6167-438b-a4dc-4ad2873cb92a req-e641aade-a22b-486b-b059-820480ca7ae4 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.822 256736 DEBUG nova.compute.manager [req-80bdd0b5-0e39-4e78-8dc1-25cc00566b64 req-5fb21a66-7c58-4cee-934d-4a74ceab45ba ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.822 256736 DEBUG oslo_concurrency.lockutils [req-80bdd0b5-0e39-4e78-8dc1-25cc00566b64 req-5fb21a66-7c58-4cee-934d-4a74ceab45ba ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.823 256736 DEBUG oslo_concurrency.lockutils [req-80bdd0b5-0e39-4e78-8dc1-25cc00566b64 req-5fb21a66-7c58-4cee-934d-4a74ceab45ba ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.823 256736 DEBUG oslo_concurrency.lockutils [req-80bdd0b5-0e39-4e78-8dc1-25cc00566b64 req-5fb21a66-7c58-4cee-934d-4a74ceab45ba ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:21 compute-0 nova_compute[256729]: 2025-11-29 08:06:21.824 256736 DEBUG nova.compute.manager [req-80bdd0b5-0e39-4e78-8dc1-25cc00566b64 req-5fb21a66-7c58-4cee-934d-4a74ceab45ba ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Processing event network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:06:22 compute-0 ceph-mon[75050]: pgmap v2021: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.574 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403582.5735373, c2e8da51-3b05-4a1c-a872-9b977bf7cdcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.575 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] VM Started (Lifecycle Event)
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.580 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.585 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.590 256736 INFO nova.virt.libvirt.driver [-] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Instance spawned successfully.
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.591 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.773 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.973 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:22 compute-0 nova_compute[256729]: 2025-11-29 08:06:22.978 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.078 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.078 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.079 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.080 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.081 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.081 256736 DEBUG nova.virt.libvirt.driver [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.088 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.088 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403582.5755904, c2e8da51-3b05-4a1c-a872-9b977bf7cdcd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.089 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] VM Paused (Lifecycle Event)
Nov 29 08:06:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 38 KiB/s wr, 6 op/s
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.121 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.126 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403582.5843391, c2e8da51-3b05-4a1c-a872-9b977bf7cdcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.126 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] VM Resumed (Lifecycle Event)
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.152 256736 INFO nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Took 11.63 seconds to spawn the instance on the hypervisor.
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.153 256736 DEBUG nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.154 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.163 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.196 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.215 256736 INFO nova.compute.manager [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Took 13.97 seconds to build instance.
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.230 256736 DEBUG oslo_concurrency.lockutils [None req-7cf1dcd1-784d-4246-91fc-138b90715b03 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.904 256736 DEBUG nova.compute.manager [req-6a60a3aa-7cb0-4e51-87a7-916b7ac78b4d req-e5b9fa3d-b1ed-4b2e-8b4f-0feed341ca14 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.905 256736 DEBUG oslo_concurrency.lockutils [req-6a60a3aa-7cb0-4e51-87a7-916b7ac78b4d req-e5b9fa3d-b1ed-4b2e-8b4f-0feed341ca14 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.905 256736 DEBUG oslo_concurrency.lockutils [req-6a60a3aa-7cb0-4e51-87a7-916b7ac78b4d req-e5b9fa3d-b1ed-4b2e-8b4f-0feed341ca14 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.905 256736 DEBUG oslo_concurrency.lockutils [req-6a60a3aa-7cb0-4e51-87a7-916b7ac78b4d req-e5b9fa3d-b1ed-4b2e-8b4f-0feed341ca14 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.906 256736 DEBUG nova.compute.manager [req-6a60a3aa-7cb0-4e51-87a7-916b7ac78b4d req-e5b9fa3d-b1ed-4b2e-8b4f-0feed341ca14 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] No waiting events found dispatching network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:23 compute-0 nova_compute[256729]: 2025-11-29 08:06:23.906 256736 WARNING nova.compute.manager [req-6a60a3aa-7cb0-4e51-87a7-916b7ac78b4d req-e5b9fa3d-b1ed-4b2e-8b4f-0feed341ca14 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received unexpected event network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 for instance with vm_state active and task_state None.
Nov 29 08:06:24 compute-0 ceph-mon[75050]: pgmap v2022: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 38 KiB/s wr, 6 op/s
Nov 29 08:06:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 36 KiB/s wr, 13 op/s
Nov 29 08:06:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:26 compute-0 sshd-session[295726]: Invalid user main from 143.14.121.41 port 51964
Nov 29 08:06:26 compute-0 ceph-mon[75050]: pgmap v2023: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 36 KiB/s wr, 13 op/s
Nov 29 08:06:26 compute-0 sshd-session[295726]: Connection closed by invalid user main 143.14.121.41 port 51964 [preauth]
Nov 29 08:06:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 27 KiB/s wr, 60 op/s
Nov 29 08:06:27 compute-0 nova_compute[256729]: 2025-11-29 08:06:27.775 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:27 compute-0 nova_compute[256729]: 2025-11-29 08:06:27.778 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:28 compute-0 ceph-mon[75050]: pgmap v2024: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 27 KiB/s wr, 60 op/s
Nov 29 08:06:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 75 op/s
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.487 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.488 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.507 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.577 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.578 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.587 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.588 256736 INFO nova.compute.claims [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:06:29 compute-0 nova_compute[256729]: 2025-11-29 08:06:29.721 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142814299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.173 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.184 256736 DEBUG nova.compute.provider_tree [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.206 256736 DEBUG nova.scheduler.client.report [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Nov 29 08:06:30 compute-0 ceph-mon[75050]: pgmap v2025: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 75 op/s
Nov 29 08:06:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4142814299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.251 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.252 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:06:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 08:06:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.346 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.348 256736 DEBUG nova.network.neutron [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.371 256736 DEBUG nova.compute.manager [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-changed-906ad477-03aa-4cfd-9485-d0308f5ce2f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.372 256736 DEBUG nova.compute.manager [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Refreshing instance network info cache due to event network-changed-906ad477-03aa-4cfd-9485-d0308f5ce2f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.373 256736 DEBUG oslo_concurrency.lockutils [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.373 256736 DEBUG oslo_concurrency.lockutils [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.375 256736 DEBUG nova.network.neutron [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Refreshing network info cache for port 906ad477-03aa-4cfd-9485-d0308f5ce2f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.384 256736 INFO nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.421 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.552 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.553 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.554 256736 INFO nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Creating image(s)
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.590 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.627 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.667 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.675 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.769 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.772 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.773 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.773 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.805 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.811 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 f2cbf4cd-582b-408f-92b1-6b70364babcf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:30 compute-0 nova_compute[256729]: 2025-11-29 08:06:30.934 256736 DEBUG nova.policy [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d7bf857ed854504b6f769bea1a63cc4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2c2f274b1f924edba19c49761e8636bb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:06:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 23 KiB/s wr, 89 op/s
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.192 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 f2cbf4cd-582b-408f-92b1-6b70364babcf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.249 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] resizing rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:06:31 compute-0 ceph-mon[75050]: osdmap e404: 3 total, 3 up, 3 in
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.356 256736 DEBUG nova.objects.instance [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'migration_context' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.377 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.377 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Ensure instance console log exists: /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.378 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.378 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:31 compute-0 nova_compute[256729]: 2025-11-29 08:06:31.379 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:31 compute-0 sshd-session[295734]: Invalid user HwHiAiUser from 143.14.121.41 port 56742
Nov 29 08:06:32 compute-0 ceph-mon[75050]: pgmap v2027: 305 pgs: 305 active+clean; 352 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 23 KiB/s wr, 89 op/s
Nov 29 08:06:32 compute-0 sshd-session[295734]: Connection closed by invalid user HwHiAiUser 143.14.121.41 port 56742 [preauth]
Nov 29 08:06:32 compute-0 nova_compute[256729]: 2025-11-29 08:06:32.580 256736 DEBUG nova.network.neutron [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Successfully created port: c112be9f-f94a-4fd7-bf2c-4f4614918d8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:06:32 compute-0 nova_compute[256729]: 2025-11-29 08:06:32.601 256736 DEBUG nova.network.neutron [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Updated VIF entry in instance network info cache for port 906ad477-03aa-4cfd-9485-d0308f5ce2f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:32 compute-0 nova_compute[256729]: 2025-11-29 08:06:32.602 256736 DEBUG nova.network.neutron [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Updating instance_info_cache with network_info: [{"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:32 compute-0 nova_compute[256729]: 2025-11-29 08:06:32.621 256736 DEBUG oslo_concurrency.lockutils [req-095ba528-53ea-46ec-a952-74e5cb5167e2 req-baa598e2-fe21-46d0-85cc-c44d8a87a109 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:32 compute-0 nova_compute[256729]: 2025-11-29 08:06:32.779 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 383 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Nov 29 08:06:33 compute-0 nova_compute[256729]: 2025-11-29 08:06:33.962 256736 DEBUG nova.network.neutron [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Successfully updated port: c112be9f-f94a-4fd7-bf2c-4f4614918d8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:06:33 compute-0 nova_compute[256729]: 2025-11-29 08:06:33.981 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:33 compute-0 nova_compute[256729]: 2025-11-29 08:06:33.981 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquired lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:33 compute-0 nova_compute[256729]: 2025-11-29 08:06:33.982 256736 DEBUG nova.network.neutron [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:06:34 compute-0 nova_compute[256729]: 2025-11-29 08:06:34.072 256736 DEBUG nova.compute.manager [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-changed-c112be9f-f94a-4fd7-bf2c-4f4614918d8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:34 compute-0 nova_compute[256729]: 2025-11-29 08:06:34.072 256736 DEBUG nova.compute.manager [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Refreshing instance network info cache due to event network-changed-c112be9f-f94a-4fd7-bf2c-4f4614918d8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:34 compute-0 nova_compute[256729]: 2025-11-29 08:06:34.073 256736 DEBUG oslo_concurrency.lockutils [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:34 compute-0 ceph-mon[75050]: pgmap v2028: 305 pgs: 305 active+clean; 383 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Nov 29 08:06:34 compute-0 ovn_controller[153383]: 2025-11-29T08:06:34Z|00054|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.5
Nov 29 08:06:34 compute-0 ovn_controller[153383]: 2025-11-29T08:06:34Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:bd:70:fa 10.100.0.5
Nov 29 08:06:34 compute-0 nova_compute[256729]: 2025-11-29 08:06:34.578 256736 DEBUG nova.network.neutron [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:34 compute-0 sshd-session[295924]: Invalid user guest from 143.14.121.41 port 35182
Nov 29 08:06:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 398 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 29 08:06:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:35 compute-0 sshd-session[295924]: Connection closed by invalid user guest 143.14.121.41 port 35182 [preauth]
Nov 29 08:06:35 compute-0 sudo[295926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:35 compute-0 sudo[295926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:35 compute-0 sudo[295926]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:35 compute-0 sudo[295951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:35 compute-0 sudo[295951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:35 compute-0 sudo[295951]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:35 compute-0 sudo[295976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:35 compute-0 sudo[295976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:35 compute-0 sudo[295976]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:35 compute-0 sudo[296001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:06:35 compute-0 sudo[296001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:06:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.621 256736 DEBUG nova.network.neutron [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating instance_info_cache with network_info: [{"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.653 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Releasing lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.654 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Instance network_info: |[{"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.655 256736 DEBUG oslo_concurrency.lockutils [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.656 256736 DEBUG nova.network.neutron [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Refreshing network info cache for port c112be9f-f94a-4fd7-bf2c-4f4614918d8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.661 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Start _get_guest_xml network_info=[{"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.676 256736 WARNING nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.683 256736 DEBUG nova.virt.libvirt.host [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.684 256736 DEBUG nova.virt.libvirt.host [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.689 256736 DEBUG nova.virt.libvirt.host [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.690 256736 DEBUG nova.virt.libvirt.host [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.691 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.692 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.693 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.693 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.693 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.694 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.694 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.695 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.695 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.696 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.696 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.696 256736 DEBUG nova.virt.hardware [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:06:35 compute-0 nova_compute[256729]: 2025-11-29 08:06:35.702 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:35 compute-0 sudo[296001]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:06:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:06:35 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:35 compute-0 sudo[296066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:35 compute-0 sudo[296066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:35 compute-0 sudo[296066]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:36 compute-0 sudo[296091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:36 compute-0 sudo[296091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:36 compute-0 sudo[296091]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:36 compute-0 sudo[296116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:36 compute-0 sudo[296116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:36 compute-0 sudo[296116]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240168143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:36 compute-0 sudo[296141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:06:36 compute-0 sudo[296141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.165 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.193 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.198 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:36 compute-0 ceph-mon[75050]: pgmap v2029: 305 pgs: 305 active+clean; 398 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Nov 29 08:06:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:36 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:36 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/240168143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.309 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.309 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.327 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.394 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.395 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.402 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.403 256736 INFO nova.compute.claims [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.583 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290107482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.649 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.653 256736 DEBUG nova.virt.libvirt.vif [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1938153464',display_name='tempest-SnapshotDataIntegrityTests-server-1938153464',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1938153464',id=25,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzueeqjQFGrOlbn4utB/WDt1HT2fTR9vZS4MlTRSfHyAqmh1iCrJR4YQMfNazhLwEtND2MN7Di+NQETm+Mveut1YrwZowy8OY9ggEZ70bUUWirP0dRn530bPgh4HmSc2A==',key_name='tempest-keypair-18056619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c2f274b1f924edba19c49761e8636bb',ramdisk_id='',reservation_id='r-ci340j2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-9894843',owner_user_name='tempest-SnapshotDataIntegrityTests-9894843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d7bf857ed854504b6f769bea1a63cc4',uuid=f2cbf4cd-582b-408f-92b1-6b70364babcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.653 256736 DEBUG nova.network.os_vif_util [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Converting VIF {"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.654 256736 DEBUG nova.network.os_vif_util [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=c112be9f-f94a-4fd7-bf2c-4f4614918d8f,network=Network(0d9be530-6530-495c-aa98-b2316438e1fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc112be9f-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.656 256736 DEBUG nova.objects.instance [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'pci_devices' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.672 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <uuid>f2cbf4cd-582b-408f-92b1-6b70364babcf</uuid>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <name>instance-00000019</name>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <nova:name>tempest-SnapshotDataIntegrityTests-server-1938153464</nova:name>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:06:35</nova:creationTime>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:user uuid="4d7bf857ed854504b6f769bea1a63cc4">tempest-SnapshotDataIntegrityTests-9894843-project-member</nova:user>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:project uuid="2c2f274b1f924edba19c49761e8636bb">tempest-SnapshotDataIntegrityTests-9894843</nova:project>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <nova:port uuid="c112be9f-f94a-4fd7-bf2c-4f4614918d8f">
Nov 29 08:06:36 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <system>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <entry name="serial">f2cbf4cd-582b-408f-92b1-6b70364babcf</entry>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <entry name="uuid">f2cbf4cd-582b-408f-92b1-6b70364babcf</entry>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </system>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <os>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   </os>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <features>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   </features>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/f2cbf4cd-582b-408f-92b1-6b70364babcf_disk">
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       </source>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/f2cbf4cd-582b-408f-92b1-6b70364babcf_disk.config">
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       </source>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:06:36 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:1e:cb:1b"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <target dev="tapc112be9f-f9"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/console.log" append="off"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <video>
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </video>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:06:36 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:06:36 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:06:36 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:06:36 compute-0 nova_compute[256729]: </domain>
Nov 29 08:06:36 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
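The block above is the complete guest XML nova generated for instance-00000019: q35 machine type, an RBD-backed virtio root disk, the config-drive cdrom on SATA, and a single OVS-backed virtio NIC. Once libvirt defines the domain, the same definition can be read back; a minimal sketch with the libvirt Python binding, assuming access to qemu:///system on the compute host:

    # Sketch: read back the domain XML that nova just generated.
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-00000019')  # name from the XML above
        print(dom.XMLDesc(0))                         # current definition
    finally:
        conn.close()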
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.672 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Preparing to wait for external event network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.673 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.673 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.673 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.673 256736 DEBUG nova.virt.libvirt.vif [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1938153464',display_name='tempest-SnapshotDataIntegrityTests-server-1938153464',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1938153464',id=25,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzueeqjQFGrOlbn4utB/WDt1HT2fTR9vZS4MlTRSfHyAqmh1iCrJR4YQMfNazhLwEtND2MN7Di+NQETm+Mveut1YrwZowy8OY9ggEZ70bUUWirP0dRn530bPgh4HmSc2A==',key_name='tempest-keypair-18056619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c2f274b1f924edba19c49761e8636bb',ramdisk_id='',reservation_id='r-ci340j2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-9894843',owner_user_name='tempest-SnapshotDataIntegrityTests-9894843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d7bf857ed854504b6f769bea1a63cc4',uuid=f2cbf4cd-582b-408f-92b1-6b70364babcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.674 256736 DEBUG nova.network.os_vif_util [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Converting VIF {"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.674 256736 DEBUG nova.network.os_vif_util [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=c112be9f-f94a-4fd7-bf2c-4f4614918d8f,network=Network(0d9be530-6530-495c-aa98-b2316438e1fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc112be9f-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.674 256736 DEBUG os_vif [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=c112be9f-f94a-4fd7-bf2c-4f4614918d8f,network=Network(0d9be530-6530-495c-aa98-b2316438e1fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc112be9f-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.675 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.675 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.676 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.679 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.679 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc112be9f-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.680 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc112be9f-f9, col_values=(('external_ids', {'iface-id': 'c112be9f-f94a-4fd7-bf2c-4f4614918d8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:cb:1b', 'vm-uuid': 'f2cbf4cd-582b-408f-92b1-6b70364babcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
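The two ovsdbapp commands above add the tap device to br-int and stamp the Interface row's external_ids with the Neutron port ID and MAC; a matching iface-id is what lets ovn-controller claim the port (see the binding message at the end of this excerpt). A sketch of the same two-command transaction via ovsdbapp; the OVSDB socket path is an assumption:

    # Sketch: replay the AddPortCommand + DbSetCommand transaction from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapc112be9f-f9', may_exist=True))
        txn.add(api.db_set('Interface', 'tapc112be9f-f9',
                           ('external_ids',
                            {'iface-id': 'c112be9f-f94a-4fd7-bf2c-4f4614918d8f',
                             'attached-mac': 'fa:16:3e:1e:cb:1b'})))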
Nov 29 08:06:36 compute-0 NetworkManager[48962]: <info>  [1764403596.6828] manager: (tapc112be9f-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.683 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.691 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.692 256736 INFO os_vif [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=c112be9f-f94a-4fd7-bf2c-4f4614918d8f,network=Network(0d9be530-6530-495c-aa98-b2316438e1fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc112be9f-f9')
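os_vif.plug() is the public entry point that dispatched to the 'ovs' plugin and produced the transaction above. A minimal sketch, reusing the vif object from the earlier conversion sketch; the InstanceInfo values are taken from the log:

    # Sketch: the os-vif entry point behind "Successfully plugged vif".
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()  # loads the registered plugins (ovs, linux_bridge, ...)
    info = InstanceInfo(uuid='f2cbf4cd-582b-408f-92b1-6b70364babcf',
                        name='instance-00000019')
    os_vif.plug(vif, info)  # routed to the 'ovs' plugin per vif.plugin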
Nov 29 08:06:36 compute-0 sudo[296141]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.765 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.766 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.766 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No VIF found with MAC fa:16:3e:1e:cb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.767 256736 INFO nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Using config drive
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.791 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:36 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 662a63e7-af4b-49d9-87ca-692b10f2f677 does not exist
Nov 29 08:06:36 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 80ebf339-6af3-406a-8443-30ee23c338b6 does not exist
Nov 29 08:06:36 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 6301b5b3-b3ed-4fa1-a01f-59d8079faf64 does not exist
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:06:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:06:36 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:36 compute-0 sudo[296280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:36 compute-0 sudo[296280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:36 compute-0 sudo[296280]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:36 compute-0 sudo[296305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:36 compute-0 sudo[296305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:36 compute-0 sudo[296305]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.949 256736 DEBUG nova.network.neutron [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updated VIF entry in instance network info cache for port c112be9f-f94a-4fd7-bf2c-4f4614918d8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:36 compute-0 nova_compute[256729]: 2025-11-29 08:06:36.950 256736 DEBUG nova.network.neutron [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating instance_info_cache with network_info: [{"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:37 compute-0 sudo[296330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.004 256736 DEBUG oslo_concurrency.lockutils [req-b55f3024-e5af-4eae-a5e2-63384b019d08 req-1b615ca3-ed9a-4478-a3be-c9eb43f6f8bc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:37 compute-0 sudo[296330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:37 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655346992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:37 compute-0 sudo[296330]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.026 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.033 256736 DEBUG nova.compute.provider_tree [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.059 256736 DEBUG nova.scheduler.client.report [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
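For reference, the schedulable capacity placement derives from each resource class in the inventory above is (total - reserved) * allocation_ratio. A quick check of that arithmetic (values copied from the log entry):

    # Sketch: capacity = (total - reserved) * allocation_ratio, truncated.
    inv = {'VCPU': (8, 0, 4.0), 'MEMORY_MB': (7680, 512, 1.0),
           'DISK_GB': (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, int((total - reserved) * ratio))
    # -> VCPU 32, MEMORY_MB 7168, DISK_GB 52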
Nov 29 08:06:37 compute-0 sudo[296357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:06:37 compute-0 sudo[296357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.101 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.102 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:06:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.160 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.160 256736 DEBUG nova.network.neutron [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.188 256736 INFO nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.214 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.263 256736 INFO nova.virt.block_device [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Booting with volume 4be6f041-5b7c-4a84-af46-a4c40439c008 at /dev/vda
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3290107482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:06:37 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1655346992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.461 256736 DEBUG os_brick.utils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.463 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.485 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.485 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[25248633-bb89-427a-be42-565ec735f386]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.488 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.503 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.504 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[428c4b47-9d7e-4c0c-b54e-511724cb6caa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.506 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.524 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.525 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[b646677c-39f2-4adb-a715-722d1ea2e005]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.527 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[a8186483-ac77-404e-b9ae-268cf41ac542]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.528 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:37 compute-0 podman[296423]: 2025-11-29 08:06:37.544566106 +0000 UTC m=+0.078112467 container create 55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.564 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.568 256736 DEBUG os_brick.initiator.connectors.lightos [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.569 256736 DEBUG os_brick.initiator.connectors.lightos [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.569 256736 DEBUG os_brick.initiator.connectors.lightos [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.569 256736 DEBUG os_brick.utils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] <== get_connector_properties: return (108ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
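get_connector_properties() is os-brick's probe of the initiator side: multipathd state, the iSCSI IQN from /etc/iscsi/initiatorname.iscsi, the NVMe host NQN, and the system UUID, all visible in the privsep replies above. A sketch of the same call, with the root_helper string copied from the log (it runs privileged commands, so it needs the rootwrap config to be in place):

    # Sketch: reproduce the connector-properties probe logged above.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    print(props['initiator'], props['nqn'])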
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.570 256736 DEBUG nova.virt.block_device [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updating existing volume attachment record: bdd0eb2b-fdf4-4447-baad-c3d9b009d752 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.587 256736 INFO nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Creating config drive at /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/disk.config
Nov 29 08:06:37 compute-0 systemd[1]: Started libpod-conmon-55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a.scope.
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.601 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjinmnbna execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:37 compute-0 podman[296423]: 2025-11-29 08:06:37.514090705 +0000 UTC m=+0.047637166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.633 256736 DEBUG nova.policy [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9664e420085d412aae898a6ec021b24f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb6854e99614af5b8df420841fde0db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:06:37 compute-0 podman[296423]: 2025-11-29 08:06:37.6527096 +0000 UTC m=+0.186255991 container init 55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chaplygin, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 08:06:37 compute-0 podman[296423]: 2025-11-29 08:06:37.66177502 +0000 UTC m=+0.195321381 container start 55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:06:37 compute-0 podman[296423]: 2025-11-29 08:06:37.665938765 +0000 UTC m=+0.199485186 container attach 55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chaplygin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:37 compute-0 nifty_chaplygin[296446]: 167 167
Nov 29 08:06:37 compute-0 systemd[1]: libpod-55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a.scope: Deactivated successfully.
Nov 29 08:06:37 compute-0 podman[296423]: 2025-11-29 08:06:37.671766506 +0000 UTC m=+0.205312887 container died 55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chaplygin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-739d39d4d07b485de5f951567b762f61524143ad929340d62cdd4c2bba313c38-merged.mount: Deactivated successfully.
Nov 29 08:06:37 compute-0 podman[296423]: 2025-11-29 08:06:37.725109878 +0000 UTC m=+0.258656239 container remove 55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:06:37 compute-0 systemd[1]: libpod-conmon-55f5976220ebdda1262679687f78179fc7ca99753f67c1d45dc1e3f5da977f4a.scope: Deactivated successfully.
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.743 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjinmnbna" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.771 256736 DEBUG nova.storage.rbd_utils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] rbd image f2cbf4cd-582b-408f-92b1-6b70364babcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.774 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/disk.config f2cbf4cd-582b-408f-92b1-6b70364babcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:37 compute-0 nova_compute[256729]: 2025-11-29 08:06:37.805 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 podman[296507]: 2025-11-29 08:06:38.014210797 +0000 UTC m=+0.119944172 container create 2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swanson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:06:38 compute-0 podman[296507]: 2025-11-29 08:06:37.931476124 +0000 UTC m=+0.037209489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.058 256736 DEBUG oslo_concurrency.processutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/disk.config f2cbf4cd-582b-408f-92b1-6b70364babcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.059 256736 INFO nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Deleting local config drive /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf/disk.config because it was imported into RBD.
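Config-drive flow above: nova builds the ISO locally with mkisofs (volume label config-2), imports it into the vms pool as <uuid>_disk.config, then deletes the local copy. The rbd_utils existence probes logged at 08:06:36.791 and 08:06:37.771 can be reproduced with the RBD Python bindings; a sketch, assuming python3-rados/python3-rbd and access to the client.openstack keyring:

    # Sketch: check whether the config-drive image exists in the 'vms' pool.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        name = 'f2cbf4cd-582b-408f-92b1-6b70364babcf_disk.config'
        print(name in rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
        cluster.shutdown()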
Nov 29 08:06:38 compute-0 systemd[1]: Started libpod-conmon-2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1.scope.
Nov 29 08:06:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a3a10daed8e59565e19dc41bab2b67a605e5168262ede243209ff94a348d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a3a10daed8e59565e19dc41bab2b67a605e5168262ede243209ff94a348d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a3a10daed8e59565e19dc41bab2b67a605e5168262ede243209ff94a348d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a3a10daed8e59565e19dc41bab2b67a605e5168262ede243209ff94a348d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a6a3a10daed8e59565e19dc41bab2b67a605e5168262ede243209ff94a348d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:38 compute-0 podman[296507]: 2025-11-29 08:06:38.150256771 +0000 UTC m=+0.255990146 container init 2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:06:38 compute-0 kernel: tapc112be9f-f9: entered promiscuous mode
Nov 29 08:06:38 compute-0 NetworkManager[48962]: <info>  [1764403598.1575] manager: (tapc112be9f-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 08:06:38 compute-0 podman[296507]: 2025-11-29 08:06:38.161659776 +0000 UTC m=+0.267393121 container start 2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:06:38 compute-0 ovn_controller[153383]: 2025-11-29T08:06:38Z|00231|binding|INFO|Claiming lport c112be9f-f94a-4fd7-bf2c-4f4614918d8f for this chassis.
Nov 29 08:06:38 compute-0 ovn_controller[153383]: 2025-11-29T08:06:38Z|00232|binding|INFO|c112be9f-f94a-4fd7-bf2c-4f4614918d8f: Claiming fa:16:3e:1e:cb:1b 10.100.0.4
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.163 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 podman[296507]: 2025-11-29 08:06:38.166069287 +0000 UTC m=+0.271802672 container attach 2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.173 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:cb:1b 10.100.0.4'], port_security=['fa:16:3e:1e:cb:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f2cbf4cd-582b-408f-92b1-6b70364babcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d9be530-6530-495c-aa98-b2316438e1fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c2f274b1f924edba19c49761e8636bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58284a6e-1181-4efe-a885-d8a09d336e99 fd4dfdbf-227c-4e95-b1ab-fa20aeef8912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f156bc90-0f03-49fe-bc45-8726c0e42606, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=c112be9f-f94a-4fd7-bf2c-4f4614918d8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.175 163655 INFO neutron.agent.ovn.metadata.agent [-] Port c112be9f-f94a-4fd7-bf2c-4f4614918d8f in datapath 0d9be530-6530-495c-aa98-b2316438e1fd bound to our chassis
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.176 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d9be530-6530-495c-aa98-b2316438e1fd
Nov 29 08:06:38 compute-0 podman[296528]: 2025-11-29 08:06:38.18536422 +0000 UTC m=+0.099968699 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:06:38 compute-0 podman[296527]: 2025-11-29 08:06:38.185410841 +0000 UTC m=+0.116167766 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd)
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.189 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e9077e42-77a5-4b1b-9989-4546796220d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.190 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d9be530-61 in ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:06:38 compute-0 ovn_controller[153383]: 2025-11-29T08:06:38Z|00233|binding|INFO|Setting lport c112be9f-f94a-4fd7-bf2c-4f4614918d8f ovn-installed in OVS
Nov 29 08:06:38 compute-0 ovn_controller[153383]: 2025-11-29T08:06:38Z|00234|binding|INFO|Setting lport c112be9f-f94a-4fd7-bf2c-4f4614918d8f up in Southbound
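Entries 00231 through 00234 from ovn_controller trace the normal claim sequence for a logical port: claim the lport for this chassis, set ovn-installed on the local OVS interface record, then flip the port up in the Southbound database. One way to verify the resulting binding from the chassis (a sketch; the ovn-sbctl invocation is an assumption about a typical deployment, not something shown in the log):

    # Query the Southbound Port_Binding row for the lport claimed above.
    # Assumes ovn-sbctl is present on the chassis and can reach the SB DB.
    import subprocess

    LPORT = "c112be9f-f94a-4fd7-bf2c-4f4614918d8f"
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # once the claim lands, the chassis column names compute-0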
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.189 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 podman[296524]: 2025-11-29 08:06:38.193213027 +0000 UTC m=+0.129302120 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.192 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d9be530-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.192 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dfafcf20-570f-4061-92bb-648bf44a971c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.195 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4cdff8c0-68fe-4a48-addf-0f50fc864566]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 systemd-machined[217781]: New machine qemu-25-instance-00000019.
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.215 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b024e2-9602-4bd5-a0bf-14954412e5d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.238 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d13e1089-79ad-49a6-bcf6-b39406d19bd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 systemd-udevd[296609]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:38 compute-0 NetworkManager[48962]: <info>  [1764403598.2511] device (tapc112be9f-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:38 compute-0 NetworkManager[48962]: <info>  [1764403598.2521] device (tapc112be9f-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3422555368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
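The audit entry shows the librados client client.openstack (from 192.168.122.10) issuing mon dump; the same monmap query can be reproduced from any host holding that keyring (a sketch assuming the ceph CLI and the credentials named in the log):

    # Same query the audit channel attributes to client.openstack.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "--id", "openstack", "mon", "dump", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    monmap = json.loads(raw)
    print(monmap["epoch"], [m["name"] for m in monmap["mons"]])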
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.279 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[3d45e3e7-0165-4ec0-9167-483d214a4c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 NetworkManager[48962]: <info>  [1764403598.2870] manager: (tap0d9be530-60): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.286 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9cc248-5634-432e-898e-8670953ba40a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.327 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[a76e6ad7-bf87-4bd9-8f25-42dacec17182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.330 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[5248a716-4922-459e-b6c8-31272a07afc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 NetworkManager[48962]: <info>  [1764403598.3614] device (tap0d9be530-60): carrier: link connected
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.368 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[c6af249e-9ca1-44cc-95dd-2df40307886d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.390 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[317b3fcc-d875-46e0-9bb0-147e265357b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d9be530-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:45:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591342, 'reachable_time': 38625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296639, 'error': None, 'target': 'ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.410 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a555de98-d414-4808-ad1c-cf6388c86424]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:455f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591342, 'tstamp': 591342}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296640, 'error': None, 'target': 'ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.428 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac547d7-29b7-42d3-8a44-351947a1959d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d9be530-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:45:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591342, 'reachable_time': 38625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296641, 'error': None, 'target': 'ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.468 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2007e37a-55cc-453a-af60-3eb5bda56909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.521 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f2dfb8bc-4f52-4a6b-be3b-aec10c838aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.523 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d9be530-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.523 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.524 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d9be530-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.568 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 NetworkManager[48962]: <info>  [1764403598.5692] manager: (tap0d9be530-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 08:06:38 compute-0 kernel: tap0d9be530-60: entered promiscuous mode
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.573 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.576 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d9be530-60, col_values=(('external_ids', {'iface-id': '9c3f688e-b00f-4b58-999f-eca278500698'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
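These ovsdbapp transactions are the standard plumbing for the metadata tap: drop any stale port from br-ex, add tap0d9be530-60 to br-int, and stamp the interface with the iface-id external-id that ovn-controller matches against Southbound logical ports. The same three steps expressed as direct ovs-vsctl calls (a sketch; the agent actually goes through the OVSDB IDL shown above, not the CLI):

    # Mirror of the three transactions logged above, via the ovs-vsctl CLI.
    import subprocess

    PORT = "tap0d9be530-60"
    IFACE_ID = "9c3f688e-b00f-4b58-999f-eca278500698"

    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT],
        ["ovs-vsctl", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}"],
    ):
        subprocess.run(cmd, check=True)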
Nov 29 08:06:38 compute-0 ovn_controller[153383]: 2025-11-29T08:06:38Z|00235|binding|INFO|Releasing lport 9c3f688e-b00f-4b58-999f-eca278500698 from this chassis (sb_readonly=0)
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.578 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.580 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.581 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d9be530-6530-495c-aa98-b2316438e1fd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d9be530-6530-495c-aa98-b2316438e1fd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.582 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[97ecc7f9-88a8-4671-b066-c0cb6bf2182f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.582 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-0d9be530-6530-495c-aa98-b2316438e1fd
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/0d9be530-6530-495c-aa98-b2316438e1fd.pid.haproxy
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 0d9be530-6530-495c-aa98-b2316438e1fd
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:06:38 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:38.583 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd', 'env', 'PROCESS_TAG=haproxy-0d9be530-6530-495c-aa98-b2316438e1fd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d9be530-6530-495c-aa98-b2316438e1fd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
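Together, the config dump and the rootwrap command show how the metadata agent materializes the proxy: write the per-network haproxy configuration, then exec haproxy inside the ovnmeta-<network-id> namespace. A minimal sketch of that launch step, with paths copied from the log (it omits the sudo/rootwrap indirection, and running it by hand would need root and an existing namespace):

    # Equivalent of the logged rootwrap invocation: start haproxy inside
    # the per-network metadata namespace, tagged for later kill scripts.
    import subprocess

    NETWORK = "0d9be530-6530-495c-aa98-b2316438e1fd"
    CONF = f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK}.conf"

    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{NETWORK}",
         "env", f"PROCESS_TAG=haproxy-{NETWORK}",
         "haproxy", "-f", CONF],
        check=True,
    )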
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.596 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.741 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.745 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.746 256736 INFO nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Creating image(s)
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.747 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.747 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Ensure instance console log exists: /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.748 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.748 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.749 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.750 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403598.7454312, f2cbf4cd-582b-408f-92b1-6b70364babcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.750 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] VM Started (Lifecycle Event)
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.772 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.777 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403598.745589, f2cbf4cd-582b-408f-92b1-6b70364babcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.777 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] VM Paused (Lifecycle Event)
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.801 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.805 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:38 compute-0 ceph-mon[75050]: pgmap v2030: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Nov 29 08:06:38 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3422555368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:38 compute-0 nova_compute[256729]: 2025-11-29 08:06:38.826 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] During sync_power_state the instance has a pending task (spawning). Skip.
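The pair "current DB power_state: 0, VM power_state: 3" reads directly against Nova's constants in nova/compute/power_state.py (0 = NOSTATE, 1 = RUNNING, 3 = PAUSED), and the manager deliberately refuses to reconcile while a task such as spawning is pending. A small illustrative sketch of that decision (the helper function is hypothetical, not Nova's actual code):

    # Power-state values as defined in nova/compute/power_state.py.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def should_sync(db_power_state, vm_power_state, task_state):
        """Hypothetical mirror of the skip logged above: never reconcile
        power state while the instance still has a pending task."""
        if task_state is not None:
            return False  # "During sync_power_state ... Skip."
        return db_power_state != vm_power_state

    print(should_sync(NOSTATE, PAUSED, "spawning"))  # False, as in the log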
Nov 29 08:06:38 compute-0 ovn_controller[153383]: 2025-11-29T08:06:38Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.5
Nov 29 08:06:38 compute-0 ovn_controller[153383]: 2025-11-29T08:06:38Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:bd:70:fa 10.100.0.5
Nov 29 08:06:38 compute-0 podman[296721]: 2025-11-29 08:06:38.951850073 +0000 UTC m=+0.053797905 container create 030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:06:38 compute-0 systemd[1]: Started libpod-conmon-030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491.scope.
Nov 29 08:06:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:39 compute-0 podman[296721]: 2025-11-29 08:06:38.924364715 +0000 UTC m=+0.026312557 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:06:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1dbc9cb883e66a936cd23585ede11c9716928dc77350d0fee7ec3db836e81fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:39 compute-0 podman[296721]: 2025-11-29 08:06:39.034076842 +0000 UTC m=+0.136024684 container init 030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 08:06:39 compute-0 podman[296721]: 2025-11-29 08:06:39.039350228 +0000 UTC m=+0.141298040 container start 030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.054 256736 DEBUG nova.network.neutron [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Successfully created port: be389aee-c934-4833-bcc3-3624a4a8e32f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:06:39 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [NOTICE]   (296748) : New worker (296752) forked
Nov 29 08:06:39 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [NOTICE]   (296748) : Loading success.
Nov 29 08:06:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.143 256736 DEBUG nova.compute.manager [req-137cf16e-bac6-4ba4-9f5b-a96b9c17bdb1 req-455cf3f8-8416-49dc-af97-f64a728caa07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.143 256736 DEBUG oslo_concurrency.lockutils [req-137cf16e-bac6-4ba4-9f5b-a96b9c17bdb1 req-455cf3f8-8416-49dc-af97-f64a728caa07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.143 256736 DEBUG oslo_concurrency.lockutils [req-137cf16e-bac6-4ba4-9f5b-a96b9c17bdb1 req-455cf3f8-8416-49dc-af97-f64a728caa07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.144 256736 DEBUG oslo_concurrency.lockutils [req-137cf16e-bac6-4ba4-9f5b-a96b9c17bdb1 req-455cf3f8-8416-49dc-af97-f64a728caa07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.144 256736 DEBUG nova.compute.manager [req-137cf16e-bac6-4ba4-9f5b-a96b9c17bdb1 req-455cf3f8-8416-49dc-af97-f64a728caa07 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Processing event network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.145 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.151 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403599.1512694, f2cbf4cd-582b-408f-92b1-6b70364babcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.152 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] VM Resumed (Lifecycle Event)
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.154 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.156 256736 INFO nova.virt.libvirt.driver [-] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Instance spawned successfully.
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.157 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.173 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.178 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.181 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.181 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.182 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.182 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.182 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.182 256736 DEBUG nova.virt.libvirt.driver [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.216 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:39 compute-0 relaxed_swanson[296549]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:06:39 compute-0 relaxed_swanson[296549]: --> relative data size: 1.0
Nov 29 08:06:39 compute-0 relaxed_swanson[296549]: --> All data devices are unavailable
Nov 29 08:06:39 compute-0 systemd[1]: libpod-2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1.scope: Deactivated successfully.
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.251 256736 INFO nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Took 8.70 seconds to spawn the instance on the hypervisor.
Nov 29 08:06:39 compute-0 systemd[1]: libpod-2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1.scope: Consumed 1.018s CPU time.
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.251 256736 DEBUG nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:39 compute-0 conmon[296549]: conmon 2b7f7c1f539d53257f77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1.scope/container/memory.events
Nov 29 08:06:39 compute-0 podman[296507]: 2025-11-29 08:06:39.253337134 +0000 UTC m=+1.359070479 container died 2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 08:06:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a6a3a10daed8e59565e19dc41bab2b67a605e5168262ede243209ff94a348d0-merged.mount: Deactivated successfully.
Nov 29 08:06:39 compute-0 podman[296507]: 2025-11-29 08:06:39.319561651 +0000 UTC m=+1.425294996 container remove 2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_swanson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 08:06:39 compute-0 systemd[1]: libpod-conmon-2b7f7c1f539d53257f77022a13f13c47a5601438712740e20801a049f4f2b6b1.scope: Deactivated successfully.
Nov 29 08:06:39 compute-0 sudo[296357]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:39 compute-0 ovn_controller[153383]: 2025-11-29T08:06:39Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:70:fa 10.100.0.5
Nov 29 08:06:39 compute-0 ovn_controller[153383]: 2025-11-29T08:06:39Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:70:fa 10.100.0.5
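The two pinctrl lines show ovn-controller answering the new instance's DHCP handshake natively, with no dnsmasq involved on the compute node. A small sketch that pairs OFFER/ACK lines per MAC to confirm a lease completed; the regex is an assumption fitted to the format above:

    import re

    # ovn-controller pinctrl lines, journald prefix already stripped.
    LINES = [
        "2025-11-29T08:06:39Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:70:fa 10.100.0.5",
        "2025-11-29T08:06:39Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:70:fa 10.100.0.5",
    ]

    DHCP_RE = re.compile(r"\|(DHCPOFFER|DHCPACK) ([0-9a-f:]{17}) (\S+)")

    leases = {}
    for line in LINES:
        m = DHCP_RE.search(line)
        if m:
            msg, mac, ip = m.groups()
            leases.setdefault(mac, set()).add((msg, ip))

    for mac, events in leases.items():
        kinds = {k for k, _ in events}
        # A completed exchange shows both the offer and the ack for the
        # same MAC.
        done = {"DHCPOFFER", "DHCPACK"} <= kinds
        print(mac, "complete" if done else "incomplete", sorted(events))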
Nov 29 08:06:39 compute-0 sudo[296783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:39 compute-0 sudo[296783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:39 compute-0 sudo[296783]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.493 256736 INFO nova.compute.manager [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Took 9.95 seconds to build instance.
Nov 29 08:06:39 compute-0 nova_compute[256729]: 2025-11-29 08:06:39.525 256736 DEBUG oslo_concurrency.lockutils [None req-f5d120df-578b-4676-a36f-ea7842ad054d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
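The lockutils release line records that the per-instance build lock was held for 10.037s, consistent with the 9.95s build time reported just above it. A sketch for flagging long-held locks from such lines; the regex and the 5-second threshold are illustrative choices, not anything nova provides:

    import re

    LINE = ('Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by '
            '"nova.compute.manager.ComputeManager.build_and_run_instance.'
            '<locals>._locked_do_build_and_run_instance" :: held 10.037s')

    HELD_RE = re.compile(r'Lock "([^"]+)" "released" by "([^"]+)" :: held ([\d.]+)s')

    m = HELD_RE.search(LINE)
    if m:
        name, owner, held = m.group(1), m.group(2), float(m.group(3))
        # Flag locks held longer than an arbitrary 5s budget; here the
        # hold time spans the whole build_and_run_instance call.
        if held > 5.0:
            print(f"slow lock {name}: {held:.3f}s in {owner}")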
Nov 29 08:06:39 compute-0 sudo[296808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:39 compute-0 sudo[296808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:39 compute-0 sudo[296808]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:39 compute-0 sudo[296833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:39 compute-0 sudo[296833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:39 compute-0 sudo[296833]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:39 compute-0 sudo[296858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:06:39 compute-0 sudo[296858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
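This sudo line appears to be cephadm's remote ceph-volume inventory call, whose JSON result is what the bold_hypatia container prints further down. A sketch of the same invocation via subprocess, with the wrapper path, fsid, and image digest copied from the log line above; it only works on a host with this exact cephadm deployment and sudo setup, so treat it as illustrative:

    import json
    import subprocess

    # The exact command from the sudo log line above: cephadm's copied-in
    # wrapper script, the pinned image digest, and the cluster fsid.
    CMD = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image",
        "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "14ff1f30-5059-58f1-9a23-69871bb275a1",
        "--", "lvm", "list", "--format", "json",
    ]

    result = subprocess.run(CMD, capture_output=True, text=True, check=True)
    osds = json.loads(result.stdout)  # dict keyed by OSD id ("0", "1", "2")
    for osd_id, lvs in sorted(osds.items()):
        print(f"osd.{osd_id}:", [lv["lv_path"] for lv in lvs])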
Nov 29 08:06:39 compute-0 ovn_controller[153383]: 2025-11-29T08:06:39Z|00236|memory|INFO|peak resident set size grew 51% in last 2515.2 seconds, from 16128 kB to 24416 kB
Nov 29 08:06:39 compute-0 ovn_controller[153383]: 2025-11-29T08:06:39Z|00237|memory|INFO|idl-cells-OVN_Southbound:10992 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:390 lflow-cache-entries-cache-matches:293 lflow-cache-size-KB:1609 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:678 ofctrl_installed_flow_usage-KB:494 ofctrl_sb_flow_ref_usage-KB:257
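ovn-controller's periodic memory report is a flat list of name:value fields, counts plus sizes suffixed -KB, and the growth message above checks out: (24416 - 16128) / 16128 is about 51%. A sketch that parses the report into a dict and totals the KB-denominated fields:

    REPORT = ("idl-cells-OVN_Southbound:10992 idl-cells-Open_vSwitch:984 "
              "if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 "
              "lflow-cache-entries-cache-expr:390 lflow-cache-entries-cache-matches:293 "
              "lflow-cache-size-KB:1609 local_datapath_usage-KB:3 "
              "ofctrl_desired_flow_usage-KB:678 ofctrl_installed_flow_usage-KB:494 "
              "ofctrl_sb_flow_ref_usage-KB:257")

    # Each field is "name:value"; names ending in -KB are sizes, the
    # rest are object counts.
    usage = {}
    for field in REPORT.split():
        name, _, value = field.rpartition(":")
        usage[name] = int(value)

    kb_total = sum(v for k, v in usage.items() if k.endswith("-KB"))
    print(f"tracked cache/flow memory: {kb_total} KB")
    print(max(usage, key=usage.get), "is the largest counter")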
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.095 256736 DEBUG nova.network.neutron [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Successfully updated port: be389aee-c934-4833-bcc3-3624a4a8e32f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.110 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.110 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquired lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.111 256736 DEBUG nova.network.neutron [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:06:40 compute-0 sshd-session[296221]: Invalid user git from 143.14.121.41 port 35192
Nov 29 08:06:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:40 compute-0 podman[296924]: 2025-11-29 08:06:40.178335491 +0000 UTC m=+0.049681992 container create 1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.202 256736 DEBUG nova.compute.manager [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-changed-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.203 256736 DEBUG nova.compute.manager [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Refreshing instance network info cache due to event network-changed-be389aee-c934-4833-bcc3-3624a4a8e32f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.204 256736 DEBUG oslo_concurrency.lockutils [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:40 compute-0 systemd[1]: Started libpod-conmon-1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4.scope.
Nov 29 08:06:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:40 compute-0 podman[296924]: 2025-11-29 08:06:40.248825316 +0000 UTC m=+0.120171837 container init 1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:06:40 compute-0 podman[296924]: 2025-11-29 08:06:40.156651603 +0000 UTC m=+0.027998114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:40 compute-0 podman[296924]: 2025-11-29 08:06:40.25586232 +0000 UTC m=+0.127208821 container start 1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:06:40 compute-0 podman[296924]: 2025-11-29 08:06:40.258887894 +0000 UTC m=+0.130234395 container attach 1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:06:40 compute-0 affectionate_meitner[296940]: 167 167
Nov 29 08:06:40 compute-0 systemd[1]: libpod-1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4.scope: Deactivated successfully.
Nov 29 08:06:40 compute-0 conmon[296940]: conmon 1db2761f0d591bbd611c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4.scope/container/memory.events
Nov 29 08:06:40 compute-0 podman[296924]: 2025-11-29 08:06:40.27177732 +0000 UTC m=+0.143123831 container died 1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 08:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb8e721283194739c39c99b6d9b3c1db5e3285f6863ff297dfa721b5f72db636-merged.mount: Deactivated successfully.
Nov 29 08:06:40 compute-0 podman[296924]: 2025-11-29 08:06:40.307690331 +0000 UTC m=+0.179036832 container remove 1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:06:40 compute-0 systemd[1]: libpod-conmon-1db2761f0d591bbd611cc4233cb00a790361f80f1763a06352751b00e23062c4.scope: Deactivated successfully.
Nov 29 08:06:40 compute-0 podman[296964]: 2025-11-29 08:06:40.512258246 +0000 UTC m=+0.042586906 container create ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:06:40 compute-0 systemd[1]: Started libpod-conmon-ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6.scope.
Nov 29 08:06:40 compute-0 sshd-session[296221]: Connection closed by invalid user git 143.14.121.41 port 35192 [preauth]
Nov 29 08:06:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baea5f12f7547b0114fa6399ebbc4db192f4f1d59ab4084fcabc97ce05a1bb7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baea5f12f7547b0114fa6399ebbc4db192f4f1d59ab4084fcabc97ce05a1bb7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baea5f12f7547b0114fa6399ebbc4db192f4f1d59ab4084fcabc97ce05a1bb7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baea5f12f7547b0114fa6399ebbc4db192f4f1d59ab4084fcabc97ce05a1bb7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
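These xfs messages fire because the overlay mounts evidently lack the XFS bigtime feature, so inode timestamps top out at the signed 32-bit boundary the kernel prints as 0x7fffffff. A two-line check of that figure:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit second count since the epoch.
    limit = 0x7FFFFFFF
    print(limit)                                         # 2147483647
    print(datetime.fromtimestamp(limit, timezone.utc))   # 2038-01-19 03:14:07+00:00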
Nov 29 08:06:40 compute-0 podman[296964]: 2025-11-29 08:06:40.494656881 +0000 UTC m=+0.024985551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:40 compute-0 nova_compute[256729]: 2025-11-29 08:06:40.599 256736 DEBUG nova.network.neutron [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:06:40 compute-0 podman[296964]: 2025-11-29 08:06:40.791569925 +0000 UTC m=+0.321898585 container init ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:06:40 compute-0 podman[296964]: 2025-11-29 08:06:40.805705505 +0000 UTC m=+0.336034175 container start ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:06:40 compute-0 podman[296964]: 2025-11-29 08:06:40.812150313 +0000 UTC m=+0.342479013 container attach ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:06:40 compute-0 ceph-mon[75050]: pgmap v2031: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Nov 29 08:06:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 935 KiB/s rd, 2.0 MiB/s wr, 105 op/s
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.306 256736 DEBUG nova.compute.manager [req-390e818e-5faa-4e65-9611-5c699f45c84e req-d5a27323-6feb-4571-a5ac-90821eeffd88 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.306 256736 DEBUG oslo_concurrency.lockutils [req-390e818e-5faa-4e65-9611-5c699f45c84e req-d5a27323-6feb-4571-a5ac-90821eeffd88 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.307 256736 DEBUG oslo_concurrency.lockutils [req-390e818e-5faa-4e65-9611-5c699f45c84e req-d5a27323-6feb-4571-a5ac-90821eeffd88 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.307 256736 DEBUG oslo_concurrency.lockutils [req-390e818e-5faa-4e65-9611-5c699f45c84e req-d5a27323-6feb-4571-a5ac-90821eeffd88 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.307 256736 DEBUG nova.compute.manager [req-390e818e-5faa-4e65-9611-5c699f45c84e req-d5a27323-6feb-4571-a5ac-90821eeffd88 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] No waiting events found dispatching network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.307 256736 WARNING nova.compute.manager [req-390e818e-5faa-4e65-9611-5c699f45c84e req-d5a27323-6feb-4571-a5ac-90821eeffd88 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received unexpected event network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f for instance with vm_state active and task_state None.
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.578 256736 DEBUG nova.network.neutron [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updating instance_info_cache with network_info: [{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
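The cache update embeds the port's whole network_info as JSON, so the interesting fields (device name, MAC, fixed IP, plug state) can be recovered directly. A sketch against a trimmed copy of the payload above:

    import json

    # Trimmed from the network_info logged above; only the fields used
    # here are kept, values verbatim from the log.
    PAYLOAD = '''[{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f",
      "address": "fa:16:3e:de:44:c8",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]},
      "devname": "tapbe389aee-c9", "active": false}]'''

    for vif in json.loads(PAYLOAD):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        # "active": false while the port is still being plugged.
        state = "active" if vif["active"] else "plugging"
        print(vif["devname"], vif["address"], ips, state)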
Nov 29 08:06:41 compute-0 bold_hypatia[296981]: {
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:     "0": [
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:         {
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "devices": [
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "/dev/loop3"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             ],
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_name": "ceph_lv0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_size": "21470642176",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "name": "ceph_lv0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "tags": {
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cluster_name": "ceph",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.crush_device_class": "",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.encrypted": "0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osd_id": "0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.type": "block",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.vdo": "0"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             },
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "type": "block",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "vg_name": "ceph_vg0"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:         }
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:     ],
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:     "1": [
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:         {
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "devices": [
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "/dev/loop4"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             ],
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_name": "ceph_lv1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_size": "21470642176",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "name": "ceph_lv1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "tags": {
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cluster_name": "ceph",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.crush_device_class": "",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.encrypted": "0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osd_id": "1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.type": "block",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.vdo": "0"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             },
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "type": "block",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "vg_name": "ceph_vg1"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:         }
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:     ],
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:     "2": [
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:         {
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "devices": [
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "/dev/loop5"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             ],
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_name": "ceph_lv2",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_size": "21470642176",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "name": "ceph_lv2",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "tags": {
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.cluster_name": "ceph",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.crush_device_class": "",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.encrypted": "0",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osd_id": "2",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.type": "block",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                 "ceph.vdo": "0"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             },
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "type": "block",
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:             "vg_name": "ceph_vg2"
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:         }
Nov 29 08:06:41 compute-0 bold_hypatia[296981]:     ]
Nov 29 08:06:41 compute-0 bold_hypatia[296981]: }
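journald wraps each stdout line of the container in its own record, so the ceph-volume JSON above arrives split across dozens of bold_hypatia lines. A sketch that strips the wrapper and reassembles it, shown against a shortened sample; the prefix regex is fitted to the lines above:

    import json
    import re

    # journald prefixes every stdout line with "MMM DD HH:MM:SS host name[pid]: ".
    PREFIX_RE = re.compile(r"^.*? bold_hypatia\[\d+\]: ")

    def reassemble(lines):
        body = "\n".join(PREFIX_RE.sub("", ln) for ln in lines)
        return json.loads(body)

    # Shortened sample; the real payload spans the whole block above.
    sample = [
        'Nov 29 08:06:41 compute-0 bold_hypatia[296981]: {',
        'Nov 29 08:06:41 compute-0 bold_hypatia[296981]:     "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",',
        'Nov 29 08:06:41 compute-0 bold_hypatia[296981]:                "devices": ["/dev/loop3"]}]',
        'Nov 29 08:06:41 compute-0 bold_hypatia[296981]: }',
    ]
    for osd_id, lvs in sorted(reassemble(sample).items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']}")

On the full block this yields the three OSDs (osd.0 through osd.2) on /dev/loop3, /dev/loop4 and /dev/loop5.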
Nov 29 08:06:41 compute-0 systemd[1]: libpod-ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6.scope: Deactivated successfully.
Nov 29 08:06:41 compute-0 podman[296964]: 2025-11-29 08:06:41.639190337 +0000 UTC m=+1.169518997 container died ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hypatia, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:06:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-baea5f12f7547b0114fa6399ebbc4db192f4f1d59ab4084fcabc97ce05a1bb7f-merged.mount: Deactivated successfully.
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.689 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.692 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Releasing lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.692 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Instance network_info: |[{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.693 256736 DEBUG oslo_concurrency.lockutils [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.693 256736 DEBUG nova.network.neutron [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Refreshing network info cache for port be389aee-c934-4833-bcc3-3624a4a8e32f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:41 compute-0 podman[296964]: 2025-11-29 08:06:41.694457992 +0000 UTC m=+1.224786652 container remove ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.697 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Start _get_guest_xml network_info=[{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4be6f041-5b7c-4a84-af46-a4c40439c008', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4be6f041-5b7c-4a84-af46-a4c40439c008', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'cde72883-eb73-406e-8301-a92fe1527a26', 'attached_at': '', 'detached_at': '', 'volume_id': '4be6f041-5b7c-4a84-af46-a4c40439c008', 'serial': '4be6f041-5b7c-4a84-af46-a4c40439c008'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': 'bdd0eb2b-fdf4-4447-baad-c3d9b009d752', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.713 256736 WARNING nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:06:41 compute-0 systemd[1]: libpod-conmon-ad09d4efbd155ae0a356e13705abd2e9031a6a480782c327462144775c04c3b6.scope: Deactivated successfully.
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.719 256736 DEBUG nova.virt.libvirt.host [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.719 256736 DEBUG nova.virt.libvirt.host [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.723 256736 DEBUG nova.virt.libvirt.host [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.723 256736 DEBUG nova.virt.libvirt.host [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.723 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.724 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.724 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.724 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.725 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.725 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.725 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.725 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.726 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.726 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:06:41 compute-0 sudo[296858]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.726 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.727 256736 DEBUG nova.virt.hardware [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
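The topology walk above reduces to enumerating (sockets, cores, threads) triples whose product equals the vCPU count, bounded by the logged 65536 limits; with one vCPU the only candidate is 1:1:1. A simplified sketch of that enumeration (not nova's actual code, which also applies flavor/image preference ordering):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Enumerate (sockets, cores, threads) with product == vcpus.

        Simplified from what nova.virt.hardware logs above; the real
        code also sorts candidates by preference.
        """
        tops = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            per_socket = vcpus // s
            for c in range(1, min(per_socket, max_cores) + 1):
                if per_socket % c:
                    continue
                t = per_socket // c
                if t <= max_threads:
                    tops.append((s, c, t))
        return tops

    print(possible_topologies(1))  # [(1, 1, 1)], matching "Got 1 possible topologies"
    print(possible_topologies(4))  # six candidates, e.g. (1, 2, 2) and (4, 1, 1)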
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.756 256736 DEBUG nova.storage.rbd_utils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image cde72883-eb73-406e-8301-a92fe1527a26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:41 compute-0 nova_compute[256729]: 2025-11-29 08:06:41.760 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:41 compute-0 sudo[297010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:41 compute-0 sudo[297010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:41 compute-0 sudo[297010]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:41 compute-0 sudo[297047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:06:41 compute-0 sudo[297047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:41 compute-0 sudo[297047]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:41 compute-0 sudo[297073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:41 compute-0 sudo[297073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:41 compute-0 sudo[297073]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:41 compute-0 sudo[297116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:06:41 compute-0 sudo[297116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:06:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128005819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:42 compute-0 nova_compute[256729]: 2025-11-29 08:06:42.193 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
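oslo's processutils wraps the ceph CLI call and logs the wall-clock duration, 0.432s here, most of it the monitor round-trip. A sketch reproducing that invocation and timing; it needs a reachable cluster and the client.openstack keyring, so treat it as illustrative:

    import subprocess
    import time

    # The same command oslo_concurrency.processutils ran and timed above.
    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    start = time.monotonic()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.monotonic() - start
    # Mirrors the processutils log format seen in the line above.
    print(f'CMD "{" ".join(cmd)}" returned: {proc.returncode} in {elapsed:.3f}s')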
Nov 29 08:06:42 compute-0 podman[297182]: 2025-11-29 08:06:42.321059115 +0000 UTC m=+0.050460673 container create c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bassi, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:06:42 compute-0 systemd[1]: Started libpod-conmon-c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7.scope.
Nov 29 08:06:42 compute-0 podman[297182]: 2025-11-29 08:06:42.296289801 +0000 UTC m=+0.025691439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:42 compute-0 podman[297182]: 2025-11-29 08:06:42.418695159 +0000 UTC m=+0.148096767 container init c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bassi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:06:42 compute-0 podman[297182]: 2025-11-29 08:06:42.431365709 +0000 UTC m=+0.160767297 container start c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bassi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:42 compute-0 infallible_bassi[297199]: 167 167
Nov 29 08:06:42 compute-0 podman[297182]: 2025-11-29 08:06:42.438061763 +0000 UTC m=+0.167463371 container attach c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bassi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:06:42 compute-0 systemd[1]: libpod-c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7.scope: Deactivated successfully.
Nov 29 08:06:42 compute-0 podman[297182]: 2025-11-29 08:06:42.439598006 +0000 UTC m=+0.168999614 container died c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe329aa3ba171ddfd5e6449e04ef0390f5265b0b6013362a97cc9b18e571b7fc-merged.mount: Deactivated successfully.
Nov 29 08:06:42 compute-0 podman[297182]: 2025-11-29 08:06:42.499142459 +0000 UTC m=+0.228544057 container remove c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:06:42 compute-0 systemd[1]: libpod-conmon-c8823590aef3b8adf8f00708b1d9f453f36d4e1240bfb13955180203037284c7.scope: Deactivated successfully.
Nov 29 08:06:42 compute-0 podman[297222]: 2025-11-29 08:06:42.758081766 +0000 UTC m=+0.071749932 container create d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:06:42 compute-0 nova_compute[256729]: 2025-11-29 08:06:42.784 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:42 compute-0 systemd[1]: Started libpod-conmon-d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5.scope.
Nov 29 08:06:42 compute-0 podman[297222]: 2025-11-29 08:06:42.728569741 +0000 UTC m=+0.042237917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:06:42 compute-0 ceph-mon[75050]: pgmap v2032: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 935 KiB/s rd, 2.0 MiB/s wr, 105 op/s
Nov 29 08:06:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/128005819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:06:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c839a0cd2205cb16198a16f03c7ca07f69ca2aa96c299c33be83c20df988719/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c839a0cd2205cb16198a16f03c7ca07f69ca2aa96c299c33be83c20df988719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c839a0cd2205cb16198a16f03c7ca07f69ca2aa96c299c33be83c20df988719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c839a0cd2205cb16198a16f03c7ca07f69ca2aa96c299c33be83c20df988719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:06:42 compute-0 podman[297222]: 2025-11-29 08:06:42.870982271 +0000 UTC m=+0.184650417 container init d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jepsen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:06:42 compute-0 podman[297222]: 2025-11-29 08:06:42.88072906 +0000 UTC m=+0.194397196 container start d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:06:42 compute-0 podman[297222]: 2025-11-29 08:06:42.88363974 +0000 UTC m=+0.197307866 container attach d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:06:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.496 256736 DEBUG nova.network.neutron [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updated VIF entry in instance network info cache for port be389aee-c934-4833-bcc3-3624a4a8e32f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.497 256736 DEBUG nova.network.neutron [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updating instance_info_cache with network_info: [{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:43 compute-0 sshd-session[296986]: Invalid user git from 143.14.121.41 port 35196
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.805 256736 DEBUG nova.virt.libvirt.vif [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-632518534',display_name='tempest-TestVolumeBootPattern-server-632518534',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-632518534',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-xexrwmxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:37Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=cde72883-eb73-406e-8301-a92fe1527a26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.805 256736 DEBUG nova.network.os_vif_util [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.807 256736 DEBUG nova.network.os_vif_util [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:44:c8,bridge_name='br-int',has_traffic_filtering=True,id=be389aee-c934-4833-bcc3-3624a4a8e32f,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe389aee-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.810 256736 DEBUG nova.objects.instance [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'pci_devices' on Instance uuid cde72883-eb73-406e-8301-a92fe1527a26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.830 256736 DEBUG nova.compute.manager [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-changed-c112be9f-f94a-4fd7-bf2c-4f4614918d8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.831 256736 DEBUG nova.compute.manager [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Refreshing instance network info cache due to event network-changed-c112be9f-f94a-4fd7-bf2c-4f4614918d8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.832 256736 DEBUG oslo_concurrency.lockutils [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.833 256736 DEBUG oslo_concurrency.lockutils [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.834 256736 DEBUG nova.network.neutron [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Refreshing network info cache for port c112be9f-f94a-4fd7-bf2c-4f4614918d8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.851 256736 DEBUG oslo_concurrency.lockutils [req-c5b7ee47-6f46-4673-9708-1dd4ec65e122 req-8f5104cd-3beb-48ee-a0a0-a4938e9d562e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.862 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <uuid>cde72883-eb73-406e-8301-a92fe1527a26</uuid>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <name>instance-0000001a</name>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <nova:name>tempest-TestVolumeBootPattern-server-632518534</nova:name>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:06:41</nova:creationTime>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:user uuid="9664e420085d412aae898a6ec021b24f">tempest-TestVolumeBootPattern-776329285-project-member</nova:user>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:project uuid="dfb6854e99614af5b8df420841fde0db">tempest-TestVolumeBootPattern-776329285</nova:project>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <nova:port uuid="be389aee-c934-4833-bcc3-3624a4a8e32f">
Nov 29 08:06:43 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <system>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <entry name="serial">cde72883-eb73-406e-8301-a92fe1527a26</entry>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <entry name="uuid">cde72883-eb73-406e-8301-a92fe1527a26</entry>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </system>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <os>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   </os>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <features>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   </features>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/cde72883-eb73-406e-8301-a92fe1527a26_disk.config">
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       </source>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-4be6f041-5b7c-4a84-af46-a4c40439c008">
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       </source>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:06:43 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <serial>4be6f041-5b7c-4a84-af46-a4c40439c008</serial>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:de:44:c8"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <target dev="tapbe389aee-c9"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/console.log" append="off"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <video>
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </video>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:06:43 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:06:43 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:06:43 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:06:43 compute-0 nova_compute[256729]: </domain>
Nov 29 08:06:43 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.871 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Preparing to wait for external event network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.872 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.873 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.873 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.879 256736 DEBUG nova.virt.libvirt.vif [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-632518534',display_name='tempest-TestVolumeBootPattern-server-632518534',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-632518534',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-xexrwmxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:37Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=cde72883-eb73-406e-8301-a92fe1527a26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.880 256736 DEBUG nova.network.os_vif_util [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.881 256736 DEBUG nova.network.os_vif_util [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:44:c8,bridge_name='br-int',has_traffic_filtering=True,id=be389aee-c934-4833-bcc3-3624a4a8e32f,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe389aee-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.882 256736 DEBUG os_vif [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:44:c8,bridge_name='br-int',has_traffic_filtering=True,id=be389aee-c934-4833-bcc3-3624a4a8e32f,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe389aee-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.883 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.883 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.884 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.888 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.889 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe389aee-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.890 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe389aee-c9, col_values=(('external_ids', {'iface-id': 'be389aee-c934-4833-bcc3-3624a4a8e32f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:44:c8', 'vm-uuid': 'cde72883-eb73-406e-8301-a92fe1527a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.892 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:43 compute-0 NetworkManager[48962]: <info>  [1764403603.8931] manager: (tapbe389aee-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.895 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.903 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.904 256736 INFO os_vif [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:44:c8,bridge_name='br-int',has_traffic_filtering=True,id=be389aee-c934-4833-bcc3-3624a4a8e32f,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe389aee-c9')
Nov 29 08:06:43 compute-0 nice_jepsen[297238]: {
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "osd_id": 2,
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "type": "bluestore"
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:     },
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "osd_id": 1,
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "type": "bluestore"
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:     },
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "osd_id": 0,
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:         "type": "bluestore"
Nov 29 08:06:43 compute-0 nice_jepsen[297238]:     }
Nov 29 08:06:43 compute-0 nice_jepsen[297238]: }
Nov 29 08:06:43 compute-0 systemd[1]: libpod-d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5.scope: Deactivated successfully.
Nov 29 08:06:43 compute-0 systemd[1]: libpod-d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5.scope: Consumed 1.068s CPU time.
Nov 29 08:06:43 compute-0 podman[297222]: 2025-11-29 08:06:43.949499325 +0000 UTC m=+1.263167531 container died d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c839a0cd2205cb16198a16f03c7ca07f69ca2aa96c299c33be83c20df988719-merged.mount: Deactivated successfully.
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.986 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.987 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.987 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] No VIF found with MAC fa:16:3e:de:44:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:06:43 compute-0 nova_compute[256729]: 2025-11-29 08:06:43.988 256736 INFO nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Using config drive
Nov 29 08:06:44 compute-0 nova_compute[256729]: 2025-11-29 08:06:44.021 256736 DEBUG nova.storage.rbd_utils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image cde72883-eb73-406e-8301-a92fe1527a26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:44 compute-0 podman[297222]: 2025-11-29 08:06:44.033840193 +0000 UTC m=+1.347508339 container remove d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jepsen, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:06:44 compute-0 systemd[1]: libpod-conmon-d78c07651f639d1fc477b1c066c53d9f7fa5c8c60ce1681cf097dae9ac9eeaa5.scope: Deactivated successfully.
Nov 29 08:06:44 compute-0 sudo[297116]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:06:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:06:44 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:44 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev feeaa825-4b41-4798-9c43-be56d0127f0a does not exist
Nov 29 08:06:44 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4af51af4-0851-476f-ad96-c746737f2580 does not exist
Nov 29 08:06:44 compute-0 sudo[297304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:06:44 compute-0 sudo[297304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:44 compute-0 sudo[297304]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:44 compute-0 sudo[297329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:06:44 compute-0 sudo[297329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:06:44 compute-0 sudo[297329]: pam_unix(sudo:session): session closed for user root
Nov 29 08:06:44 compute-0 sshd-session[296986]: Connection closed by invalid user git 143.14.121.41 port 35196 [preauth]
Nov 29 08:06:44 compute-0 ceph-mon[75050]: pgmap v2033: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Nov 29 08:06:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:44 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:06:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 905 KiB/s wr, 147 op/s
Nov 29 08:06:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:45 compute-0 nova_compute[256729]: 2025-11-29 08:06:45.567 256736 INFO nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Creating config drive at /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/disk.config
Nov 29 08:06:45 compute-0 nova_compute[256729]: 2025-11-29 08:06:45.586 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8b7bc6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:45 compute-0 nova_compute[256729]: 2025-11-29 08:06:45.724 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8b7bc6z" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:45 compute-0 nova_compute[256729]: 2025-11-29 08:06:45.767 256736 DEBUG nova.storage.rbd_utils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] rbd image cde72883-eb73-406e-8301-a92fe1527a26_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:06:45 compute-0 nova_compute[256729]: 2025-11-29 08:06:45.779 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/disk.config cde72883-eb73-406e-8301-a92fe1527a26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.096 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.097 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.097 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.097 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.098 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.099 256736 INFO nova.compute.manager [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Terminating instance
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.101 256736 DEBUG nova.compute.manager [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:06:46 compute-0 ceph-mon[75050]: pgmap v2034: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 905 KiB/s wr, 147 op/s
Nov 29 08:06:46 compute-0 kernel: tap906ad477-03 (unregistering): left promiscuous mode
Nov 29 08:06:46 compute-0 NetworkManager[48962]: <info>  [1764403606.3853] device (tap906ad477-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:06:46 compute-0 ovn_controller[153383]: 2025-11-29T08:06:46Z|00238|binding|INFO|Releasing lport 906ad477-03aa-4cfd-9485-d0308f5ce2f1 from this chassis (sb_readonly=0)
Nov 29 08:06:46 compute-0 ovn_controller[153383]: 2025-11-29T08:06:46Z|00239|binding|INFO|Setting lport 906ad477-03aa-4cfd-9485-d0308f5ce2f1 down in Southbound
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.411 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 ovn_controller[153383]: 2025-11-29T08:06:46Z|00240|binding|INFO|Removing iface tap906ad477-03 ovn-installed in OVS
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.433 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:46.430 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:70:fa 10.100.0.5'], port_security=['fa:16:3e:bd:70:fa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'c2e8da51-3b05-4a1c-a872-9b977bf7cdcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '00f4c1f7964a4e5fbe3db5be46b9676e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '377cba5c-a444-4939-9e65-f24eadd0abbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=357216b9-f046-4273-a2c2-2385abe848ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=906ad477-03aa-4cfd-9485-d0308f5ce2f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:46.432 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 906ad477-03aa-4cfd-9485-d0308f5ce2f1 in datapath 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c unbound from our chassis
Nov 29 08:06:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:46.434 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:06:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:46.435 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[205a37cc-7448-458f-81fa-75c55d9a0384]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:46.435 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c namespace which is not needed anymore
Nov 29 08:06:46 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 29 08:06:46 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 16.359s CPU time.
Nov 29 08:06:46 compute-0 systemd-machined[217781]: Machine qemu-24-instance-00000018 terminated.
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.460 256736 DEBUG oslo_concurrency.processutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/disk.config cde72883-eb73-406e-8301-a92fe1527a26_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.461 256736 INFO nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Deleting local config drive /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26/disk.config because it was imported into RBD.
Nov 29 08:06:46 compute-0 kernel: tapbe389aee-c9: entered promiscuous mode
Nov 29 08:06:46 compute-0 systemd-udevd[297401]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:06:46 compute-0 NetworkManager[48962]: <info>  [1764403606.5216] manager: (tapbe389aee-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 08:06:46 compute-0 ovn_controller[153383]: 2025-11-29T08:06:46Z|00241|binding|INFO|Claiming lport be389aee-c934-4833-bcc3-3624a4a8e32f for this chassis.
Nov 29 08:06:46 compute-0 ovn_controller[153383]: 2025-11-29T08:06:46Z|00242|binding|INFO|be389aee-c934-4833-bcc3-3624a4a8e32f: Claiming fa:16:3e:de:44:c8 10.100.0.11
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.524 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 NetworkManager[48962]: <info>  [1764403606.5325] device (tapbe389aee-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:06:46 compute-0 NetworkManager[48962]: <info>  [1764403606.5340] device (tapbe389aee-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:06:46 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:46.537 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:44:c8 10.100.0.11'], port_security=['fa:16:3e:de:44:c8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cde72883-eb73-406e-8301-a92fe1527a26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '284fde66-e9d8-4738-b856-2e805436581e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=be389aee-c934-4833-bcc3-3624a4a8e32f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.548 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 ovn_controller[153383]: 2025-11-29T08:06:46Z|00243|binding|INFO|Setting lport be389aee-c934-4833-bcc3-3624a4a8e32f ovn-installed in OVS
Nov 29 08:06:46 compute-0 ovn_controller[153383]: 2025-11-29T08:06:46Z|00244|binding|INFO|Setting lport be389aee-c934-4833-bcc3-3624a4a8e32f up in Southbound
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.549 256736 INFO nova.virt.libvirt.driver [-] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Instance destroyed successfully.
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.549 256736 DEBUG nova.objects.instance [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lazy-loading 'resources' on Instance uuid c2e8da51-3b05-4a1c-a872-9b977bf7cdcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.550 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.553 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 systemd-machined[217781]: New machine qemu-26-instance-0000001a.
Nov 29 08:06:46 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.571 256736 DEBUG nova.virt.libvirt.vif [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1719257085',display_name='tempest-TransferEncryptedVolumeTest-server-1719257085',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1719257085',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF/ioI52WgdhEBAbeM3RhpZNbUdNn18Xja5uOnO3NOZPUzsKxYrvXBByAxA/Dl5IK3nSUHQ9foFVWH8Ax4rgF1bIpX1xDfETzCAV2xOlgY9UnrjEKcSJoT+wgO+gA9frAA==',key_name='tempest-TransferEncryptedVolumeTest-1552823458',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='00f4c1f7964a4e5fbe3db5be46b9676e',ramdisk_id='',reservation_id='r-wdsp1zqo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-2049180676',owner_user_name='tempest-TransferEncryptedVolumeTest-2049180676-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:23Z,user_data=None,user_id='2cb2de7fb67042f89a025f1a3e872530',uuid=c2e8da51-3b05-4a1c-a872-9b977bf7cdcd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.571 256736 DEBUG nova.network.os_vif_util [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converting VIF {"id": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "address": "fa:16:3e:bd:70:fa", "network": {"id": "45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1075568732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "00f4c1f7964a4e5fbe3db5be46b9676e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap906ad477-03", "ovs_interfaceid": "906ad477-03aa-4cfd-9485-d0308f5ce2f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.572 256736 DEBUG nova.network.os_vif_util [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:70:fa,bridge_name='br-int',has_traffic_filtering=True,id=906ad477-03aa-4cfd-9485-d0308f5ce2f1,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906ad477-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.572 256736 DEBUG os_vif [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:70:fa,bridge_name='br-int',has_traffic_filtering=True,id=906ad477-03aa-4cfd-9485-d0308f5ce2f1,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906ad477-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.575 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.575 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap906ad477-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.577 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.579 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.581 256736 INFO os_vif [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:70:fa,bridge_name='br-int',has_traffic_filtering=True,id=906ad477-03aa-4cfd-9485-d0308f5ce2f1,network=Network(45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap906ad477-03')
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.700 256736 DEBUG nova.compute.manager [req-7ab71833-d7ae-471f-b176-5e3a726bd9d2 req-cf33fe3a-4046-428c-86bb-8cffb539ddd0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-vif-unplugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.700 256736 DEBUG oslo_concurrency.lockutils [req-7ab71833-d7ae-471f-b176-5e3a726bd9d2 req-cf33fe3a-4046-428c-86bb-8cffb539ddd0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.701 256736 DEBUG oslo_concurrency.lockutils [req-7ab71833-d7ae-471f-b176-5e3a726bd9d2 req-cf33fe3a-4046-428c-86bb-8cffb539ddd0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.701 256736 DEBUG oslo_concurrency.lockutils [req-7ab71833-d7ae-471f-b176-5e3a726bd9d2 req-cf33fe3a-4046-428c-86bb-8cffb539ddd0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.701 256736 DEBUG nova.compute.manager [req-7ab71833-d7ae-471f-b176-5e3a726bd9d2 req-cf33fe3a-4046-428c-86bb-8cffb539ddd0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] No waiting events found dispatching network-vif-unplugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.701 256736 DEBUG nova.compute.manager [req-7ab71833-d7ae-471f-b176-5e3a726bd9d2 req-cf33fe3a-4046-428c-86bb-8cffb539ddd0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-vif-unplugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.767 256736 DEBUG nova.network.neutron [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updated VIF entry in instance network info cache for port c112be9f-f94a-4fd7-bf2c-4f4614918d8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.768 256736 DEBUG nova.network.neutron [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating instance_info_cache with network_info: [{"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:46 compute-0 nova_compute[256729]: 2025-11-29 08:06:46.787 256736 DEBUG oslo_concurrency.lockutils [req-6d06d78c-ebef-4561-a4af-7e3d548d4f2b req-49b3e8a5-562b-46f0-b83a-9cc8384d6fbf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-f2cbf4cd-582b-408f-92b1-6b70364babcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:06:46 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[295711]: [NOTICE]   (295715) : haproxy version is 2.8.14-c23fe91
Nov 29 08:06:46 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[295711]: [NOTICE]   (295715) : path to executable is /usr/sbin/haproxy
Nov 29 08:06:46 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[295711]: [WARNING]  (295715) : Exiting Master process...
Nov 29 08:06:46 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[295711]: [ALERT]    (295715) : Current worker (295717) exited with code 143 (Terminated)
Nov 29 08:06:46 compute-0 neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c[295711]: [WARNING]  (295715) : All workers exited. Exiting... (0)
Nov 29 08:06:46 compute-0 systemd[1]: libpod-1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f.scope: Deactivated successfully.
Nov 29 08:06:46 compute-0 podman[297432]: 2025-11-29 08:06:46.843341388 +0000 UTC m=+0.311165258 container died 1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:06:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 54 KiB/s wr, 138 op/s
Nov 29 08:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f-userdata-shm.mount: Deactivated successfully.
Nov 29 08:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2285de84fca3028f035e1a007468ada3ed7b394938137b08fbfe7b7dbd1f363-merged.mount: Deactivated successfully.
Nov 29 08:06:47 compute-0 podman[297432]: 2025-11-29 08:06:47.296274418 +0000 UTC m=+0.764098288 container cleanup 1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:06:47 compute-0 sshd-session[297354]: Invalid user ftpuser from 143.14.121.41 port 49090
Nov 29 08:06:47 compute-0 podman[297540]: 2025-11-29 08:06:47.399118725 +0000 UTC m=+0.063624466 container remove 1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:06:47 compute-0 systemd[1]: libpod-conmon-1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f.scope: Deactivated successfully.
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.407 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dd725b2b-3145-4110-9e67-79e60ab318e8]: (4, ('Sat Nov 29 08:06:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f)\n1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f\nSat Nov 29 08:06:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c (1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f)\n1d5cb56d931f65a7a8203ed9f96d04a9b499c8a7a722358c46c9bc5b9215294f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.409 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403607.4088948, cde72883-eb73-406e-8301-a92fe1527a26 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.409 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] VM Started (Lifecycle Event)
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.410 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc900c5-060d-4609-947b-0d6408ef2ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.410 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45f1bbc0-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-0 kernel: tap45f1bbc0-c0: left promiscuous mode
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.412 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.430 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[131fc449-932d-4dec-94a2-743fee9bee1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.433 256736 INFO nova.virt.libvirt.driver [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Deleting instance files /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_del
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.437 256736 INFO nova.virt.libvirt.driver [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Deletion of /var/lib/nova/instances/c2e8da51-3b05-4a1c-a872-9b977bf7cdcd_del complete
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.444 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c4bba3-b5b3-43de-9e10-9a06f841946d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.445 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[03b3f9d4-d2b3-4407-b831-3057d7b97130]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d45f1bbc0\x2dc06e\x2d4a64\x2d9d82\x2d3a4cbaa9482c.mount: Deactivated successfully.
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.459 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[672b2b1f-e404-4831-ae93-1829ca3a94c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 589481, 'reachable_time': 24874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297556, 'error': None, 'target': 'ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.463 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45f1bbc0-c06e-4a64-9d82-3a4cbaa9482c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.463 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[94db355f-6fdd-431f-bcd7-64afb8dff526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.463 163655 INFO neutron.agent.ovn.metadata.agent [-] Port be389aee-c934-4833-bcc3-3624a4a8e32f in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.465 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.478 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1994d2c2-24e0-4419-82b0-cee735b2c4df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.507 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0d49b2-bf63-430d-bd3e-f7512ceb4a50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.510 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[a21c46a1-95ac-4c47-99ee-8b485f329b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.542 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[0a46d06a-6480-4b1c-9aa5-02c76bf07888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.556 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e75fbaf3-a7c7-4d7f-a588-d58ad12415bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586581, 'reachable_time': 18771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297563, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.568 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b031d0c6-3da4-49a9-a682-191380c03a4e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586593, 'tstamp': 586593}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297564, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586597, 'tstamp': 586597}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297564, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.569 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.570 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.572 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.572 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.573 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:06:47 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:47.573 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.572 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.580 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.584 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403607.4106638, cde72883-eb73-406e-8301-a92fe1527a26 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.585 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] VM Paused (Lifecycle Event)
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.614 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.619 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.630 256736 INFO nova.compute.manager [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Took 1.53 seconds to destroy the instance on the hypervisor.
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.631 256736 DEBUG oslo.service.loopingcall [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.632 256736 DEBUG nova.compute.manager [-] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.632 256736 DEBUG nova.network.neutron [-] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.650 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:06:47 compute-0 nova_compute[256729]: 2025-11-29 08:06:47.788 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:47 compute-0 sshd-session[297354]: Connection closed by invalid user ftpuser 143.14.121.41 port 49090 [preauth]
Nov 29 08:06:48 compute-0 ceph-mon[75050]: pgmap v2035: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 54 KiB/s wr, 138 op/s
Nov 29 08:06:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 46 KiB/s wr, 105 op/s
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.132 256736 DEBUG nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.134 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.135 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.135 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.136 256736 DEBUG nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] No waiting events found dispatching network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.137 256736 WARNING nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received unexpected event network-vif-plugged-906ad477-03aa-4cfd-9485-d0308f5ce2f1 for instance with vm_state active and task_state deleting.
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.138 256736 DEBUG nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.139 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.139 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.140 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.141 256736 DEBUG nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Processing event network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.142 256736 DEBUG nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.143 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.144 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.144 256736 DEBUG oslo_concurrency.lockutils [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.145 256736 DEBUG nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] No waiting events found dispatching network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.146 256736 WARNING nova.compute.manager [req-fb6e29ac-d957-4309-8979-6bbc4c5bf680 req-42e20dcc-6d5b-4d93-832d-ed5c5a3acfc9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received unexpected event network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f for instance with vm_state building and task_state spawning.
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.149 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
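
[Editor's note] The lines above show Nova's external-event plumbing: Neutron's network-vif-plugged notification is popped under a per-instance "<uuid>-events" lock, and because the waiter had already been satisfied, the second delivery is logged as "unexpected" while the original wait completes in 1 second. A minimal sketch of that lock-guarded pop pattern, using the real oslo_concurrency.lockutils API — the event store and helper names here are hypothetical, not Nova source:

    from oslo_concurrency import lockutils

    _events = {}  # hypothetical store: instance uuid -> {event name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized(instance_uuid + '-events')
        def _pop_event():
            # Returns None when no waiter is registered, which is what
            # produces the "No waiting events found" / "unexpected event"
            # pair in the log above.
            return _events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()
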
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.155 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403609.1549585, cde72883-eb73-406e-8301-a92fe1527a26 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.156 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] VM Resumed (Lifecycle Event)
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.159 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.163 256736 INFO nova.virt.libvirt.driver [-] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Instance spawned successfully.
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.163 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.186 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.192 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.193 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.194 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.195 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.196 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.197 256736 DEBUG nova.virt.libvirt.driver [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.205 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.247 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] During sync_power_state the instance has a pending task (spawning). Skip.
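
[Editor's note] In the "Synchronizing instance power state" line above, the numeric states follow nova.compute.power_state: the database still records 0 (NOSTATE) while the hypervisor already reports 1 (RUNNING), and the sync is skipped because the spawn task is still in flight. A quick decoder — the mapping is believed to match nova/compute/power_state.py, but treat it as an assumption if your Nova version differs:

    # Mapping assumed to match nova.compute.power_state constants.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    print(POWER_STATES[0], '->', POWER_STATES[1])  # NOSTATE -> RUNNING
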
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.276 256736 INFO nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Took 10.53 seconds to spawn the instance on the hypervisor.
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.277 256736 DEBUG nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.354 256736 INFO nova.compute.manager [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Took 12.99 seconds to build instance.
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.375 256736 DEBUG oslo_concurrency.lockutils [None req-9afe058c-004d-4c46-8cb8-fb8965556fd8 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.439 256736 DEBUG nova.network.neutron [-] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:06:49 compute-0 nova_compute[256729]: 2025-11-29 08:06:49.557 256736 INFO nova.compute.manager [-] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Took 1.92 seconds to deallocate network for instance.
Nov 29 08:06:49 compute-0 sshd-session[297565]: Invalid user free from 143.14.121.41 port 49106
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.030 256736 INFO nova.compute.manager [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Took 0.47 seconds to detach 1 volumes for instance.
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.078 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.079 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:50 compute-0 sshd-session[297565]: Connection closed by invalid user free 143.14.121.41 port 49106 [preauth]
Nov 29 08:06:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.190 256736 DEBUG oslo_concurrency.processutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:06:50 compute-0 ceph-mon[75050]: pgmap v2036: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 46 KiB/s wr, 105 op/s
Nov 29 08:06:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:06:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503577423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.669 256736 DEBUG oslo_concurrency.processutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
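
[Editor's note] The "Running cmd (subprocess)" / "CMD ... returned: 0" pair above is oslo.concurrency's trace around a shelled-out ceph call used to size the DISK_GB inventory. A sketch reproducing the same invocation with the real processutils API, command copied verbatim from the log:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    print(out)  # JSON pool/usage summary consumed by the resource tracker
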
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.676 256736 DEBUG nova.compute.provider_tree [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.714 256736 DEBUG nova.scheduler.client.report [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.771 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:50 compute-0 nova_compute[256729]: 2025-11-29 08:06:50.909 256736 INFO nova.scheduler.client.report [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Deleted allocations for instance c2e8da51-3b05-4a1c-a872-9b977bf7cdcd
Nov 29 08:06:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 31 KiB/s wr, 94 op/s
Nov 29 08:06:51 compute-0 nova_compute[256729]: 2025-11-29 08:06:51.256 256736 DEBUG oslo_concurrency.lockutils [None req-dc8b431b-0fb5-43ca-bd17-5ead8d776985 2cb2de7fb67042f89a025f1a3e872530 00f4c1f7964a4e5fbe3db5be46b9676e - - default default] Lock "c2e8da51-3b05-4a1c-a872-9b977bf7cdcd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:06:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1503577423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:06:51 compute-0 nova_compute[256729]: 2025-11-29 08:06:51.577 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:51 compute-0 nova_compute[256729]: 2025-11-29 08:06:51.588 256736 DEBUG nova.compute.manager [req-75642ec2-d604-4ab5-bfbf-f3d16fb3e8fe req-1db62ee3-41b0-4969-bf1d-bb538ee68e16 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Received event network-vif-deleted-906ad477-03aa-4cfd-9485-d0308f5ce2f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:52 compute-0 ceph-mon[75050]: pgmap v2037: 305 pgs: 305 active+clean; 398 MiB data, 634 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 31 KiB/s wr, 94 op/s
Nov 29 08:06:52 compute-0 ovn_controller[153383]: 2025-11-29T08:06:52Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:cb:1b 10.100.0.4
Nov 29 08:06:52 compute-0 ovn_controller[153383]: 2025-11-29T08:06:52Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:cb:1b 10.100.0.4
Nov 29 08:06:52 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:06:52 compute-0 nova_compute[256729]: 2025-11-29 08:06:52.789 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 414 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 170 op/s
Nov 29 08:06:54 compute-0 ceph-mon[75050]: pgmap v2038: 305 pgs: 305 active+clean; 414 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 170 op/s
Nov 29 08:06:54 compute-0 sshd-session[297589]: Invalid user daniel from 143.14.121.41 port 49112
Nov 29 08:06:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 426 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Nov 29 08:06:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:06:55 compute-0 sshd-session[297589]: Connection closed by invalid user daniel 143.14.121.41 port 49112 [preauth]
Nov 29 08:06:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:06:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/263101566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:06:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/263101566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:06:56 compute-0 ceph-mon[75050]: pgmap v2039: 305 pgs: 305 active+clean; 426 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Nov 29 08:06:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/263101566' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:06:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/263101566' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
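
[Editor's note] On the monitor side, each "handle_command mon_command(...)" / "dispatch" pair above corresponds to one client call against the leader mon. A sketch issuing the same "osd pool get-quota" query with the python-rados binding — client name and conf path are taken from the log; this is illustrative, not how the OpenStack services wire it internally:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota",
                    "pool": "volumes", "format": "json"}), b'')
    print(ret, out.decode())
    cluster.shutdown()
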
Nov 29 08:06:56 compute-0 nova_compute[256729]: 2025-11-29 08:06:56.580 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 303 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Nov 29 08:06:57 compute-0 nova_compute[256729]: 2025-11-29 08:06:57.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:06:57 compute-0 nova_compute[256729]: 2025-11-29 08:06:57.793 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:06:57 compute-0 sshd-session[297592]: Invalid user cyber from 143.14.121.41 port 60596
Nov 29 08:06:58 compute-0 sshd-session[297592]: Connection closed by invalid user cyber 143.14.121.41 port 60596 [preauth]
Nov 29 08:06:58 compute-0 ceph-mon[75050]: pgmap v2040: 305 pgs: 305 active+clean; 303 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Nov 29 08:06:58 compute-0 nova_compute[256729]: 2025-11-29 08:06:58.525 256736 DEBUG nova.compute.manager [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-changed-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:06:58 compute-0 nova_compute[256729]: 2025-11-29 08:06:58.525 256736 DEBUG nova.compute.manager [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Refreshing instance network info cache due to event network-changed-be389aee-c934-4833-bcc3-3624a4a8e32f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:06:58 compute-0 nova_compute[256729]: 2025-11-29 08:06:58.526 256736 DEBUG oslo_concurrency.lockutils [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:06:58 compute-0 nova_compute[256729]: 2025-11-29 08:06:58.526 256736 DEBUG oslo_concurrency.lockutils [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:06:58 compute-0 nova_compute[256729]: 2025-11-29 08:06:58.526 256736 DEBUG nova.network.neutron [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Refreshing network info cache for port be389aee-c934-4833-bcc3-3624a4a8e32f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:06:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 248 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Nov 29 08:06:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:59.784 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:06:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:59.785 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:06:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:06:59.785 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:00 compute-0 sshd-session[297594]: Invalid user www from 143.14.121.41 port 60604
Nov 29 08:07:00 compute-0 ceph-mon[75050]: pgmap v2041: 305 pgs: 305 active+clean; 248 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Nov 29 08:07:00 compute-0 nova_compute[256729]: 2025-11-29 08:07:00.539 256736 DEBUG nova.network.neutron [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updated VIF entry in instance network info cache for port be389aee-c934-4833-bcc3-3624a4a8e32f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:00 compute-0 nova_compute[256729]: 2025-11-29 08:07:00.540 256736 DEBUG nova.network.neutron [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updating instance_info_cache with network_info: [{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:00 compute-0 nova_compute[256729]: 2025-11-29 08:07:00.569 256736 DEBUG oslo_concurrency.lockutils [req-0d163cfc-8abb-429b-ae5f-7487cff269e5 req-99cc0358-edf6-4bf1-b15d-a65f5c7417c9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
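
[Editor's note] The instance_info_cache payload two lines up is plain JSON, so the fixed address, its floating IP, and the OVN binding details can be read straight out of it. A self-contained sketch — the literal below is trimmed to just the fields used; the full structure is in the log line above:

    import json

    nw_info_text = '''[{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.11",
        "floating_ips": [{"address": "192.168.122.194"}]}]}]}}]'''

    for vif in json.loads(nw_info_text):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed:', ip['address'])
                for fip in ip.get('floating_ips', []):
                    print('floating:', fip['address'])
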
Nov 29 08:07:00 compute-0 sshd-session[297594]: Connection closed by invalid user www 143.14.121.41 port 60604 [preauth]
Nov 29 08:07:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 248 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 29 08:07:01 compute-0 nova_compute[256729]: 2025-11-29 08:07:01.533 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403606.5302222, c2e8da51-3b05-4a1c-a872-9b977bf7cdcd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:01 compute-0 nova_compute[256729]: 2025-11-29 08:07:01.534 256736 INFO nova.compute.manager [-] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] VM Stopped (Lifecycle Event)
Nov 29 08:07:01 compute-0 nova_compute[256729]: 2025-11-29 08:07:01.582 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:01 compute-0 nova_compute[256729]: 2025-11-29 08:07:01.685 256736 DEBUG nova.compute.manager [None req-ac9a9c22-38d0-43ac-86f2-2731ad5da934 - - - - - -] [instance: c2e8da51-3b05-4a1c-a872-9b977bf7cdcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:01 compute-0 ovn_controller[153383]: 2025-11-29T08:07:01Z|00062|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.11
Nov 29 08:07:01 compute-0 ovn_controller[153383]: 2025-11-29T08:07:01Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:de:44:c8 10.100.0.11
Nov 29 08:07:02 compute-0 nova_compute[256729]: 2025-11-29 08:07:02.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:02 compute-0 nova_compute[256729]: 2025-11-29 08:07:02.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:02 compute-0 ceph-mon[75050]: pgmap v2042: 305 pgs: 305 active+clean; 248 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 161 op/s
Nov 29 08:07:02 compute-0 nova_compute[256729]: 2025-11-29 08:07:02.795 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 257 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Nov 29 08:07:03 compute-0 nova_compute[256729]: 2025-11-29 08:07:03.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:03 compute-0 nova_compute[256729]: 2025-11-29 08:07:03.729 256736 DEBUG oslo_concurrency.lockutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:03 compute-0 nova_compute[256729]: 2025-11-29 08:07:03.730 256736 DEBUG oslo_concurrency.lockutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:03 compute-0 nova_compute[256729]: 2025-11-29 08:07:03.770 256736 DEBUG nova.objects.instance [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:03 compute-0 nova_compute[256729]: 2025-11-29 08:07:03.968 256736 DEBUG oslo_concurrency.lockutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:04 compute-0 ceph-mon[75050]: pgmap v2043: 305 pgs: 305 active+clean; 257 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.676 256736 DEBUG oslo_concurrency.lockutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.676 256736 DEBUG oslo_concurrency.lockutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.677 256736 INFO nova.compute.manager [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attaching volume 1463fb6e-c566-47d5-a9ed-2ae1a1ef949a to /dev/vdb
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.851 256736 DEBUG os_brick.utils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.852 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.863 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.864 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[666cddcc-0859-4e90-933b-cefa242ad308]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.866 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.874 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.874 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[a2aef8b3-2510-4da9-a0a8-04aaf9064637]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.876 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.885 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.886 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[d99c4a8d-8bb6-4da2-b805-b1791d681c19]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.887 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[fcae85a0-640b-47de-9ae2-edb456dce2da]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.888 256736 DEBUG oslo_concurrency.processutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.921 256736 DEBUG oslo_concurrency.processutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.925 256736 DEBUG os_brick.initiator.connectors.lightos [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.925 256736 DEBUG os_brick.initiator.connectors.lightos [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.926 256736 DEBUG os_brick.initiator.connectors.lightos [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.927 256736 DEBUG os_brick.utils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
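
[Editor's note] The get_connector_properties trace above (arguments on the "==>" line, returned dict on the "<==" line) maps onto os-brick's public entry point. A sketch of the same call with that API, parameter names and values copied from the logged call; it needs root/rootwrap on a real compute host:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    print(props['initiator'])  # iSCSI IQN, as in the returned dict above
    print(props['nqn'])        # NVMe host NQN
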
Nov 29 08:07:04 compute-0 nova_compute[256729]: 2025-11-29 08:07:04.928 256736 DEBUG nova.virt.block_device [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating existing volume attachment record: 8abf0c26-68ed-48d2-a429-0180fbf224ad _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 263 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1007 KiB/s wr, 137 op/s
Nov 29 08:07:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1863149668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:07:05
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', '.rgw.root', 'vms', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Nov 29 08:07:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:07:06 compute-0 ovn_controller[153383]: 2025-11-29T08:07:06Z|00245|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:07:06 compute-0 ovn_controller[153383]: 2025-11-29T08:07:06Z|00246|binding|INFO|Releasing lport 9c3f688e-b00f-4b58-999f-eca278500698 from this chassis (sb_readonly=0)
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.165 256736 DEBUG nova.objects.instance [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.280 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:06 compute-0 ceph-mon[75050]: pgmap v2044: 305 pgs: 305 active+clean; 263 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1007 KiB/s wr, 137 op/s
Nov 29 08:07:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1863149668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:06 compute-0 ovn_controller[153383]: 2025-11-29T08:07:06Z|00064|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.11
Nov 29 08:07:06 compute-0 ovn_controller[153383]: 2025-11-29T08:07:06Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:de:44:c8 10.100.0.11
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.585 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.857 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.858 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.858 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.859 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 10a1a099-bf1a-4195-9186-8f440437a1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:06 compute-0 ovn_controller[153383]: 2025-11-29T08:07:06Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:44:c8 10.100.0.11
Nov 29 08:07:06 compute-0 ovn_controller[153383]: 2025-11-29T08:07:06Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:44:c8 10.100.0.11
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.964 256736 DEBUG nova.virt.libvirt.driver [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to attach volume 1463fb6e-c566-47d5-a9ed-2ae1a1ef949a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:07:06 compute-0 nova_compute[256729]: 2025-11-29 08:07:06.968 256736 DEBUG nova.virt.libvirt.guest [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:07:06 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:06 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-1463fb6e-c566-47d5-a9ed-2ae1a1ef949a">
Nov 29 08:07:06 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:06 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:06 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 08:07:06 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:07:06 compute-0 nova_compute[256729]:   </auth>
Nov 29 08:07:06 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:06 compute-0 nova_compute[256729]:   <serial>1463fb6e-c566-47d5-a9ed-2ae1a1ef949a</serial>
Nov 29 08:07:06 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:06 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
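
[Editor's note] The "attach device xml" block above is what Nova hands to libvirt for the RBD-backed vdb disk. A minimal sketch of the same attach done directly with libvirt-python — XML abridged from the log (the <auth>/<serial> elements are omitted; <auth> references a pre-defined libvirt secret), instance UUID taken from the surrounding lines, flags chosen to mirror a live attach that also persists:

    import libvirt

    disk_xml = '''<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-1463fb6e-c566-47d5-a9ed-2ae1a1ef949a">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>'''

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('f2cbf4cd-582b-408f-92b1-6b70364babcf')
    dom.attachDeviceFlags(disk_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
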
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:07:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 267 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 628 KiB/s wr, 89 op/s
Nov 29 08:07:07 compute-0 sshd-session[297596]: Invalid user user from 143.14.121.41 port 60616
Nov 29 08:07:07 compute-0 nova_compute[256729]: 2025-11-29 08:07:07.842 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:08 compute-0 sshd-session[297596]: Connection closed by invalid user user 143.14.121.41 port 60616 [preauth]
Nov 29 08:07:08 compute-0 nova_compute[256729]: 2025-11-29 08:07:08.133 256736 DEBUG nova.virt.libvirt.driver [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:08 compute-0 nova_compute[256729]: 2025-11-29 08:07:08.133 256736 DEBUG nova.virt.libvirt.driver [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:08 compute-0 nova_compute[256729]: 2025-11-29 08:07:08.134 256736 DEBUG nova.virt.libvirt.driver [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:08 compute-0 nova_compute[256729]: 2025-11-29 08:07:08.134 256736 DEBUG nova.virt.libvirt.driver [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No VIF found with MAC fa:16:3e:1e:cb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:08 compute-0 ceph-mon[75050]: pgmap v2045: 305 pgs: 305 active+clean; 267 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 628 KiB/s wr, 89 op/s
Nov 29 08:07:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1605106751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1605106751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:08 compute-0 podman[297626]: 2025-11-29 08:07:08.741531182 +0000 UTC m=+0.105497313 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:07:08 compute-0 podman[297627]: 2025-11-29 08:07:08.753232845 +0000 UTC m=+0.106311416 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:07:08 compute-0 podman[297625]: 2025-11-29 08:07:08.771928111 +0000 UTC m=+0.135895472 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:07:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 267 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 591 KiB/s wr, 59 op/s
Nov 29 08:07:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1605106751' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1605106751' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:09 compute-0 nova_compute[256729]: 2025-11-29 08:07:09.708 256736 DEBUG oslo_concurrency.lockutils [None req-90bfe676-4add-4187-9120-d6504b6f20dc 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
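
The Lock "..." acquired/"released" pairs that recur throughout this journal come from oslo.concurrency's lockutils, which nova uses to serialize work per instance UUID (here the attach held the lock for 5.032s). A minimal sketch of the same pattern; the lock names are taken from the log and the bodies are placeholders:

    from oslo_concurrency import lockutils

    # Decorator form: calls sharing a lock name run serialized, and oslo
    # logs the acquired/released lines seen in this journal.
    @lockutils.synchronized("f2cbf4cd-582b-408f-92b1-6b70364babcf")
    def do_attach_volume():
        ...  # critical section

    # Context-manager form with equivalent semantics.
    with lockutils.lock("compute_resources"):
        ...  # critical section
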
Nov 29 08:07:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:10 compute-0 ceph-mon[75050]: pgmap v2046: 305 pgs: 305 active+clean; 267 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 591 KiB/s wr, 59 op/s
Nov 29 08:07:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 267 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 591 KiB/s wr, 57 op/s
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.587 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.784 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating instance_info_cache with network_info: [{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
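
The info-cache refresh above stores the whole VIF model as JSON, including fixed and floating addresses. A short sketch for flattening such a network_info blob into address pairs, using a trimmed copy of the structure logged in the preceding line:

    # Trimmed from the network_info JSON in the preceding log line.
    vif_cache = [{
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.10",
            "floating_ips": [{"address": "192.168.122.249"}],
        }]}]},
    }]

    def addresses(network_info):
        # Yield (fixed, floating) pairs from a nova network_info structure.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    for fip in ip.get("floating_ips", []):
                        yield ip["address"], fip["address"]

    print(list(addresses(vif_cache)))  # [('10.100.0.10', '192.168.122.249')]
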
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.929 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.929 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.930 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.931 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.932 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.932 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.934 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.992 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.992 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.993 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.994 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:07:11 compute-0 nova_compute[256729]: 2025-11-29 08:07:11.994 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:12 compute-0 sshd-session[297687]: Invalid user user2 from 143.14.121.41 port 53758
Nov 29 08:07:12 compute-0 sshd-session[297687]: Connection closed by invalid user user2 143.14.121.41 port 53758 [preauth]
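
The two sshd-session lines record a failed pre-auth probe against a nonexistent account. A small sketch for tallying such probes per source address from a journal capture (the regex matches the "Invalid user" format above; the input path is illustrative):

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")

    def count_probes(lines):
        # Tally failed pre-auth probes by source IP.
        hits = Counter()
        for line in lines:
            m = INVALID.search(line)
            if m:
                hits[m.group(2)] += 1
        return hits

    with open("journal.txt") as f:  # illustrative capture of this log
        print(count_probes(f).most_common(5))
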
Nov 29 08:07:12 compute-0 nova_compute[256729]: 2025-11-29 08:07:12.844 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 267 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 600 KiB/s wr, 64 op/s
Nov 29 08:07:13 compute-0 ceph-mon[75050]: pgmap v2047: 305 pgs: 305 active+clean; 267 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 591 KiB/s wr, 57 op/s
Nov 29 08:07:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217288278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:13 compute-0 nova_compute[256729]: 2025-11-29 08:07:13.943 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.948s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
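
The resource audit shells out to ceph df --format=json (1.948s here) to learn RBD pool capacity. A standalone equivalent of that probe, assuming the report's usual top-level "pools" list with per-pool "stats":

    import json
    import subprocess

    def pool_max_avail(pool: str) -> int:
        # Same command nova logs above; --id/--conf pick the client identity.
        out = subprocess.run(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            capture_output=True, text=True, check=True,
        ).stdout
        for p in json.loads(out)["pools"]:
            if p["name"] == pool:
                return p["stats"]["max_avail"]  # bytes still available
        raise KeyError(pool)

    print(pool_max_avail("volumes"))
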
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.034 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.035 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.035 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.039 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.039 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.042 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.042 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.285 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.287 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3802MB free_disk=59.94240951538086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.287 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.288 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.364 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 10a1a099-bf1a-4195-9186-8f440437a1ce actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.364 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance f2cbf4cd-582b-408f-92b1-6b70364babcf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.364 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance cde72883-eb73-406e-8301-a92fe1527a26 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.364 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.364 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.381 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.398 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.399 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
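
The inventory pushed to placement determines schedulable capacity: for each resource class, placement treats (total - reserved) * allocation_ratio as the ceiling for allocations. Plugging in the values logged above:

    # Effective capacity per resource class, as placement computes it.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2

So with 3 of 32 effective VCPUs allocated, the "free_vcpus=5" figure in the hypervisor view above reflects the unratioed count (8 physical minus 3 used), not the placement ceiling.
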
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.415 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.435 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.489 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Nov 29 08:07:14 compute-0 ceph-mon[75050]: pgmap v2048: 305 pgs: 305 active+clean; 267 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 600 KiB/s wr, 64 op/s
Nov 29 08:07:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3217288278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 08:07:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Nov 29 08:07:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:14.925 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:14.926 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.927 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403825079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.962 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.969 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:14 compute-0 nova_compute[256729]: 2025-11-29 08:07:14.985 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.013 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.014 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 268 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 161 KiB/s wr, 18 op/s
Nov 29 08:07:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.167417) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403635167471, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1912, "num_deletes": 274, "total_data_size": 2770933, "memory_usage": 2825072, "flush_reason": "Manual Compaction"}
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403635183689, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2715553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34468, "largest_seqno": 36379, "table_properties": {"data_size": 2706685, "index_size": 5490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19149, "raw_average_key_size": 20, "raw_value_size": 2688628, "raw_average_value_size": 2935, "num_data_blocks": 241, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403482, "oldest_key_time": 1764403482, "file_creation_time": 1764403635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 16319 microseconds, and 5986 cpu microseconds.
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.183739) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2715553 bytes OK
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.183760) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.185609) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.185638) EVENT_LOG_v1 {"time_micros": 1764403635185628, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.185666) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2762531, prev total WAL file size 2762531, number of live WAL files 2.
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.187434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303037' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2651KB)], [71(9788KB)]
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403635187501, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 12738821, "oldest_snapshot_seqno": -1}
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.232 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6937 keys, 12590255 bytes, temperature: kUnknown
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403635325772, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 12590255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12536385, "index_size": 35454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 175483, "raw_average_key_size": 25, "raw_value_size": 12404184, "raw_average_value_size": 1788, "num_data_blocks": 1431, "num_entries": 6937, "num_filter_entries": 6937, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.326261) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 12590255 bytes
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.327666) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.0 rd, 91.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.6 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(9.3) write-amplify(4.6) OK, records in: 7492, records dropped: 555 output_compression: NoCompression
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.327696) EVENT_LOG_v1 {"time_micros": 1764403635327683, "job": 40, "event": "compaction_finished", "compaction_time_micros": 138397, "compaction_time_cpu_micros": 57083, "output_level": 6, "num_output_files": 1, "total_output_size": 12590255, "num_input_records": 7492, "num_output_records": 6937, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403635328741, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403635332124, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.187295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.332224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.332230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.332232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.332233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:15 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:15.332235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007648690336654294 of space, bias 1.0, pg target 0.22946071009962882 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0012245662605000977 of space, bias 1.0, pg target 0.3673698781500293 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:07:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
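
The autoscaler figures above are consistent with the standard formula: a pool's raw PG target is its share of cluster capacity times its bias times the PG budget (here 3 OSDs times the default mon_target_pg_per_osd of 100), then quantized to a power of two subject to pool minimums, which is why these nearly empty pools all stay at their current 1, 16, or 32. Reproducing two of the logged values:

    # pg_target = capacity_ratio * bias * (num_osds * target_pg_per_osd),
    # then quantized to a power of two (subject to pool minimums).
    num_osds, target_per_osd = 3, 100

    for pool, ratio, bias in [
        ("volumes",            0.0012245662605000977, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        raw = ratio * bias * num_osds * target_per_osd
        print(pool, raw)  # 0.36736..., 0.00061047... -- matches the log
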
Nov 29 08:07:15 compute-0 ceph-mon[75050]: osdmap e405: 3 total, 3 up, 3 in
Nov 29 08:07:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1403825079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:15 compute-0 ovn_controller[153383]: 2025-11-29T08:07:15Z|00247|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:07:15 compute-0 ovn_controller[153383]: 2025-11-29T08:07:15Z|00248|binding|INFO|Releasing lport 9c3f688e-b00f-4b58-999f-eca278500698 from this chassis (sb_readonly=0)
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.901 256736 DEBUG nova.compute.manager [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-changed-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.901 256736 DEBUG nova.compute.manager [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Refreshing instance network info cache due to event network-changed-be389aee-c934-4833-bcc3-3624a4a8e32f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.902 256736 DEBUG oslo_concurrency.lockutils [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.902 256736 DEBUG oslo_concurrency.lockutils [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.903 256736 DEBUG nova.network.neutron [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Refreshing network info cache for port be389aee-c934-4833-bcc3-3624a4a8e32f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.953 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.983 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.984 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.984 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.984 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.985 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.986 256736 INFO nova.compute.manager [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Terminating instance
Nov 29 08:07:15 compute-0 nova_compute[256729]: 2025-11-29 08:07:15.987 256736 DEBUG nova.compute.manager [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:07:16 compute-0 kernel: tapbe389aee-c9 (unregistering): left promiscuous mode
Nov 29 08:07:16 compute-0 NetworkManager[48962]: <info>  [1764403636.0406] device (tapbe389aee-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:07:16 compute-0 ovn_controller[153383]: 2025-11-29T08:07:16Z|00249|binding|INFO|Releasing lport be389aee-c934-4833-bcc3-3624a4a8e32f from this chassis (sb_readonly=0)
Nov 29 08:07:16 compute-0 ovn_controller[153383]: 2025-11-29T08:07:16Z|00250|binding|INFO|Setting lport be389aee-c934-4833-bcc3-3624a4a8e32f down in Southbound
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.051 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 ovn_controller[153383]: 2025-11-29T08:07:16Z|00251|binding|INFO|Removing iface tapbe389aee-c9 ovn-installed in OVS
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.053 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.066 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:44:c8 10.100.0.11'], port_security=['fa:16:3e:de:44:c8 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cde72883-eb73-406e-8301-a92fe1527a26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '284fde66-e9d8-4738-b856-2e805436581e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=be389aee-c934-4833-bcc3-3624a4a8e32f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.068 163655 INFO neutron.agent.ovn.metadata.agent [-] Port be389aee-c934-4833-bcc3-3624a4a8e32f in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
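
The "unbound from our chassis" decision comes from the ovsdbapp row event matched two lines earlier: a Port_Binding UPDATE whose old row carried this chassis and whose new row does not (up went [True] to [False], chassis emptied). A rough sketch of that event shape, following ovsdbapp's RowEvent hook API as neutron uses it (class and method names here are indicative, not neutron's exact implementation):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        """Fire when a Port_Binding row loses its chassis."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # Old row had a chassis, the updated row does not.
            return bool(getattr(old, "chassis", None)) and not row.chassis

        def run(self, event, row, old):
            print("port %s unbound" % row.logical_port)
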
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.073 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d9c390c-362a-41a5-93b0-23344eb99ae5
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.075 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.095 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f15255b8-04a0-407a-8ece-4ba6977784e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:16 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 29 08:07:16 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 14.049s CPU time.
Nov 29 08:07:16 compute-0 systemd-machined[217781]: Machine qemu-26-instance-0000001a terminated.
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.127 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5d3721-64ed-4465-828b-4cfa007a89b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.130 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[30973ef8-61c0-475d-8dd4-6032f197a601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.166 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[7419124a-3519-4c66-913b-f8e853d571f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.190 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a7d514-c877-47a2-ab6f-a6571171289b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d9c390c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:24:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586581, 'reachable_time': 18771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297748, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.214 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4352ec37-2636-4bad-b7b9-5b8ef3975cf8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586593, 'tstamp': 586593}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297750, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2d9c390c-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586597, 'tstamp': 586597}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297750, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
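
The two privsep replies above show the agent confirming, inside the ovnmeta-<network_id> namespace, that the metadata interface carries both 169.254.169.254/32 and an address in the tenant subnet (10.100.0.2/28). A rough out-of-band equivalent with pyroute2, reusing the namespace and interface names from the log:

    from pyroute2 import NetNS

    # List addresses on the metadata tap inside the ovnmeta- namespace,
    # mirroring what the privsep helper returned above.
    with NetNS("ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5") as ns:
        idx = ns.link_lookup(ifname="tap2d9c390c-31")[0]
        for msg in ns.get_addr(index=idx):
            print(msg.get_attr("IFA_ADDRESS"), msg["prefixlen"])
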
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.215 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.218 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.227 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.228 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d9c390c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.229 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.229 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d9c390c-30, col_values=(('external_ids', {'iface-id': '30965993-2787-409a-9e74-8cf68d39c3b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.229 256736 INFO nova.virt.libvirt.driver [-] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Instance destroyed successfully.
Nov 29 08:07:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:16.230 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.230 256736 DEBUG nova.objects.instance [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'resources' on Instance uuid cde72883-eb73-406e-8301-a92fe1527a26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.246 256736 DEBUG nova.virt.libvirt.vif [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-632518534',display_name='tempest-TestVolumeBootPattern-server-632518534',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-632518534',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-xexrwmxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:49Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=cde72883-eb73-406e-8301-a92fe1527a26,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.247 256736 DEBUG nova.network.os_vif_util [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.248 256736 DEBUG nova.network.os_vif_util [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:44:c8,bridge_name='br-int',has_traffic_filtering=True,id=be389aee-c934-4833-bcc3-3624a4a8e32f,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe389aee-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.249 256736 DEBUG os_vif [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:44:c8,bridge_name='br-int',has_traffic_filtering=True,id=be389aee-c934-4833-bcc3-3624a4a8e32f,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe389aee-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.250 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.251 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe389aee-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.252 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.254 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.256 256736 INFO os_vif [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:44:c8,bridge_name='br-int',has_traffic_filtering=True,id=be389aee-c934-4833-bcc3-3624a4a8e32f,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe389aee-c9')
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.342 256736 DEBUG nova.compute.manager [req-1d733c4a-110f-494d-ba06-50f8db5abb39 req-97bf9cd9-13f6-49c1-9cda-777788182d66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-vif-unplugged-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.343 256736 DEBUG oslo_concurrency.lockutils [req-1d733c4a-110f-494d-ba06-50f8db5abb39 req-97bf9cd9-13f6-49c1-9cda-777788182d66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.343 256736 DEBUG oslo_concurrency.lockutils [req-1d733c4a-110f-494d-ba06-50f8db5abb39 req-97bf9cd9-13f6-49c1-9cda-777788182d66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.343 256736 DEBUG oslo_concurrency.lockutils [req-1d733c4a-110f-494d-ba06-50f8db5abb39 req-97bf9cd9-13f6-49c1-9cda-777788182d66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.344 256736 DEBUG nova.compute.manager [req-1d733c4a-110f-494d-ba06-50f8db5abb39 req-97bf9cd9-13f6-49c1-9cda-777788182d66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] No waiting events found dispatching network-vif-unplugged-be389aee-c934-4833-bcc3-3624a4a8e32f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.344 256736 DEBUG nova.compute.manager [req-1d733c4a-110f-494d-ba06-50f8db5abb39 req-97bf9cd9-13f6-49c1-9cda-777788182d66 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-vif-unplugged-be389aee-c934-4833-bcc3-3624a4a8e32f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.488 256736 INFO nova.virt.libvirt.driver [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Deleting instance files /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26_del
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.489 256736 INFO nova.virt.libvirt.driver [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Deletion of /var/lib/nova/instances/cde72883-eb73-406e-8301-a92fe1527a26_del complete
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.556 256736 INFO nova.compute.manager [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Took 0.57 seconds to destroy the instance on the hypervisor.
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.557 256736 DEBUG oslo.service.loopingcall [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.558 256736 DEBUG nova.compute.manager [-] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:07:16 compute-0 nova_compute[256729]: 2025-11-29 08:07:16.558 256736 DEBUG nova.network.neutron [-] [instance: cde72883-eb73-406e-8301-a92fe1527a26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:07:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Nov 29 08:07:16 compute-0 ceph-mon[75050]: pgmap v2050: 305 pgs: 305 active+clean; 268 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 116 KiB/s rd, 161 KiB/s wr, 18 op/s
Nov 29 08:07:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 08:07:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Nov 29 08:07:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 269 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 132 KiB/s wr, 43 op/s
Nov 29 08:07:17 compute-0 sshd-session[297700]: Invalid user ubuntu from 143.14.121.41 port 59968
Nov 29 08:07:17 compute-0 nova_compute[256729]: 2025-11-29 08:07:17.679 256736 DEBUG nova.network.neutron [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updated VIF entry in instance network info cache for port be389aee-c934-4833-bcc3-3624a4a8e32f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:17 compute-0 nova_compute[256729]: 2025-11-29 08:07:17.679 256736 DEBUG nova.network.neutron [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updating instance_info_cache with network_info: [{"id": "be389aee-c934-4833-bcc3-3624a4a8e32f", "address": "fa:16:3e:de:44:c8", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe389aee-c9", "ovs_interfaceid": "be389aee-c934-4833-bcc3-3624a4a8e32f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:17 compute-0 nova_compute[256729]: 2025-11-29 08:07:17.715 256736 DEBUG oslo_concurrency.lockutils [req-66502515-edb4-4586-a0e0-16a02af5e51c req-04c1844b-a9c0-4ccc-8311-a630b6b64cdf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-cde72883-eb73-406e-8301-a92fe1527a26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:17 compute-0 ceph-mon[75050]: osdmap e406: 3 total, 3 up, 3 in
Nov 29 08:07:17 compute-0 sshd-session[297700]: Connection closed by invalid user ubuntu 143.14.121.41 port 59968 [preauth]
Nov 29 08:07:17 compute-0 nova_compute[256729]: 2025-11-29 08:07:17.847 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:17 compute-0 nova_compute[256729]: 2025-11-29 08:07:17.850 256736 DEBUG nova.network.neutron [-] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:17 compute-0 nova_compute[256729]: 2025-11-29 08:07:17.870 256736 INFO nova.compute.manager [-] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Took 1.31 seconds to deallocate network for instance.
Nov 29 08:07:17 compute-0 nova_compute[256729]: 2025-11-29 08:07:17.990 256736 DEBUG nova.compute.manager [req-9aa90609-5f1d-4431-aee8-969a129ce0e7 req-06d88fd8-768e-40fc-8fc4-41a27a0d3993 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-vif-deleted-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.078 256736 INFO nova.compute.manager [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Took 0.21 seconds to detach 1 volumes for instance.
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.127 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.128 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.223 256736 DEBUG oslo_concurrency.processutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.442 256736 DEBUG nova.compute.manager [req-5bbb0e72-ce9a-42e6-92e0-bc870e10e8ac req-b11f7c2d-ba88-4df8-9443-f1f7e470cdf9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received event network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.443 256736 DEBUG oslo_concurrency.lockutils [req-5bbb0e72-ce9a-42e6-92e0-bc870e10e8ac req-b11f7c2d-ba88-4df8-9443-f1f7e470cdf9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "cde72883-eb73-406e-8301-a92fe1527a26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.444 256736 DEBUG oslo_concurrency.lockutils [req-5bbb0e72-ce9a-42e6-92e0-bc870e10e8ac req-b11f7c2d-ba88-4df8-9443-f1f7e470cdf9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.444 256736 DEBUG oslo_concurrency.lockutils [req-5bbb0e72-ce9a-42e6-92e0-bc870e10e8ac req-b11f7c2d-ba88-4df8-9443-f1f7e470cdf9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.444 256736 DEBUG nova.compute.manager [req-5bbb0e72-ce9a-42e6-92e0-bc870e10e8ac req-b11f7c2d-ba88-4df8-9443-f1f7e470cdf9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] No waiting events found dispatching network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.445 256736 WARNING nova.compute.manager [req-5bbb0e72-ce9a-42e6-92e0-bc870e10e8ac req-b11f7c2d-ba88-4df8-9443-f1f7e470cdf9 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Received unexpected event network-vif-plugged-be389aee-c934-4833-bcc3-3624a4a8e32f for instance with vm_state deleted and task_state None.
Nov 29 08:07:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292660749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.704 256736 DEBUG oslo_concurrency.processutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.714 256736 DEBUG nova.compute.provider_tree [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Nov 29 08:07:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 08:07:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Nov 29 08:07:18 compute-0 ceph-mon[75050]: pgmap v2052: 305 pgs: 305 active+clean; 269 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 132 KiB/s wr, 43 op/s
Nov 29 08:07:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2292660749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.740 256736 DEBUG nova.scheduler.client.report [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.776 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.806 256736 INFO nova.scheduler.client.report [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Deleted allocations for instance cde72883-eb73-406e-8301-a92fe1527a26
Nov 29 08:07:18 compute-0 nova_compute[256729]: 2025-11-29 08:07:18.886 256736 DEBUG oslo_concurrency.lockutils [None req-c7a5b2fd-f070-41b9-9774-b514d857d5f9 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "cde72883-eb73-406e-8301-a92fe1527a26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 269 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 200 KiB/s wr, 99 op/s
Nov 29 08:07:19 compute-0 ovn_controller[153383]: 2025-11-29T08:07:19Z|00252|binding|INFO|Releasing lport 30965993-2787-409a-9e74-8cf68d39c3b3 from this chassis (sb_readonly=0)
Nov 29 08:07:19 compute-0 ovn_controller[153383]: 2025-11-29T08:07:19Z|00253|binding|INFO|Releasing lport 9c3f688e-b00f-4b58-999f-eca278500698 from this chassis (sb_readonly=0)
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.704 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:19 compute-0 ceph-mon[75050]: osdmap e407: 3 total, 3 up, 3 in
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.805 256736 DEBUG oslo_concurrency.lockutils [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.805 256736 DEBUG oslo_concurrency.lockutils [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.825 256736 INFO nova.compute.manager [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Detaching volume 1463fb6e-c566-47d5-a9ed-2ae1a1ef949a
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.962 256736 INFO nova.virt.block_device [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to driver detach volume 1463fb6e-c566-47d5-a9ed-2ae1a1ef949a from mountpoint /dev/vdb
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.974 256736 DEBUG nova.virt.libvirt.driver [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Attempting to detach device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.975 256736 DEBUG nova.virt.libvirt.guest [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-1463fb6e-c566-47d5-a9ed-2ae1a1ef949a">
Nov 29 08:07:19 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <serial>1463fb6e-c566-47d5-a9ed-2ae1a1ef949a</serial>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:19 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.987 256736 INFO nova.virt.libvirt.driver [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config.
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.987 256736 DEBUG nova.virt.libvirt.driver [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:07:19 compute-0 nova_compute[256729]: 2025-11-29 08:07:19.988 256736 DEBUG nova.virt.libvirt.guest [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-1463fb6e-c566-47d5-a9ed-2ae1a1ef949a">
Nov 29 08:07:19 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <serial>1463fb6e-c566-47d5-a9ed-2ae1a1ef949a</serial>
Nov 29 08:07:19 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:19 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:19 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/473941050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/473941050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:20 compute-0 nova_compute[256729]: 2025-11-29 08:07:20.121 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403640.1208646, f2cbf4cd-582b-408f-92b1-6b70364babcf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:07:20 compute-0 nova_compute[256729]: 2025-11-29 08:07:20.122 256736 DEBUG nova.virt.libvirt.driver [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f2cbf4cd-582b-408f-92b1-6b70364babcf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:07:20 compute-0 nova_compute[256729]: 2025-11-29 08:07:20.125 256736 INFO nova.virt.libvirt.driver [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config.
Nov 29 08:07:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:20 compute-0 nova_compute[256729]: 2025-11-29 08:07:20.287 256736 DEBUG nova.objects.instance [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:20 compute-0 nova_compute[256729]: 2025-11-29 08:07:20.328 256736 DEBUG oslo_concurrency.lockutils [None req-befd32b6-bb45-4776-bfb9-7bee71921b98 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Nov 29 08:07:20 compute-0 ceph-mon[75050]: pgmap v2054: 305 pgs: 305 active+clean; 269 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 200 KiB/s wr, 99 op/s
Nov 29 08:07:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/473941050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/473941050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 08:07:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Nov 29 08:07:20 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:20.929 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 269 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 106 KiB/s wr, 95 op/s
Nov 29 08:07:21 compute-0 nova_compute[256729]: 2025-11-29 08:07:21.253 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:21 compute-0 ceph-mon[75050]: osdmap e408: 3 total, 3 up, 3 in
Nov 29 08:07:22 compute-0 nova_compute[256729]: 2025-11-29 08:07:22.869 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:22 compute-0 ceph-mon[75050]: pgmap v2056: 305 pgs: 305 active+clean; 269 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 106 KiB/s wr, 95 op/s
Nov 29 08:07:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 256 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 93 KiB/s wr, 127 op/s
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.420 256736 DEBUG oslo_concurrency.lockutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.421 256736 DEBUG oslo_concurrency.lockutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.443 256736 DEBUG nova.objects.instance [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.500 256736 DEBUG oslo_concurrency.lockutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:23 compute-0 sshd-session[297781]: Connection closed by authenticating user root 143.14.121.41 port 59972 [preauth]
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.801 256736 DEBUG oslo_concurrency.lockutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.802 256736 DEBUG oslo_concurrency.lockutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.802 256736 INFO nova.compute.manager [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attaching volume c3f874aa-26a4-44f4-a911-d2e04fcb701a to /dev/vdb
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.835 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.837 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.837 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.838 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.839 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.841 256736 INFO nova.compute.manager [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Terminating instance
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.844 256736 DEBUG nova.compute.manager [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.908 256736 DEBUG nova.compute.manager [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-changed-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.908 256736 DEBUG nova.compute.manager [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Refreshing instance network info cache due to event network-changed-55b6aa9b-29fc-4f6b-9ae5-885c514941fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.909 256736 DEBUG oslo_concurrency.lockutils [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.909 256736 DEBUG oslo_concurrency.lockutils [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:07:23 compute-0 nova_compute[256729]: 2025-11-29 08:07:23.909 256736 DEBUG nova.network.neutron [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Refreshing network info cache for port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:07:23 compute-0 ceph-mon[75050]: pgmap v2057: 305 pgs: 305 active+clean; 256 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 93 KiB/s wr, 127 op/s
Nov 29 08:07:23 compute-0 kernel: tap55b6aa9b-29 (unregistering): left promiscuous mode
Nov 29 08:07:23 compute-0 NetworkManager[48962]: <info>  [1764403643.9839] device (tap55b6aa9b-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:07:24 compute-0 ovn_controller[153383]: 2025-11-29T08:07:24Z|00254|binding|INFO|Releasing lport 55b6aa9b-29fc-4f6b-9ae5-885c514941fa from this chassis (sb_readonly=0)
Nov 29 08:07:24 compute-0 ovn_controller[153383]: 2025-11-29T08:07:24Z|00255|binding|INFO|Setting lport 55b6aa9b-29fc-4f6b-9ae5-885c514941fa down in Southbound
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.041 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:24 compute-0 ovn_controller[153383]: 2025-11-29T08:07:24Z|00256|binding|INFO|Removing iface tap55b6aa9b-29 ovn-installed in OVS
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.044 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.056 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.058 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:3a:8c 10.100.0.10'], port_security=['fa:16:3e:c8:3a:8c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '10a1a099-bf1a-4195-9186-8f440437a1ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb6854e99614af5b8df420841fde0db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '284fde66-e9d8-4738-b856-2e805436581e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e6bb40-3758-40fe-8944-476e9d8b3205, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=55b6aa9b-29fc-4f6b-9ae5-885c514941fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.060 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa in datapath 2d9c390c-362a-41a5-93b0-23344eb99ae5 unbound from our chassis
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.061 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d9c390c-362a-41a5-93b0-23344eb99ae5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.062 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[badbde51-0039-46fc-84f4-1a5eda8c14c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.062 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 namespace which is not needed anymore
Nov 29 08:07:24 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 29 08:07:24 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 17.744s CPU time.
Nov 29 08:07:24 compute-0 systemd-machined[217781]: Machine qemu-23-instance-00000017 terminated.
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.297 256736 INFO nova.virt.libvirt.driver [-] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Instance destroyed successfully.
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.298 256736 DEBUG nova.objects.instance [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lazy-loading 'resources' on Instance uuid 10a1a099-bf1a-4195-9186-8f440437a1ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.319 256736 DEBUG nova.virt.libvirt.vif [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-870934529',display_name='tempest-TestVolumeBootPattern-server-870934529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-870934529',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFRhGhCDf+2DWWqDuvRpS/JaOK+f/CbMMIs9mX1kyTRqTPCFubI8ju/4twf4g9TbzLiRX/BzWwQ/uPnV3ZkV8vI7PffevvM5uIZzGBjdTxd3Z49lVgwpoVKRmE3GzO1NBg==',key_name='tempest-TestVolumeBootPattern-556618908',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb6854e99614af5b8df420841fde0db',ramdisk_id='',reservation_id='r-emhf3y7m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-776329285',owner_user_name='tempest-TestVolumeBootPattern-776329285-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:51Z,user_data=None,user_id='9664e420085d412aae898a6ec021b24f',uuid=10a1a099-bf1a-4195-9186-8f440437a1ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.320 256736 DEBUG nova.network.os_vif_util [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converting VIF {"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.321 256736 DEBUG nova.network.os_vif_util [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=55b6aa9b-29fc-4f6b-9ae5-885c514941fa,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55b6aa9b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.321 256736 DEBUG os_vif [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=55b6aa9b-29fc-4f6b-9ae5-885c514941fa,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55b6aa9b-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.323 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.324 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55b6aa9b-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.327 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.330 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.332 256736 DEBUG os_brick.utils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.334 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.336 256736 INFO os_vif [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:3a:8c,bridge_name='br-int',has_traffic_filtering=True,id=55b6aa9b-29fc-4f6b-9ae5-885c514941fa,network=Network(2d9c390c-362a-41a5-93b0-23344eb99ae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55b6aa9b-29')
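[annotation] The sequence above is the standard os-vif unplug path: nova converts its VIF dict to an os-vif VIFOpenVSwitch object, hands it to the 'ovs' plugin, and the plugin removes the tap port from br-int via an ovsdbapp DelPortCommand transaction (08:07:24.324). A minimal standalone sketch of the same port removal with ovsdbapp; the OVSDB socket path is an assumption, the port and bridge names are taken from the log:

    # Remove a tap port from br-int, mirroring the DelPortCommand logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),  # assumed socket
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)
    # if_exists=True makes the delete idempotent, matching the logged command.
    api.del_port('tap55b6aa9b-29', bridge='br-int', if_exists=True).execute(
        check_error=True)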
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.353 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.354 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[714f30e0-ac46-4f1f-ab5e-e44f050a9920]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
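[annotation] Each "Running cmd (subprocess)" / "CMD ... returned" pair above is emitted by oslo.concurrency's execute() helper, which logs the command before spawning it and the exit code plus elapsed time afterwards; here it runs inside the privsep daemon (pid 266745). A minimal sketch (multipathd must be installed, and this particular command needs root):

    from oslo_concurrency import processutils

    # Emits the two DEBUG lines seen above and returns (stdout, stderr);
    # raises ProcessExecutionError on a non-zero exit by default.
    stdout, stderr = processutils.execute('multipathd', 'show', 'status')
    print(stdout)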
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.366 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.376 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.377 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9b1e9b-b268-4f50-91e7-0123b5ba5a82]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.379 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.389 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.389 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[b112bc0f-0e09-447d-a22d-c8eb000ac545]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.391 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[dec27f33-9422-4483-afa3-338e89916f0c]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.392 256736 DEBUG oslo_concurrency.processutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.435 256736 DEBUG oslo_concurrency.processutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.439 256736 DEBUG os_brick.initiator.connectors.lightos [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.439 256736 DEBUG os_brick.initiator.connectors.lightos [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.440 256736 DEBUG os_brick.initiator.connectors.lightos [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.441 256736 DEBUG os_brick.utils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] <== get_connector_properties: return (107ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
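[annotation] The ==> call / <== return pair brackets os-brick's connector-property probe: it runs multipathd, reads the iSCSI initiator name, resolves the root filesystem source, and queries nvme, then returns a single dict describing this host's storage connectivity (107ms here). A sketch of the same call, with arguments copied from the trace at 08:07:24.332:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # Matches the returned dict above: initiator IQN, NVMe host NQN, etc.
    print(props['initiator'], props['nqn'])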
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.442 256736 DEBUG nova.virt.block_device [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating existing volume attachment record: 5670cc9b-70c8-42d4-90f8-c4b131a42622 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:24 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[295194]: [NOTICE]   (295198) : haproxy version is 2.8.14-c23fe91
Nov 29 08:07:24 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[295194]: [NOTICE]   (295198) : path to executable is /usr/sbin/haproxy
Nov 29 08:07:24 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[295194]: [WARNING]  (295198) : Exiting Master process...
Nov 29 08:07:24 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[295194]: [ALERT]    (295198) : Current worker (295200) exited with code 143 (Terminated)
Nov 29 08:07:24 compute-0 neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5[295194]: [WARNING]  (295198) : All workers exited. Exiting... (0)
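[annotation] Exit code 143 in the ALERT above is not an haproxy-specific error: it is the conventional 128 + signal-number encoding, and SIGTERM is signal 15, so 128 + 15 = 143 simply confirms the worker was terminated by the container being stopped rather than crashing:

    import signal

    # Shells and supervisors report "killed by signal N" as status 128 + N.
    assert 128 + signal.SIGTERM == 143  # SIGTERM == 15 on Linux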
Nov 29 08:07:24 compute-0 systemd[1]: libpod-13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509.scope: Deactivated successfully.
Nov 29 08:07:24 compute-0 podman[297831]: 2025-11-29 08:07:24.8101073 +0000 UTC m=+0.635885240 container died 13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509-userdata-shm.mount: Deactivated successfully.
Nov 29 08:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-27dff588f7cb94da208f5f14329a78d99111fd289969d68ff77d484b0d9dc891-merged.mount: Deactivated successfully.
Nov 29 08:07:24 compute-0 podman[297831]: 2025-11-29 08:07:24.886614112 +0000 UTC m=+0.712392022 container cleanup 13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:07:24 compute-0 systemd[1]: libpod-conmon-13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509.scope: Deactivated successfully.
Nov 29 08:07:24 compute-0 podman[297898]: 2025-11-29 08:07:24.970040194 +0000 UTC m=+0.052410058 container remove 13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.977 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c90e0b49-2b53-47a4-8189-4a4be58c2f29]: (4, ('Sat Nov 29 08:07:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509)\n13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509\nSat Nov 29 08:07:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 (13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509)\n13469ee4330e9633f1a5170ef43812c7f3a6eb0b62c6707dca8b40f3e9635509\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
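[annotation] The privsep reply above captures the stdout of a stop-and-delete helper for the ovnmeta haproxy side-car container. A hypothetical reconstruction of what such a helper does, based only on its captured output (the container name is from the log; the exact script Neutron invokes is not shown here):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5'
    # Each command prints the container ID, which is why the captured
    # output above contains the ID twice.
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)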
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.979 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[81af9816-b966-4ffc-b62c-6bb68b696451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:24 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:24.981 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d9c390c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:07:24 compute-0 nova_compute[256729]: 2025-11-29 08:07:24.983 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:24 compute-0 kernel: tap2d9c390c-30: left promiscuous mode
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.003 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:25.006 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b366acfe-e6d4-43f9-9845-b35c78b5813d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:25.021 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dfae1117-1552-498e-96e1-94fbe048b874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:25.023 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[038c63e3-7620-4ad4-b96f-5d5c3623ae76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.026 256736 INFO nova.virt.libvirt.driver [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Deleting instance files /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce_del
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.027 256736 INFO nova.virt.libvirt.driver [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Deletion of /var/lib/nova/instances/10a1a099-bf1a-4195-9186-8f440437a1ce_del complete
Nov 29 08:07:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:25.041 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8f221b-6304-4da2-884b-96fcd75c7734]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586572, 'reachable_time': 41384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297914, 'error': None, 'target': 'ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
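[annotation] The large structure in the reply above is a pyroute2 RTM_NEWLINK message for 'lo', dumped inside the ovnmeta namespace (note the 'target' field in its header) just before the namespace is torn down. A minimal sketch of producing such a dump with pyroute2; root is required, and pyroute2's NetNS class would target a named namespace the same way:

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for link in ipr.get_links():
            # Each message carries 'IFLA_*' attributes like those logged above.
            print(link.get_attr('IFLA_IFNAME'), link['state'])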
Nov 29 08:07:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d9c390c\x2d362a\x2d41a5\x2d93b0\x2d23344eb99ae5.mount: Deactivated successfully.
Nov 29 08:07:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:25.044 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:07:25 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:25.044 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[fe813783-e1f7-42f0-990f-c0de38fe0a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
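[annotation] remove_netns, logged just above, is neutron's privileged wrapper around namespace deletion; pyroute2 exposes the same operation directly (requires root, namespace name from the log):

    from pyroute2 import netns

    netns.remove('ovnmeta-2d9c390c-362a-41a5-93b0-23344eb99ae5')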
Nov 29 08:07:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 251 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 50 KiB/s wr, 119 op/s
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.147 256736 INFO nova.compute.manager [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Took 1.30 seconds to destroy the instance on the hypervisor.
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.148 256736 DEBUG oslo.service.loopingcall [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.149 256736 DEBUG nova.compute.manager [-] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.149 256736 DEBUG nova.network.neutron [-] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:07:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/971782983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.802 256736 DEBUG nova.objects.instance [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.849 256736 DEBUG nova.virt.libvirt.driver [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to attach volume c3f874aa-26a4-44f4-a911-d2e04fcb701a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:07:25 compute-0 nova_compute[256729]: 2025-11-29 08:07:25.853 256736 DEBUG nova.virt.libvirt.guest [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:07:25 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:25 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-c3f874aa-26a4-44f4-a911-d2e04fcb701a">
Nov 29 08:07:25 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:25 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:25 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 08:07:25 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:07:25 compute-0 nova_compute[256729]:   </auth>
Nov 29 08:07:25 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:25 compute-0 nova_compute[256729]:   <serial>c3f874aa-26a4-44f4-a911-d2e04fcb701a</serial>
Nov 29 08:07:25 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:25 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
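[annotation] The XML above is what nova passes to libvirt to hot-plug the RBD-backed volume as vdb. A sketch of the equivalent direct libvirt call (python3-libvirt), with the domain UUID and XML taken from the log; like nova, it applies the change to both the running guest and the persisted definition:

    import libvirt

    DISK_XML = '''<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-c3f874aa-26a4-44f4-a911-d2e04fcb701a">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
    </disk>'''

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('f2cbf4cd-582b-408f-92b1-6b70364babcf')
    dom.attachDeviceFlags(
        DISK_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)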
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.218 256736 DEBUG nova.compute.manager [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-vif-unplugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.219 256736 DEBUG oslo_concurrency.lockutils [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.219 256736 DEBUG oslo_concurrency.lockutils [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.220 256736 DEBUG oslo_concurrency.lockutils [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.220 256736 DEBUG nova.compute.manager [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] No waiting events found dispatching network-vif-unplugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.221 256736 DEBUG nova.compute.manager [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-vif-unplugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.221 256736 DEBUG nova.compute.manager [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.222 256736 DEBUG oslo_concurrency.lockutils [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.222 256736 DEBUG oslo_concurrency.lockutils [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.223 256736 DEBUG oslo_concurrency.lockutils [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.223 256736 DEBUG nova.compute.manager [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] No waiting events found dispatching network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.224 256736 WARNING nova.compute.manager [req-129e29c8-667b-42b8-b5cd-6f6675d8fb33 req-59a2117c-b52d-4aa6-adb3-bfad019c9afc ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received unexpected event network-vif-plugged-55b6aa9b-29fc-4f6b-9ae5-885c514941fa for instance with vm_state active and task_state deleting.
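[annotation] The Acquiring/acquired/released triads above come from oslo.concurrency's named-lock helper serializing access to the instance's pending-event list; "No waiting events found" means no caller was blocked waiting for that network-vif-* event, and since the instance is already deleting, the late network-vif-plugged event is merely logged as unexpected. The lock pattern itself is simply:

    from oslo_concurrency import lockutils

    # Lock name from the log: "<instance uuid>-events".
    with lockutils.lock('10a1a099-bf1a-4195-9186-8f440437a1ce-events'):
        # pop or deliver this instance's pending network-vif-* events here
        pass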
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.646 256736 DEBUG nova.network.neutron [-] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:26 compute-0 ceph-mon[75050]: pgmap v2058: 305 pgs: 305 active+clean; 251 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 50 KiB/s wr, 119 op/s
Nov 29 08:07:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/971782983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.673 256736 INFO nova.compute.manager [-] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Took 1.52 seconds to deallocate network for instance.
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.746 256736 DEBUG nova.compute.manager [req-d27ee8b9-d57c-48e7-be8e-02602ff5819c req-7697ba9f-3b80-433a-9e2c-b289a4bc271e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Received event network-vif-deleted-55b6aa9b-29fc-4f6b-9ae5-885c514941fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.770 256736 DEBUG nova.virt.libvirt.driver [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.770 256736 DEBUG nova.virt.libvirt.driver [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.770 256736 DEBUG nova.virt.libvirt.driver [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.771 256736 DEBUG nova.virt.libvirt.driver [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No VIF found with MAC fa:16:3e:1e:cb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.846 256736 DEBUG nova.network.neutron [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updated VIF entry in instance network info cache for port 55b6aa9b-29fc-4f6b-9ae5-885c514941fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.846 256736 DEBUG nova.network.neutron [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Updating instance_info_cache with network_info: [{"id": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "address": "fa:16:3e:c8:3a:8c", "network": {"id": "2d9c390c-362a-41a5-93b0-23344eb99ae5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-2062671130-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb6854e99614af5b8df420841fde0db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55b6aa9b-29", "ovs_interfaceid": "55b6aa9b-29fc-4f6b-9ae5-885c514941fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.891 256736 DEBUG oslo_concurrency.lockutils [req-09fff8f3-9303-4ca9-942b-3a6cddcab8f6 req-eb41b053-3b08-4b01-ab70-86d604c3925f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-10a1a099-bf1a-4195-9186-8f440437a1ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.921 256736 INFO nova.compute.manager [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Took 0.25 seconds to detach 1 volumes for instance.
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.968 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:26 compute-0 nova_compute[256729]: 2025-11-29 08:07:26.969 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.028 256736 DEBUG oslo_concurrency.lockutils [None req-633215d9-3f37-44bd-8a2e-863bbecabc16 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.063 256736 DEBUG oslo_concurrency.processutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 251 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 20 KiB/s wr, 93 op/s
Nov 29 08:07:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:07:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546532551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.475 256736 DEBUG oslo_concurrency.processutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
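[annotation] For RBD-backed storage the resource tracker shells out to the ceph CLI for cluster usage (exact command above, 0.412s). A standalone equivalent, assuming the same client keyring is readable and the JSON field names emitted by recent Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])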
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.484 256736 DEBUG nova.compute.provider_tree [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.505 256736 DEBUG nova.scheduler.client.report [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
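[annotation] Placement derives schedulable capacity from the inventory above as (total - reserved) * allocation_ratio per resource class, so this host advertises 8 * 4.0 = 32 vCPUs, (7680 - 512) * 1.0 = 7168 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk:

    # Worked capacity computation for the inventory dict logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 52.2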
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.541 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.574 256736 INFO nova.scheduler.client.report [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Deleted allocations for instance 10a1a099-bf1a-4195-9186-8f440437a1ce
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.656 256736 DEBUG oslo_concurrency.lockutils [None req-67a2bf0b-65d2-4a64-959d-90ecb3d77832 9664e420085d412aae898a6ec021b24f dfb6854e99614af5b8df420841fde0db - - default default] Lock "10a1a099-bf1a-4195-9186-8f440437a1ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3546532551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:07:27 compute-0 nova_compute[256729]: 2025-11-29 08:07:27.872 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:28 compute-0 ceph-mon[75050]: pgmap v2059: 305 pgs: 305 active+clean; 251 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 20 KiB/s wr, 93 op/s
Nov 29 08:07:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 251 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 18 KiB/s wr, 89 op/s
Nov 29 08:07:29 compute-0 nova_compute[256729]: 2025-11-29 08:07:29.327 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:29 compute-0 sshd-session[297807]: Connection closed by authenticating user root 143.14.121.41 port 44306 [preauth]
Nov 29 08:07:29 compute-0 nova_compute[256729]: 2025-11-29 08:07:29.809 256736 DEBUG oslo_concurrency.lockutils [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:29 compute-0 nova_compute[256729]: 2025-11-29 08:07:29.809 256736 DEBUG oslo_concurrency.lockutils [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:29 compute-0 nova_compute[256729]: 2025-11-29 08:07:29.826 256736 INFO nova.compute.manager [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Detaching volume c3f874aa-26a4-44f4-a911-d2e04fcb701a
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.021 256736 INFO nova.virt.block_device [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to driver detach volume c3f874aa-26a4-44f4-a911-d2e04fcb701a from mountpoint /dev/vdb
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.035 256736 DEBUG nova.virt.libvirt.driver [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Attempting to detach device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.036 256736 DEBUG nova.virt.libvirt.guest [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-c3f874aa-26a4-44f4-a911-d2e04fcb701a">
Nov 29 08:07:30 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <serial>c3f874aa-26a4-44f4-a911-d2e04fcb701a</serial>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:30 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.047 256736 INFO nova.virt.libvirt.driver [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config.
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.047 256736 DEBUG nova.virt.libvirt.driver [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.048 256736 DEBUG nova.virt.libvirt.guest [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-c3f874aa-26a4-44f4-a911-d2e04fcb701a">
Nov 29 08:07:30 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <serial>c3f874aa-26a4-44f4-a911-d2e04fcb701a</serial>
Nov 29 08:07:30 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:30 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:30 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
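[annotation] For the live domain the driver does not trust the synchronous return alone: it issues the detach and then waits for libvirt's DEVICE_REMOVED event (delivered at 08:07:30.189 for alias virtio-disk1), retrying up to 8 times as the "(1/8)" marker shows. A sketch of that event wiring with python3-libvirt; the callback body is illustrative, and a real caller must also drive the libvirt event loop it registers:

    import libvirt

    DISK_XML = '''<disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-c3f874aa-26a4-44f4-a911-d2e04fcb701a"/>
      <target dev="vdb" bus="virtio"/>
    </disk>'''

    def removed(conn, dom, dev, opaque):
        print('device removed:', dev)  # 'virtio-disk1' in the log above

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('f2cbf4cd-582b-408f-92b1-6b70364babcf')
    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, removed, None)
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)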
Nov 29 08:07:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Nov 29 08:07:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 08:07:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.189 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403650.189027, f2cbf4cd-582b-408f-92b1-6b70364babcf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.192 256736 DEBUG nova.virt.libvirt.driver [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f2cbf4cd-582b-408f-92b1-6b70364babcf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.195 256736 INFO nova.virt.libvirt.driver [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config.
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.422 256736 DEBUG nova.objects.instance [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:30 compute-0 nova_compute[256729]: 2025-11-29 08:07:30.582 256736 DEBUG oslo_concurrency.lockutils [None req-79984ae9-005f-43f2-ac5c-e4bda7a96667 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:30 compute-0 ceph-mon[75050]: pgmap v2060: 305 pgs: 305 active+clean; 251 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 18 KiB/s wr, 89 op/s
Nov 29 08:07:30 compute-0 ceph-mon[75050]: osdmap e409: 3 total, 3 up, 3 in
Nov 29 08:07:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769158583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2769158583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 251 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 18 KiB/s wr, 89 op/s
Nov 29 08:07:31 compute-0 nova_compute[256729]: 2025-11-29 08:07:31.225 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403636.2245343, cde72883-eb73-406e-8301-a92fe1527a26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:31 compute-0 nova_compute[256729]: 2025-11-29 08:07:31.226 256736 INFO nova.compute.manager [-] [instance: cde72883-eb73-406e-8301-a92fe1527a26] VM Stopped (Lifecycle Event)
Nov 29 08:07:31 compute-0 nova_compute[256729]: 2025-11-29 08:07:31.420 256736 DEBUG nova.compute.manager [None req-8775c362-baa7-468e-8006-286992c0e0a5 - - - - - -] [instance: cde72883-eb73-406e-8301-a92fe1527a26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2769158583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2769158583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:32 compute-0 ceph-mon[75050]: pgmap v2062: 305 pgs: 305 active+clean; 251 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 18 KiB/s wr, 89 op/s
Nov 29 08:07:32 compute-0 nova_compute[256729]: 2025-11-29 08:07:32.875 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 210 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 19 KiB/s wr, 67 op/s
Nov 29 08:07:34 compute-0 sshd-session[297958]: Connection closed by authenticating user root 143.14.121.41 port 44316 [preauth]
Nov 29 08:07:34 compute-0 nova_compute[256729]: 2025-11-29 08:07:34.281 256736 DEBUG oslo_concurrency.lockutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:34 compute-0 nova_compute[256729]: 2025-11-29 08:07:34.281 256736 DEBUG oslo_concurrency.lockutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:34 compute-0 nova_compute[256729]: 2025-11-29 08:07:34.329 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:34 compute-0 ceph-mon[75050]: pgmap v2063: 305 pgs: 305 active+clean; 210 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 19 KiB/s wr, 67 op/s
Nov 29 08:07:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 74 KiB/s wr, 67 op/s
Nov 29 08:07:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:07:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:07:35 compute-0 ceph-mon[75050]: pgmap v2064: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 74 KiB/s wr, 67 op/s
Nov 29 08:07:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 84 KiB/s wr, 57 op/s
Nov 29 08:07:37 compute-0 nova_compute[256729]: 2025-11-29 08:07:37.550 256736 DEBUG nova.objects.instance [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:37 compute-0 nova_compute[256729]: 2025-11-29 08:07:37.599 256736 DEBUG oslo_concurrency.lockutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 3.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:37 compute-0 sshd-session[297962]: Connection closed by authenticating user root 143.14.121.41 port 60116 [preauth]
Nov 29 08:07:37 compute-0 nova_compute[256729]: 2025-11-29 08:07:37.820 256736 DEBUG oslo_concurrency.lockutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:37 compute-0 nova_compute[256729]: 2025-11-29 08:07:37.821 256736 DEBUG oslo_concurrency.lockutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:37 compute-0 nova_compute[256729]: 2025-11-29 08:07:37.822 256736 INFO nova.compute.manager [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attaching volume df6b3f1b-f8a7-478f-9506-48047b78992e to /dev/vdb
Nov 29 08:07:37 compute-0 nova_compute[256729]: 2025-11-29 08:07:37.879 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.000 256736 DEBUG os_brick.utils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.003 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.022 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.023 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[0d01ffb7-40b8-47fb-85df-539503eb4750]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.026 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.040 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.040 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[b393f56f-7063-40f3-abfd-ddbd1241bc50]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.044 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.059 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.060 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3d1134-0eb6-43a5-adb1-773960c2fc68]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.062 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[be8b18ca-f344-4704-8514-b420974bbbff]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.063 256736 DEBUG oslo_concurrency.processutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.104 256736 DEBUG oslo_concurrency.processutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.110 256736 DEBUG os_brick.initiator.connectors.lightos [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.111 256736 DEBUG os_brick.initiator.connectors.lightos [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.111 256736 DEBUG os_brick.initiator.connectors.lightos [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.112 256736 DEBUG os_brick.utils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] <== get_connector_properties: return (110ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.113 256736 DEBUG nova.virt.block_device [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating existing volume attachment record: e3f0d0cc-b689-47c8-82e8-6271d8d3ec81 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:38 compute-0 ceph-mon[75050]: pgmap v2065: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 84 KiB/s wr, 57 op/s
Nov 29 08:07:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1515875128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.896 256736 DEBUG nova.objects.instance [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.927 256736 DEBUG nova.virt.libvirt.driver [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to attach volume df6b3f1b-f8a7-478f-9506-48047b78992e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:07:38 compute-0 nova_compute[256729]: 2025-11-29 08:07:38.930 256736 DEBUG nova.virt.libvirt.guest [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:07:38 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:38 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-df6b3f1b-f8a7-478f-9506-48047b78992e">
Nov 29 08:07:38 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:38 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:38 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 08:07:38 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:07:38 compute-0 nova_compute[256729]:   </auth>
Nov 29 08:07:38 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:38 compute-0 nova_compute[256729]:   <serial>df6b3f1b-f8a7-478f-9506-48047b78992e</serial>
Nov 29 08:07:38 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:38 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.103 256736 DEBUG nova.virt.libvirt.driver [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.104 256736 DEBUG nova.virt.libvirt.driver [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.105 256736 DEBUG nova.virt.libvirt.driver [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.105 256736 DEBUG nova.virt.libvirt.driver [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No VIF found with MAC fa:16:3e:1e:cb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 83 KiB/s wr, 46 op/s
Nov 29 08:07:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1515875128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.294 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403644.2930608, 10a1a099-bf1a-4195-9186-8f440437a1ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.295 256736 INFO nova.compute.manager [-] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] VM Stopped (Lifecycle Event)
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.319 256736 DEBUG nova.compute.manager [None req-945b5687-97a7-4507-ae39-d62351137355 - - - - - -] [instance: 10a1a099-bf1a-4195-9186-8f440437a1ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.323 256736 DEBUG oslo_concurrency.lockutils [None req-eed8cc08-7e85-4e77-ba0e-8e41e49f1661 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.331 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-0 ovn_controller[153383]: 2025-11-29T08:07:39Z|00257|binding|INFO|Releasing lport 9c3f688e-b00f-4b58-999f-eca278500698 from this chassis (sb_readonly=0)
Nov 29 08:07:39 compute-0 nova_compute[256729]: 2025-11-29 08:07:39.454 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:39 compute-0 podman[297995]: 2025-11-29 08:07:39.73364496 +0000 UTC m=+0.092379731 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:07:39 compute-0 podman[297996]: 2025-11-29 08:07:39.73512101 +0000 UTC m=+0.090018055 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 29 08:07:39 compute-0 podman[297994]: 2025-11-29 08:07:39.764033008 +0000 UTC m=+0.126791550 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 08:07:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:40 compute-0 ceph-mon[75050]: pgmap v2066: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 83 KiB/s wr, 46 op/s
Nov 29 08:07:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 76 KiB/s wr, 42 op/s
Nov 29 08:07:41 compute-0 sshd-session[297967]: Connection closed by authenticating user root 143.14.121.41 port 60118 [preauth]
Nov 29 08:07:41 compute-0 nova_compute[256729]: 2025-11-29 08:07:41.975 256736 DEBUG oslo_concurrency.lockutils [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:41 compute-0 nova_compute[256729]: 2025-11-29 08:07:41.976 256736 DEBUG oslo_concurrency.lockutils [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:41 compute-0 nova_compute[256729]: 2025-11-29 08:07:41.990 256736 INFO nova.compute.manager [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Detaching volume df6b3f1b-f8a7-478f-9506-48047b78992e
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.137 256736 INFO nova.virt.block_device [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to driver detach volume df6b3f1b-f8a7-478f-9506-48047b78992e from mountpoint /dev/vdb
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.150 256736 DEBUG nova.virt.libvirt.driver [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Attempting to detach device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.151 256736 DEBUG nova.virt.libvirt.guest [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-df6b3f1b-f8a7-478f-9506-48047b78992e">
Nov 29 08:07:42 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <serial>df6b3f1b-f8a7-478f-9506-48047b78992e</serial>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:42 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.162 256736 INFO nova.virt.libvirt.driver [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config.
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.163 256736 DEBUG nova.virt.libvirt.driver [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.164 256736 DEBUG nova.virt.libvirt.guest [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-df6b3f1b-f8a7-478f-9506-48047b78992e">
Nov 29 08:07:42 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <serial>df6b3f1b-f8a7-478f-9506-48047b78992e</serial>
Nov 29 08:07:42 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:42 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:42 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:42 compute-0 ceph-mon[75050]: pgmap v2067: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 76 KiB/s wr, 42 op/s
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.295 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403662.295213, f2cbf4cd-582b-408f-92b1-6b70364babcf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.297 256736 DEBUG nova.virt.libvirt.driver [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f2cbf4cd-582b-408f-92b1-6b70364babcf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.300 256736 INFO nova.virt.libvirt.driver [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config.
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.478 256736 DEBUG nova.objects.instance [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.558 256736 DEBUG oslo_concurrency.lockutils [None req-deb739f0-9a16-4961-aefa-42ca03e76060 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:42 compute-0 nova_compute[256729]: 2025-11-29 08:07:42.882 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 116 KiB/s wr, 45 op/s
Nov 29 08:07:44 compute-0 ceph-mon[75050]: pgmap v2068: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 116 KiB/s wr, 45 op/s
Nov 29 08:07:44 compute-0 sudo[298065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:44 compute-0 sudo[298065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:44 compute-0 sudo[298065]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:44 compute-0 nova_compute[256729]: 2025-11-29 08:07:44.334 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:44 compute-0 sudo[298090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:07:44 compute-0 sudo[298090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:44 compute-0 sudo[298090]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:44 compute-0 sudo[298115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:44 compute-0 sudo[298115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:44 compute-0 sudo[298115]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:44 compute-0 sudo[298140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:07:44 compute-0 sudo[298140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 117 KiB/s wr, 36 op/s
Nov 29 08:07:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:45 compute-0 sudo[298140]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:07:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:07:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:07:45 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:07:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:07:45 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:07:45 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c9898b62-fe86-4b03-b5d8-1dafc6470e16 does not exist
Nov 29 08:07:45 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev e3e5b81e-8556-4f8d-8495-b41435934309 does not exist
Nov 29 08:07:45 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b8ddc797-5d9f-4d8b-809f-917f81665928 does not exist
Nov 29 08:07:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:07:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:07:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:07:45 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:07:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:07:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:07:45 compute-0 sudo[298197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:45 compute-0 sudo[298197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:45 compute-0 sudo[298197]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.480 256736 DEBUG oslo_concurrency.lockutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.481 256736 DEBUG oslo_concurrency.lockutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:45 compute-0 sudo[298222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:07:45 compute-0 sudo[298222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.500 256736 DEBUG nova.objects.instance [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:45 compute-0 sudo[298222]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.537 256736 DEBUG oslo_concurrency.lockutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:45 compute-0 sudo[298247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:45 compute-0 sudo[298247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:45 compute-0 sudo[298247]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:45 compute-0 sudo[298272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:07:45 compute-0 sudo[298272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.731 256736 DEBUG oslo_concurrency.lockutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.731 256736 DEBUG oslo_concurrency.lockutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.732 256736 INFO nova.compute.manager [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attaching volume d8df2601-93e4-4ad2-8abf-280e81e74ff0 to /dev/vdb
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.923 256736 DEBUG os_brick.utils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.925 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.946 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.946 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[92ec473b-5ed1-48e2-9fe9-bb569fce12be]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.948 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.964 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.964 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[01afca15-6805-499e-b0c5-296a111855a0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.966 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.985 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.986 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[97f6166b-f029-4f58-a60d-1be049b69918]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.988 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[5646acd8-96ed-4074-96b8-9ef5efb39b10]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:07:45 compute-0 nova_compute[256729]: 2025-11-29 08:07:45.988 256736 DEBUG oslo_concurrency.processutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.023 256736 DEBUG oslo_concurrency.processutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.026 256736 DEBUG os_brick.initiator.connectors.lightos [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.026 256736 DEBUG os_brick.initiator.connectors.lightos [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.026 256736 DEBUG os_brick.initiator.connectors.lightos [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.026 256736 DEBUG os_brick.utils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] <== get_connector_properties: return (102ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.027 256736 DEBUG nova.virt.block_device [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating existing volume attachment record: 6da22ea5-478c-4f73-af7d-2cf4e9b28e72 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:07:46 compute-0 podman[298344]: 2025-11-29 08:07:46.162899443 +0000 UTC m=+0.036168680 container create cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_black, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:07:46 compute-0 systemd[1]: Started libpod-conmon-cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0.scope.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: pgmap v2069: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 129 KiB/s rd, 117 KiB/s wr, 36 op/s
Nov 29 08:07:46 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:07:46 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:07:46 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:07:46 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:07:46 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:07:46 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:07:46 compute-0 podman[298344]: 2025-11-29 08:07:46.146944502 +0000 UTC m=+0.020213769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:07:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.250348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403666250400, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 593, "num_deletes": 252, "total_data_size": 542079, "memory_usage": 553160, "flush_reason": "Manual Compaction"}
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403666255947, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 535794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36380, "largest_seqno": 36972, "table_properties": {"data_size": 532629, "index_size": 1073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7777, "raw_average_key_size": 19, "raw_value_size": 526080, "raw_average_value_size": 1325, "num_data_blocks": 48, "num_entries": 397, "num_filter_entries": 397, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403635, "oldest_key_time": 1764403635, "file_creation_time": 1764403666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 5653 microseconds, and 2433 cpu microseconds.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.256007) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 535794 bytes OK
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.256025) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.257465) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.257496) EVENT_LOG_v1 {"time_micros": 1764403666257488, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.257517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 538804, prev total WAL file size 538804, number of live WAL files 2.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.258012) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(523KB)], [74(12MB)]
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403666258050, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 13126049, "oldest_snapshot_seqno": -1}
Nov 29 08:07:46 compute-0 podman[298344]: 2025-11-29 08:07:46.268715403 +0000 UTC m=+0.141984670 container init cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 08:07:46 compute-0 podman[298344]: 2025-11-29 08:07:46.279189242 +0000 UTC m=+0.152458479 container start cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:07:46 compute-0 podman[298344]: 2025-11-29 08:07:46.282690549 +0000 UTC m=+0.155959866 container attach cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_black, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:07:46 compute-0 interesting_black[298360]: 167 167
Nov 29 08:07:46 compute-0 systemd[1]: libpod-cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0.scope: Deactivated successfully.
Nov 29 08:07:46 compute-0 podman[298344]: 2025-11-29 08:07:46.286508994 +0000 UTC m=+0.159778241 container died cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_black, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:07:46 compute-0 sshd-session[298060]: Connection closed by authenticating user root 143.14.121.41 port 37796 [preauth]
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6817 keys, 11316427 bytes, temperature: kUnknown
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403666368777, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11316427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11264906, "index_size": 33361, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 173763, "raw_average_key_size": 25, "raw_value_size": 11136475, "raw_average_value_size": 1633, "num_data_blocks": 1332, "num_entries": 6817, "num_filter_entries": 6817, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.369129) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11316427 bytes
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.370354) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.4 rd, 102.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 12.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(45.6) write-amplify(21.1) OK, records in: 7334, records dropped: 517 output_compression: NoCompression
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.370369) EVENT_LOG_v1 {"time_micros": 1764403666370362, "job": 42, "event": "compaction_finished", "compaction_time_micros": 110817, "compaction_time_cpu_micros": 27465, "output_level": 6, "num_output_files": 1, "total_output_size": 11316427, "num_input_records": 7334, "num_output_records": 6817, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403666370543, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403666372475, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.257894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.372579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.372586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.372589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.372591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:46 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:07:46.372593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-685e89a46973d21f5b23d12bc3a28cd90601a2a00caae8ec307fd3b2398b5015-merged.mount: Deactivated successfully.
Nov 29 08:07:46 compute-0 podman[298344]: 2025-11-29 08:07:46.403560704 +0000 UTC m=+0.276829941 container remove cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_black, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:07:46 compute-0 systemd[1]: libpod-conmon-cb3e2b3daaff09966dedf2115b7ac02bb768cb7141b8e759ac89c343abcf2ed0.scope: Deactivated successfully.
Nov 29 08:07:46 compute-0 podman[298385]: 2025-11-29 08:07:46.638036885 +0000 UTC m=+0.068318267 container create 3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 08:07:46 compute-0 systemd[1]: Started libpod-conmon-3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e.scope.
Nov 29 08:07:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:07:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/591489893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:46 compute-0 podman[298385]: 2025-11-29 08:07:46.612949623 +0000 UTC m=+0.043230985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:07:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7ffd5ec938c3fffd0c30156aed8bb0e4a60a43e3dfa21a41adc7c2741dbf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7ffd5ec938c3fffd0c30156aed8bb0e4a60a43e3dfa21a41adc7c2741dbf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7ffd5ec938c3fffd0c30156aed8bb0e4a60a43e3dfa21a41adc7c2741dbf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7ffd5ec938c3fffd0c30156aed8bb0e4a60a43e3dfa21a41adc7c2741dbf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7ffd5ec938c3fffd0c30156aed8bb0e4a60a43e3dfa21a41adc7c2741dbf3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:46 compute-0 podman[298385]: 2025-11-29 08:07:46.748710089 +0000 UTC m=+0.178991461 container init 3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 08:07:46 compute-0 podman[298385]: 2025-11-29 08:07:46.759799835 +0000 UTC m=+0.190081187 container start 3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 08:07:46 compute-0 podman[298385]: 2025-11-29 08:07:46.763285461 +0000 UTC m=+0.193566843 container attach 3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.812 256736 DEBUG nova.objects.instance [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.847 256736 DEBUG nova.virt.libvirt.driver [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to attach volume d8df2601-93e4-4ad2-8abf-280e81e74ff0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.852 256736 DEBUG nova.virt.libvirt.guest [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:07:46 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:46 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-d8df2601-93e4-4ad2-8abf-280e81e74ff0">
Nov 29 08:07:46 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:46 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:46 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 08:07:46 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:07:46 compute-0 nova_compute[256729]:   </auth>
Nov 29 08:07:46 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:46 compute-0 nova_compute[256729]:   <serial>d8df2601-93e4-4ad2-8abf-280e81e74ff0</serial>
Nov 29 08:07:46 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:46 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.985 256736 DEBUG nova.virt.libvirt.driver [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.985 256736 DEBUG nova.virt.libvirt.driver [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.985 256736 DEBUG nova.virt.libvirt.driver [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:07:46 compute-0 nova_compute[256729]: 2025-11-29 08:07:46.985 256736 DEBUG nova.virt.libvirt.driver [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] No VIF found with MAC fa:16:3e:1e:cb:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:07:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 67 KiB/s wr, 23 op/s
Nov 29 08:07:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/591489893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:07:47 compute-0 nova_compute[256729]: 2025-11-29 08:07:47.378 256736 DEBUG oslo_concurrency.lockutils [None req-f3649dc0-86d3-4c83-aa52-e3368731250d 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:07:47 compute-0 ovn_controller[153383]: 2025-11-29T08:07:47Z|00258|binding|INFO|Releasing lport 9c3f688e-b00f-4b58-999f-eca278500698 from this chassis (sb_readonly=0)
Nov 29 08:07:47 compute-0 nova_compute[256729]: 2025-11-29 08:07:47.560 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:47 compute-0 vibrant_rhodes[298401]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:07:47 compute-0 vibrant_rhodes[298401]: --> relative data size: 1.0
Nov 29 08:07:47 compute-0 vibrant_rhodes[298401]: --> All data devices are unavailable
Nov 29 08:07:47 compute-0 nova_compute[256729]: 2025-11-29 08:07:47.884 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:47 compute-0 systemd[1]: libpod-3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e.scope: Deactivated successfully.
Nov 29 08:07:47 compute-0 systemd[1]: libpod-3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e.scope: Consumed 1.084s CPU time.
Nov 29 08:07:47 compute-0 podman[298451]: 2025-11-29 08:07:47.948242223 +0000 UTC m=+0.031330025 container died 3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 08:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d7ffd5ec938c3fffd0c30156aed8bb0e4a60a43e3dfa21a41adc7c2741dbf3-merged.mount: Deactivated successfully.
Nov 29 08:07:48 compute-0 podman[298451]: 2025-11-29 08:07:48.013540026 +0000 UTC m=+0.096627858 container remove 3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:07:48 compute-0 systemd[1]: libpod-conmon-3ed9e529113f4bee6427d20dba01afc7af6513f17036e18bf5edf19cc7b8c24e.scope: Deactivated successfully.
Nov 29 08:07:48 compute-0 sudo[298272]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:48 compute-0 sudo[298466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:48 compute-0 sudo[298466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:48 compute-0 sudo[298466]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:48 compute-0 sudo[298491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:07:48 compute-0 sudo[298491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:48 compute-0 sudo[298491]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:48 compute-0 ceph-mon[75050]: pgmap v2070: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 67 KiB/s wr, 23 op/s
Nov 29 08:07:48 compute-0 sudo[298516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:48 compute-0 sudo[298516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:48 compute-0 sudo[298516]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:48 compute-0 sudo[298541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:07:48 compute-0 sudo[298541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:48 compute-0 podman[298607]: 2025-11-29 08:07:48.807940919 +0000 UTC m=+0.064325037 container create dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:07:48 compute-0 systemd[1]: Started libpod-conmon-dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1.scope.
Nov 29 08:07:48 compute-0 podman[298607]: 2025-11-29 08:07:48.775991477 +0000 UTC m=+0.032375645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:07:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:48 compute-0 podman[298607]: 2025-11-29 08:07:48.927416856 +0000 UTC m=+0.183801024 container init dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:07:48 compute-0 podman[298607]: 2025-11-29 08:07:48.93991014 +0000 UTC m=+0.196294268 container start dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:07:48 compute-0 podman[298607]: 2025-11-29 08:07:48.943922902 +0000 UTC m=+0.200307090 container attach dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:07:48 compute-0 kind_rosalind[298624]: 167 167
Nov 29 08:07:48 compute-0 systemd[1]: libpod-dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1.scope: Deactivated successfully.
Nov 29 08:07:48 compute-0 podman[298607]: 2025-11-29 08:07:48.95038356 +0000 UTC m=+0.206767678 container died dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f5dd02128aefa22442e9635b2b2bc372d8aafa1d94ad04a8072ed5af2459548-merged.mount: Deactivated successfully.
Nov 29 08:07:48 compute-0 podman[298607]: 2025-11-29 08:07:48.998854737 +0000 UTC m=+0.255238825 container remove dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 08:07:49 compute-0 systemd[1]: libpod-conmon-dfba612719d865f3fd157030450ba61d3b0b293ed28ea729b959c293daf8eef1.scope: Deactivated successfully.
Nov 29 08:07:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 60 KiB/s wr, 32 op/s
Nov 29 08:07:49 compute-0 podman[298649]: 2025-11-29 08:07:49.213761008 +0000 UTC m=+0.050002891 container create e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_buck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:07:49 compute-0 systemd[1]: Started libpod-conmon-e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e.scope.
Nov 29 08:07:49 compute-0 podman[298649]: 2025-11-29 08:07:49.194006613 +0000 UTC m=+0.030248516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:07:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0806d0ec73afc727dd948affb035f5427c716081ccc728eda64841fe8a08d5f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0806d0ec73afc727dd948affb035f5427c716081ccc728eda64841fe8a08d5f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0806d0ec73afc727dd948affb035f5427c716081ccc728eda64841fe8a08d5f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0806d0ec73afc727dd948affb035f5427c716081ccc728eda64841fe8a08d5f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:49 compute-0 podman[298649]: 2025-11-29 08:07:49.31888805 +0000 UTC m=+0.155130033 container init e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 29 08:07:49 compute-0 podman[298649]: 2025-11-29 08:07:49.325186064 +0000 UTC m=+0.161427987 container start e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_buck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 08:07:49 compute-0 podman[298649]: 2025-11-29 08:07:49.328827504 +0000 UTC m=+0.165069427 container attach e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:07:49 compute-0 nova_compute[256729]: 2025-11-29 08:07:49.336 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.220 256736 DEBUG oslo_concurrency.lockutils [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.222 256736 DEBUG oslo_concurrency.lockutils [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.242 256736 INFO nova.compute.manager [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Detaching volume d8df2601-93e4-4ad2-8abf-280e81e74ff0
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.381 256736 INFO nova.virt.block_device [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Attempting to driver detach volume d8df2601-93e4-4ad2-8abf-280e81e74ff0 from mountpoint /dev/vdb
Nov 29 08:07:50 compute-0 ceph-mon[75050]: pgmap v2071: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 60 KiB/s wr, 32 op/s
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.577 256736 DEBUG nova.virt.libvirt.driver [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Attempting to detach device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.578 256736 DEBUG nova.virt.libvirt.guest [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-d8df2601-93e4-4ad2-8abf-280e81e74ff0">
Nov 29 08:07:50 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <serial>d8df2601-93e4-4ad2-8abf-280e81e74ff0</serial>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:50 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.587 256736 INFO nova.virt.libvirt.driver [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the persistent domain config.
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.587 256736 DEBUG nova.virt.libvirt.driver [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.588 256736 DEBUG nova.virt.libvirt.guest [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-d8df2601-93e4-4ad2-8abf-280e81e74ff0">
Nov 29 08:07:50 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   </source>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <serial>d8df2601-93e4-4ad2-8abf-280e81e74ff0</serial>
Nov 29 08:07:50 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:07:50 compute-0 nova_compute[256729]: </disk>
Nov 29 08:07:50 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:07:50 compute-0 youthful_buck[298666]: {
Nov 29 08:07:50 compute-0 youthful_buck[298666]:     "0": [
Nov 29 08:07:50 compute-0 youthful_buck[298666]:         {
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "devices": [
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "/dev/loop3"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             ],
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_name": "ceph_lv0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_size": "21470642176",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "name": "ceph_lv0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "tags": {
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cluster_name": "ceph",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.crush_device_class": "",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.encrypted": "0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osd_id": "0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.type": "block",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.vdo": "0"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             },
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "type": "block",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "vg_name": "ceph_vg0"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:         }
Nov 29 08:07:50 compute-0 youthful_buck[298666]:     ],
Nov 29 08:07:50 compute-0 youthful_buck[298666]:     "1": [
Nov 29 08:07:50 compute-0 youthful_buck[298666]:         {
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "devices": [
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "/dev/loop4"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             ],
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_name": "ceph_lv1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_size": "21470642176",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "name": "ceph_lv1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "tags": {
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cluster_name": "ceph",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.crush_device_class": "",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.encrypted": "0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osd_id": "1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.type": "block",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.vdo": "0"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             },
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "type": "block",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "vg_name": "ceph_vg1"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:         }
Nov 29 08:07:50 compute-0 youthful_buck[298666]:     ],
Nov 29 08:07:50 compute-0 youthful_buck[298666]:     "2": [
Nov 29 08:07:50 compute-0 youthful_buck[298666]:         {
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "devices": [
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "/dev/loop5"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             ],
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_name": "ceph_lv2",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_size": "21470642176",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "name": "ceph_lv2",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "tags": {
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.cluster_name": "ceph",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.crush_device_class": "",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.encrypted": "0",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osd_id": "2",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.type": "block",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:                 "ceph.vdo": "0"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             },
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "type": "block",
Nov 29 08:07:50 compute-0 youthful_buck[298666]:             "vg_name": "ceph_vg2"
Nov 29 08:07:50 compute-0 youthful_buck[298666]:         }
Nov 29 08:07:50 compute-0 youthful_buck[298666]:     ]
Nov 29 08:07:50 compute-0 youthful_buck[298666]: }
Nov 29 08:07:50 compute-0 systemd[1]: libpod-e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e.scope: Deactivated successfully.
Nov 29 08:07:50 compute-0 podman[298649]: 2025-11-29 08:07:50.705397433 +0000 UTC m=+1.541639316 container died e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.731 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403670.731118, f2cbf4cd-582b-408f-92b1-6b70364babcf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0806d0ec73afc727dd948affb035f5427c716081ccc728eda64841fe8a08d5f4-merged.mount: Deactivated successfully.
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.734 256736 DEBUG nova.virt.libvirt.driver [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f2cbf4cd-582b-408f-92b1-6b70364babcf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.738 256736 INFO nova.virt.libvirt.driver [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully detached device vdb from instance f2cbf4cd-582b-408f-92b1-6b70364babcf from the live domain config.
Nov 29 08:07:50 compute-0 podman[298649]: 2025-11-29 08:07:50.764152945 +0000 UTC m=+1.600394838 container remove e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 29 08:07:50 compute-0 systemd[1]: libpod-conmon-e3f6298b52b54154413381235a2d55ddf14ff6b40382813ad9f67ef80868c68e.scope: Deactivated successfully.
Nov 29 08:07:50 compute-0 sudo[298541]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:50 compute-0 sudo[298688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:50 compute-0 sudo[298688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:50 compute-0 sudo[298688]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.925 256736 DEBUG nova.objects.instance [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'flavor' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:07:50 compute-0 sudo[298713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:07:50 compute-0 sudo[298713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:50 compute-0 sudo[298713]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:50 compute-0 nova_compute[256729]: 2025-11-29 08:07:50.965 256736 DEBUG oslo_concurrency.lockutils [None req-85cb18aa-fc8a-4f13-951d-355ff3034edd 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
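The "released ... held 0.743s" line closes oslo.concurrency's standard lock trace (the Acquiring/acquired/released trio logged from lockutils.py:404/409/423), produced whenever a callable runs under a named lock. A minimal sketch of the pattern, with the lock name copied from the log and the body elided:

    from oslo_concurrency import lockutils

    # Serializes operations on one instance; entering and leaving the lock
    # emits the same "Acquiring lock ..." / "acquired" / "released :: held
    # N.NNNs" DEBUG lines seen above.
    @lockutils.synchronized('f2cbf4cd-582b-408f-92b1-6b70364babcf')
    def do_detach_volume():
        pass  # detach work elided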
Nov 29 08:07:51 compute-0 sudo[298738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:51 compute-0 sudo[298738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:51 compute-0 sudo[298738]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:51 compute-0 sudo[298763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:07:51 compute-0 sudo[298763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 59 KiB/s wr, 32 op/s
Nov 29 08:07:51 compute-0 podman[298828]: 2025-11-29 08:07:51.567764992 +0000 UTC m=+0.061227741 container create ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:07:51 compute-0 systemd[1]: Started libpod-conmon-ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692.scope.
Nov 29 08:07:51 compute-0 podman[298828]: 2025-11-29 08:07:51.547578495 +0000 UTC m=+0.041041254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:07:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:51 compute-0 podman[298828]: 2025-11-29 08:07:51.664582265 +0000 UTC m=+0.158045094 container init ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:07:51 compute-0 podman[298828]: 2025-11-29 08:07:51.675950298 +0000 UTC m=+0.169413037 container start ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:07:51 compute-0 podman[298828]: 2025-11-29 08:07:51.680105353 +0000 UTC m=+0.173568142 container attach ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:07:51 compute-0 unruffled_meninsky[298844]: 167 167
Nov 29 08:07:51 compute-0 systemd[1]: libpod-ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692.scope: Deactivated successfully.
Nov 29 08:07:51 compute-0 podman[298828]: 2025-11-29 08:07:51.685945344 +0000 UTC m=+0.179408123 container died ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 08:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-77450a0b1d1ff405f42b4654b8a8c6933d448b5d37c4b03fed21732b4136e897-merged.mount: Deactivated successfully.
Nov 29 08:07:51 compute-0 podman[298828]: 2025-11-29 08:07:51.733338022 +0000 UTC m=+0.226800761 container remove ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_meninsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 08:07:51 compute-0 systemd[1]: libpod-conmon-ba53cb7fed5571204542743a6d6311122d840a9f95bcd88da08f0d6ac3796692.scope: Deactivated successfully.
Nov 29 08:07:51 compute-0 podman[298868]: 2025-11-29 08:07:51.910236814 +0000 UTC m=+0.043547963 container create 2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:07:51 compute-0 systemd[1]: Started libpod-conmon-2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9.scope.
Nov 29 08:07:51 compute-0 podman[298868]: 2025-11-29 08:07:51.893250215 +0000 UTC m=+0.026561384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:07:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c4680694f0748d519e10f5301fa3f6735c52bd4ac54d4dd396e90e4d4d9dd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c4680694f0748d519e10f5301fa3f6735c52bd4ac54d4dd396e90e4d4d9dd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c4680694f0748d519e10f5301fa3f6735c52bd4ac54d4dd396e90e4d4d9dd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c4680694f0748d519e10f5301fa3f6735c52bd4ac54d4dd396e90e4d4d9dd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:07:52 compute-0 podman[298868]: 2025-11-29 08:07:52.03039414 +0000 UTC m=+0.163705369 container init 2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:07:52 compute-0 podman[298868]: 2025-11-29 08:07:52.041826506 +0000 UTC m=+0.175137645 container start 2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nightingale, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:07:52 compute-0 podman[298868]: 2025-11-29 08:07:52.044763357 +0000 UTC m=+0.178074526 container attach 2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nightingale, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:07:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:52 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297359595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:52 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297359595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:52 compute-0 ceph-mon[75050]: pgmap v2072: 305 pgs: 305 active+clean; 171 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 59 KiB/s wr, 32 op/s
Nov 29 08:07:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1297359595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1297359595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
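Both audited commands come from a librados client authenticating as client.openstack, i.e. the OpenStack RBD driver polling pool capacity and quota. A hedged reproduction with the rados Python binding; the conffile path is conventional, not read from this host:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        # Same JSON payloads the monitor logs as mon_command(...) above.
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    cluster.shutdown()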
Nov 29 08:07:52 compute-0 nova_compute[256729]: 2025-11-29 08:07:52.886 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:53 compute-0 musing_nightingale[298884]: {
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "osd_id": 2,
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "type": "bluestore"
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:     },
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "osd_id": 1,
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "type": "bluestore"
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:     },
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "osd_id": 0,
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:         "type": "bluestore"
Nov 29 08:07:53 compute-0 musing_nightingale[298884]:     }
Nov 29 08:07:53 compute-0 musing_nightingale[298884]: }
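The JSON block just emitted by the ceph-volume container inventories the three BlueStore OSDs by uuid, id, and backing LV. A short sketch that re-runs the same query and flattens the result; it mirrors the sudo COMMAND above, though invoking a plain `cephadm` binary instead of the copied script under /var/lib/ceph is an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", "14ff1f30-5059-58f1-9a23-69871bb275a1",
         "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for osd_uuid, osd in json.loads(out).items():
        # e.g. "osd.2 bluestore /dev/mapper/ceph_vg2-ceph_lv2"
        print(f"osd.{osd['osd_id']} {osd['type']} {osd['device']}")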
Nov 29 08:07:53 compute-0 systemd[1]: libpod-2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9.scope: Deactivated successfully.
Nov 29 08:07:53 compute-0 podman[298868]: 2025-11-29 08:07:53.148478487 +0000 UTC m=+1.281789676 container died 2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nightingale, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:07:53 compute-0 systemd[1]: libpod-2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9.scope: Consumed 1.108s CPU time.
Nov 29 08:07:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 171 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 107 KiB/s wr, 38 op/s
Nov 29 08:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-32c4680694f0748d519e10f5301fa3f6735c52bd4ac54d4dd396e90e4d4d9dd8-merged.mount: Deactivated successfully.
Nov 29 08:07:53 compute-0 podman[298868]: 2025-11-29 08:07:53.2421 +0000 UTC m=+1.375411159 container remove 2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:07:53 compute-0 systemd[1]: libpod-conmon-2e0209ebace9b4b1da18871d16f4c9aab7ec055034fe86c969a189c6e48fafd9.scope: Deactivated successfully.
Nov 29 08:07:53 compute-0 sudo[298763]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:07:53 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:07:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:07:53 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:07:53 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 4b9e868b-5dba-41cc-a46e-748c28de7b42 does not exist
Nov 29 08:07:53 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3db15b76-cde1-449c-adc7-eb66bc50c9fb does not exist
Nov 29 08:07:53 compute-0 sudo[298930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:07:53 compute-0 sudo[298930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:53 compute-0 sudo[298930]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:53 compute-0 sudo[298955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:07:53 compute-0 sudo[298955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:07:53 compute-0 sudo[298955]: pam_unix(sudo:session): session closed for user root
Nov 29 08:07:53 compute-0 sshd-session[298380]: Connection closed by authenticating user root 143.14.121.41 port 37804 [preauth]
Nov 29 08:07:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2606509767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2606509767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:54 compute-0 ceph-mon[75050]: pgmap v2073: 305 pgs: 305 active+clean; 171 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 107 KiB/s wr, 38 op/s
Nov 29 08:07:54 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:07:54 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:07:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2606509767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2606509767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:54 compute-0 nova_compute[256729]: 2025-11-29 08:07:54.339 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Nov 29 08:07:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 08:07:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Nov 29 08:07:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 172 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 73 KiB/s wr, 40 op/s
Nov 29 08:07:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:07:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415415062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2415415062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:55 compute-0 ceph-mon[75050]: osdmap e410: 3 total, 3 up, 3 in
Nov 29 08:07:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2415415062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2415415062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Nov 29 08:07:56 compute-0 ceph-mon[75050]: pgmap v2075: 305 pgs: 305 active+clean; 172 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 73 KiB/s wr, 40 op/s
Nov 29 08:07:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 08:07:56 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Nov 29 08:07:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 90 KiB/s wr, 100 op/s
Nov 29 08:07:57 compute-0 ceph-mon[75050]: osdmap e411: 3 total, 3 up, 3 in
Nov 29 08:07:57 compute-0 sshd-session[298980]: Connection closed by authenticating user root 143.14.121.41 port 52876 [preauth]
Nov 29 08:07:57 compute-0 nova_compute[256729]: 2025-11-29 08:07:57.889 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Nov 29 08:07:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 08:07:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Nov 29 08:07:58 compute-0 ceph-mon[75050]: pgmap v2077: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 90 KiB/s wr, 100 op/s
Nov 29 08:07:58 compute-0 ceph-mon[75050]: osdmap e412: 3 total, 3 up, 3 in
Nov 29 08:07:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:07:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2558233695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:07:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2558233695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 29 KiB/s wr, 194 op/s
Nov 29 08:07:59 compute-0 nova_compute[256729]: 2025-11-29 08:07:59.341 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:07:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2558233695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:07:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2558233695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:07:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:59.785 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:07:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:59.786 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:07:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:07:59.787 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Nov 29 08:08:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 08:08:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Nov 29 08:08:00 compute-0 ceph-mon[75050]: pgmap v2079: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 29 KiB/s wr, 194 op/s
Nov 29 08:08:00 compute-0 ceph-mon[75050]: osdmap e413: 3 total, 3 up, 3 in
Nov 29 08:08:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 9.3 KiB/s wr, 177 op/s
Nov 29 08:08:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165323289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165323289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3165323289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3165323289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:02 compute-0 sshd-session[298983]: Connection closed by authenticating user root 143.14.121.41 port 52880 [preauth]
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.594 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.595 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.595 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.595 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.596 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.597 256736 INFO nova.compute.manager [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Terminating instance
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.598 256736 DEBUG nova.compute.manager [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:08:02 compute-0 kernel: tapc112be9f-f9 (unregistering): left promiscuous mode
Nov 29 08:08:02 compute-0 NetworkManager[48962]: <info>  [1764403682.6580] device (tapc112be9f-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.673 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:02 compute-0 ovn_controller[153383]: 2025-11-29T08:08:02Z|00259|binding|INFO|Releasing lport c112be9f-f94a-4fd7-bf2c-4f4614918d8f from this chassis (sb_readonly=0)
Nov 29 08:08:02 compute-0 ovn_controller[153383]: 2025-11-29T08:08:02Z|00260|binding|INFO|Setting lport c112be9f-f94a-4fd7-bf2c-4f4614918d8f down in Southbound
Nov 29 08:08:02 compute-0 ovn_controller[153383]: 2025-11-29T08:08:02Z|00261|binding|INFO|Removing iface tapc112be9f-f9 ovn-installed in OVS
Nov 29 08:08:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:02.682 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:cb:1b 10.100.0.4'], port_security=['fa:16:3e:1e:cb:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f2cbf4cd-582b-408f-92b1-6b70364babcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d9be530-6530-495c-aa98-b2316438e1fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c2f274b1f924edba19c49761e8636bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58284a6e-1181-4efe-a885-d8a09d336e99 fd4dfdbf-227c-4e95-b1ab-fa20aeef8912', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f156bc90-0f03-49fe-bc45-8726c0e42606, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=c112be9f-f94a-4fd7-bf2c-4f4614918d8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:02.684 163655 INFO neutron.agent.ovn.metadata.agent [-] Port c112be9f-f94a-4fd7-bf2c-4f4614918d8f in datapath 0d9be530-6530-495c-aa98-b2316438e1fd unbound from our chassis
Nov 29 08:08:02 compute-0 ceph-mon[75050]: pgmap v2081: 305 pgs: 305 active+clean; 170 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 9.3 KiB/s wr, 177 op/s
Nov 29 08:08:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:02.687 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d9be530-6530-495c-aa98-b2316438e1fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:08:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:02.688 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[51d758e1-b29c-4e8e-9d66-c07c4d9fa9b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:02 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:02.689 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd namespace which is not needed anymore
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.720 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:02 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 29 08:08:02 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 20.013s CPU time.
Nov 29 08:08:02 compute-0 systemd-machined[217781]: Machine qemu-25-instance-00000019 terminated.
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.840 256736 INFO nova.virt.libvirt.driver [-] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Instance destroyed successfully.
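"Instance destroyed successfully" follows the hard stop of domain instance-00000019, whose systemd machine scope was just torn down. The hypervisor-side cleanup reduces to roughly this libvirt sequence (URI assumed; Nova's driver also handles graceful shutdown and error paths not shown):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('f2cbf4cd-582b-408f-92b1-6b70364babcf')
    dom.destroy()  # kills QEMU; systemd-machined logs the scope teardown
    dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)  # drop the definition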
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.841 256736 DEBUG nova.objects.instance [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lazy-loading 'resources' on Instance uuid f2cbf4cd-582b-408f-92b1-6b70364babcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:02 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [NOTICE]   (296748) : haproxy version is 2.8.14-c23fe91
Nov 29 08:08:02 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [NOTICE]   (296748) : path to executable is /usr/sbin/haproxy
Nov 29 08:08:02 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [WARNING]  (296748) : Exiting Master process...
Nov 29 08:08:02 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [WARNING]  (296748) : Exiting Master process...
Nov 29 08:08:02 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [ALERT]    (296748) : Current worker (296752) exited with code 143 (Terminated)
Nov 29 08:08:02 compute-0 neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd[296742]: [WARNING]  (296748) : All workers exited. Exiting... (0)
Nov 29 08:08:02 compute-0 systemd[1]: libpod-030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491.scope: Deactivated successfully.
Nov 29 08:08:02 compute-0 podman[299010]: 2025-11-29 08:08:02.878696954 +0000 UTC m=+0.054776593 container died 030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.891 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491-userdata-shm.mount: Deactivated successfully.
Nov 29 08:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1dbc9cb883e66a936cd23585ede11c9716928dc77350d0fee7ec3db836e81fe-merged.mount: Deactivated successfully.
Nov 29 08:08:02 compute-0 podman[299010]: 2025-11-29 08:08:02.920903709 +0000 UTC m=+0.096983348 container cleanup 030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 08:08:02 compute-0 systemd[1]: libpod-conmon-030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491.scope: Deactivated successfully.
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.944 256736 DEBUG nova.compute.manager [req-f57cd1d4-27df-4180-8b60-19e6d6bf7e90 req-9dfceae8-8c49-4314-bb0d-91f7d23e722d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-vif-unplugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.945 256736 DEBUG oslo_concurrency.lockutils [req-f57cd1d4-27df-4180-8b60-19e6d6bf7e90 req-9dfceae8-8c49-4314-bb0d-91f7d23e722d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.946 256736 DEBUG oslo_concurrency.lockutils [req-f57cd1d4-27df-4180-8b60-19e6d6bf7e90 req-9dfceae8-8c49-4314-bb0d-91f7d23e722d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.946 256736 DEBUG oslo_concurrency.lockutils [req-f57cd1d4-27df-4180-8b60-19e6d6bf7e90 req-9dfceae8-8c49-4314-bb0d-91f7d23e722d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.946 256736 DEBUG nova.compute.manager [req-f57cd1d4-27df-4180-8b60-19e6d6bf7e90 req-9dfceae8-8c49-4314-bb0d-91f7d23e722d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] No waiting events found dispatching network-vif-unplugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:02 compute-0 nova_compute[256729]: 2025-11-29 08:08:02.947 256736 DEBUG nova.compute.manager [req-f57cd1d4-27df-4180-8b60-19e6d6bf7e90 req-9dfceae8-8c49-4314-bb0d-91f7d23e722d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-vif-unplugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:08:03 compute-0 podman[299052]: 2025-11-29 08:08:03.002244803 +0000 UTC m=+0.049379974 container remove 030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.005 256736 DEBUG nova.virt.libvirt.vif [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1938153464',display_name='tempest-SnapshotDataIntegrityTests-server-1938153464',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1938153464',id=25,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOzueeqjQFGrOlbn4utB/WDt1HT2fTR9vZS4MlTRSfHyAqmh1iCrJR4YQMfNazhLwEtND2MN7Di+NQETm+Mveut1YrwZowy8OY9ggEZ70bUUWirP0dRn530bPgh4HmSc2A==',key_name='tempest-keypair-18056619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2c2f274b1f924edba19c49761e8636bb',ramdisk_id='',reservation_id='r-ci340j2q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-9894843',owner_user_name='tempest-SnapshotDataIntegrityTests-9894843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d7bf857ed854504b6f769bea1a63cc4',uuid=f2cbf4cd-582b-408f-92b1-6b70364babcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.006 256736 DEBUG nova.network.os_vif_util [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Converting VIF {"id": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "address": "fa:16:3e:1e:cb:1b", "network": {"id": "0d9be530-6530-495c-aa98-b2316438e1fd", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-500407474-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c2f274b1f924edba19c49761e8636bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc112be9f-f9", "ovs_interfaceid": "c112be9f-f94a-4fd7-bf2c-4f4614918d8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.007 256736 DEBUG nova.network.os_vif_util [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=c112be9f-f94a-4fd7-bf2c-4f4614918d8f,network=Network(0d9be530-6530-495c-aa98-b2316438e1fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc112be9f-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.007 256736 DEBUG os_vif [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=c112be9f-f94a-4fd7-bf2c-4f4614918d8f,network=Network(0d9be530-6530-495c-aa98-b2316438e1fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc112be9f-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.008 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b9b6bf-391a-4e2e-becf-82a2f9b5e513]: (4, ('Sat Nov 29 08:08:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd (030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491)\n030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491\nSat Nov 29 08:08:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd (030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491)\n030325629ce6a32e085b87b6e44a7369b1398347744a2c34a5820cb5d35d9491\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.010 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[15a5126e-ae0b-4ac1-93b6-d94f7e9533a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.010 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.011 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc112be9f-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.011 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d9be530-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
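The two DelPortCommand transactions are ovsdbapp wrappers around the same OVSDB mutation, one from Nova for the instance tap and one from the metadata agent for its namespace port. The Nova-side deletion is equivalent to this CLI call (port and bridge names copied from the log; if_exists=True maps to --if-exists):

    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapc112be9f-f9"],
        check=True)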
Nov 29 08:08:03 compute-0 kernel: tap0d9be530-60: left promiscuous mode
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.015 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.040 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.047 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[20cc2b87-c989-4a17-9353-7540a1f2de27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.047 256736 INFO os_vif [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:cb:1b,bridge_name='br-int',has_traffic_filtering=True,id=c112be9f-f94a-4fd7-bf2c-4f4614918d8f,network=Network(0d9be530-6530-495c-aa98-b2316438e1fd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc112be9f-f9')
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.065 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed6446e-7e65-4285-9efe-8debbffd1a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.067 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[eb14b105-947b-43b8-b305-03f51af3427e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.095 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8f17ab-ca01-4817-ac40-9a041ce0fbe7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591333, 'reachable_time': 26095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299082, 'error': None, 'target': 'ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.099 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d9be530-6530-495c-aa98-b2316438e1fd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:08:03 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:03.099 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[69ee1a60-01e5-4524-844e-e91051718eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:08:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d0d9be530\x2d6530\x2d495c\x2daa98\x2db2316438e1fd.mount: Deactivated successfully.
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 168 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 13 KiB/s wr, 234 op/s
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.470 256736 INFO nova.virt.libvirt.driver [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Deleting instance files /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf_del
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.471 256736 INFO nova.virt.libvirt.driver [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Deletion of /var/lib/nova/instances/f2cbf4cd-582b-408f-92b1-6b70364babcf_del complete
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.521 256736 INFO nova.compute.manager [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Took 0.92 seconds to destroy the instance on the hypervisor.
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.522 256736 DEBUG oslo.service.loopingcall [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.522 256736 DEBUG nova.compute.manager [-] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:08:03 compute-0 nova_compute[256729]: 2025-11-29 08:08:03.522 256736 DEBUG nova.network.neutron [-] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.255 256736 DEBUG nova.network.neutron [-] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.281 256736 INFO nova.compute.manager [-] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Took 0.76 seconds to deallocate network for instance.
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.347 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.348 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.437 256736 DEBUG oslo_concurrency.processutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.478 256736 DEBUG nova.compute.manager [req-1bf62bd9-81d8-4107-8ad6-71e297ec7182 req-7495ce4a-a76c-4284-bb5b-fc60b21de2b0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-vif-deleted-c112be9f-f94a-4fd7-bf2c-4f4614918d8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:04 compute-0 ceph-mon[75050]: pgmap v2082: 305 pgs: 305 active+clean; 168 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 178 KiB/s rd, 13 KiB/s wr, 234 op/s
Nov 29 08:08:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:04 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727978458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.923 256736 DEBUG oslo_concurrency.processutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.932 256736 DEBUG nova.compute.provider_tree [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.952 256736 DEBUG nova.scheduler.client.report [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:04 compute-0 nova_compute[256729]: 2025-11-29 08:08:04.980 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.009 256736 INFO nova.scheduler.client.report [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Deleted allocations for instance f2cbf4cd-582b-408f-92b1-6b70364babcf
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.051 256736 DEBUG nova.compute.manager [req-fb349781-d582-41d4-b276-712537b9a5eb req-d8652994-e303-4aed-ad1a-8875f0dc5d8d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received event network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.052 256736 DEBUG oslo_concurrency.lockutils [req-fb349781-d582-41d4-b276-712537b9a5eb req-d8652994-e303-4aed-ad1a-8875f0dc5d8d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.053 256736 DEBUG oslo_concurrency.lockutils [req-fb349781-d582-41d4-b276-712537b9a5eb req-d8652994-e303-4aed-ad1a-8875f0dc5d8d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.053 256736 DEBUG oslo_concurrency.lockutils [req-fb349781-d582-41d4-b276-712537b9a5eb req-d8652994-e303-4aed-ad1a-8875f0dc5d8d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.054 256736 DEBUG nova.compute.manager [req-fb349781-d582-41d4-b276-712537b9a5eb req-d8652994-e303-4aed-ad1a-8875f0dc5d8d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] No waiting events found dispatching network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.054 256736 WARNING nova.compute.manager [req-fb349781-d582-41d4-b276-712537b9a5eb req-d8652994-e303-4aed-ad1a-8875f0dc5d8d ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Received unexpected event network-vif-plugged-c112be9f-f94a-4fd7-bf2c-4f4614918d8f for instance with vm_state deleted and task_state None.
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.084 256736 DEBUG oslo_concurrency.lockutils [None req-79335174-5cc3-4d86-906e-89dd20e715f5 4d7bf857ed854504b6f769bea1a63cc4 2c2f274b1f924edba19c49761e8636bb - - default default] Lock "f2cbf4cd-582b-408f-92b1-6b70364babcf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:05 compute-0 nova_compute[256729]: 2025-11-29 08:08:05.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 140 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 12 KiB/s wr, 187 op/s
Nov 29 08:08:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Nov 29 08:08:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 08:08:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:08:05
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'images', '.mgr']
Nov 29 08:08:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:08:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1727978458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:05 compute-0 ceph-mon[75050]: osdmap e414: 3 total, 3 up, 3 in
Nov 29 08:08:06 compute-0 ceph-mon[75050]: pgmap v2083: 305 pgs: 305 active+clean; 140 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 12 KiB/s wr, 187 op/s
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 9.7 KiB/s wr, 166 op/s
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.175 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.176 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.176 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.176 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.177 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313491148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.602 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/313491148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.770 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.771 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4355MB free_disk=59.95791244506836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.772 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.772 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.829 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.830 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.853 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:07 compute-0 nova_compute[256729]: 2025-11-29 08:08:07.894 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3809538998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:08 compute-0 nova_compute[256729]: 2025-11-29 08:08:08.013 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/711163056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:08 compute-0 nova_compute[256729]: 2025-11-29 08:08:08.300 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:08 compute-0 nova_compute[256729]: 2025-11-29 08:08:08.307 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:08 compute-0 nova_compute[256729]: 2025-11-29 08:08:08.324 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:08 compute-0 nova_compute[256729]: 2025-11-29 08:08:08.349 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:08:08 compute-0 nova_compute[256729]: 2025-11-29 08:08:08.350 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2108250939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2108250939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Nov 29 08:08:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 08:08:08 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Nov 29 08:08:08 compute-0 ceph-mon[75050]: pgmap v2085: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 9.7 KiB/s wr, 166 op/s
Nov 29 08:08:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3809538998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/711163056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2108250939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2108250939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 10 KiB/s wr, 168 op/s
Nov 29 08:08:09 compute-0 nova_compute[256729]: 2025-11-29 08:08:09.351 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:09 compute-0 nova_compute[256729]: 2025-11-29 08:08:09.352 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:08:09 compute-0 nova_compute[256729]: 2025-11-29 08:08:09.396 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:08:09 compute-0 nova_compute[256729]: 2025-11-29 08:08:09.397 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Nov 29 08:08:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 08:08:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Nov 29 08:08:09 compute-0 ceph-mon[75050]: osdmap e415: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 nova_compute[256729]: 2025-11-29 08:08:10.142 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Nov 29 08:08:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 nova_compute[256729]: 2025-11-29 08:08:10.304 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:10 compute-0 podman[299157]: 2025-11-29 08:08:10.705305517 +0000 UTC m=+0.073510030 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 08:08:10 compute-0 podman[299158]: 2025-11-29 08:08:10.749232988 +0000 UTC m=+0.101033199 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:08:10 compute-0 ceph-mon[75050]: pgmap v2087: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 10 KiB/s wr, 168 op/s
Nov 29 08:08:10 compute-0 ceph-mon[75050]: osdmap e416: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 ceph-mon[75050]: osdmap e417: 3 total, 3 up, 3 in
Nov 29 08:08:10 compute-0 podman[299156]: 2025-11-29 08:08:10.761439576 +0000 UTC m=+0.123802808 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 08:08:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Nov 29 08:08:12 compute-0 sshd-session[298997]: Connection closed by authenticating user root 143.14.121.41 port 51662 [preauth]
Nov 29 08:08:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Nov 29 08:08:12 compute-0 ceph-mon[75050]: pgmap v2090: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Nov 29 08:08:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 08:08:12 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Nov 29 08:08:12 compute-0 nova_compute[256729]: 2025-11-29 08:08:12.895 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:13 compute-0 nova_compute[256729]: 2025-11-29 08:08:13.014 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:13 compute-0 nova_compute[256729]: 2025-11-29 08:08:13.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 32 op/s
Nov 29 08:08:13 compute-0 ceph-mon[75050]: osdmap e418: 3 total, 3 up, 3 in
Nov 29 08:08:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/646350520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/646350520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:14 compute-0 ceph-mon[75050]: pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 32 op/s
Nov 29 08:08:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/646350520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/646350520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.7 KiB/s wr, 65 op/s
Nov 29 08:08:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003475232182392394 of space, bias 1.0, pg target 0.10425696547177182 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:08:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:08:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079278216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3079278216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3079278216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3079278216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:16.215 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:08:16 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:16.216 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:08:16 compute-0 nova_compute[256729]: 2025-11-29 08:08:16.216 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:16 compute-0 ceph-mon[75050]: pgmap v2093: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.7 KiB/s wr, 65 op/s
Nov 29 08:08:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2273660842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2273660842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 6.1 KiB/s wr, 91 op/s
Nov 29 08:08:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2273660842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2273660842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:17 compute-0 nova_compute[256729]: 2025-11-29 08:08:17.839 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403682.8376079, f2cbf4cd-582b-408f-92b1-6b70364babcf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:08:17 compute-0 nova_compute[256729]: 2025-11-29 08:08:17.839 256736 INFO nova.compute.manager [-] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] VM Stopped (Lifecycle Event)
Nov 29 08:08:17 compute-0 nova_compute[256729]: 2025-11-29 08:08:17.870 256736 DEBUG nova.compute.manager [None req-1f22ffdf-75fe-4c15-8c7c-3ded679b9699 - - - - - -] [instance: f2cbf4cd-582b-408f-92b1-6b70364babcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:08:17 compute-0 nova_compute[256729]: 2025-11-29 08:08:17.896 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:18 compute-0 nova_compute[256729]: 2025-11-29 08:08:18.016 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:18 compute-0 sshd-session[299220]: Connection closed by authenticating user root 143.14.121.41 port 55542 [preauth]
Nov 29 08:08:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828182924' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828182924' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:18 compute-0 ceph-mon[75050]: pgmap v2094: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 6.1 KiB/s wr, 91 op/s
Nov 29 08:08:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1828182924' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:18 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1828182924' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 5.0 KiB/s wr, 98 op/s
Nov 29 08:08:19 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:19.218 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630012017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630012017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/630012017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/630012017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Nov 29 08:08:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 08:08:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Nov 29 08:08:20 compute-0 ceph-mon[75050]: pgmap v2095: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 5.0 KiB/s wr, 98 op/s
Nov 29 08:08:20 compute-0 ceph-mon[75050]: osdmap e419: 3 total, 3 up, 3 in
Nov 29 08:08:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.8 KiB/s wr, 88 op/s
Nov 29 08:08:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Nov 29 08:08:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 08:08:21 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Nov 29 08:08:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:08:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2316008786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:08:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2316008786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:22 compute-0 ceph-mon[75050]: pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.8 KiB/s wr, 88 op/s
Nov 29 08:08:22 compute-0 ceph-mon[75050]: osdmap e420: 3 total, 3 up, 3 in
Nov 29 08:08:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2316008786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:08:22 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2316008786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:08:22 compute-0 nova_compute[256729]: 2025-11-29 08:08:22.900 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:23 compute-0 nova_compute[256729]: 2025-11-29 08:08:23.017 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.4 KiB/s wr, 118 op/s
Nov 29 08:08:24 compute-0 ceph-mon[75050]: pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.4 KiB/s wr, 118 op/s
Nov 29 08:08:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 2.0 KiB/s wr, 100 op/s
Nov 29 08:08:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:26 compute-0 ceph-mon[75050]: pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 2.0 KiB/s wr, 100 op/s
Nov 29 08:08:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.0 KiB/s wr, 74 op/s
Nov 29 08:08:27 compute-0 nova_compute[256729]: 2025-11-29 08:08:27.902 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:28 compute-0 nova_compute[256729]: 2025-11-29 08:08:28.018 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:28 compute-0 ceph-mon[75050]: pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.0 KiB/s wr, 74 op/s
Nov 29 08:08:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 KiB/s wr, 66 op/s
Nov 29 08:08:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Nov 29 08:08:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 08:08:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Nov 29 08:08:30 compute-0 ceph-mon[75050]: pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 KiB/s wr, 66 op/s
Nov 29 08:08:30 compute-0 ceph-mon[75050]: osdmap e421: 3 total, 3 up, 3 in
Nov 29 08:08:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.6 KiB/s wr, 59 op/s
Nov 29 08:08:32 compute-0 ceph-mon[75050]: pgmap v2104: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.6 KiB/s wr, 59 op/s
Nov 29 08:08:32 compute-0 nova_compute[256729]: 2025-11-29 08:08:32.904 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:33 compute-0 nova_compute[256729]: 2025-11-29 08:08:33.019 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 08:08:34 compute-0 ceph-mon[75050]: pgmap v2105: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 08:08:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:08:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:08:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:08:36 compute-0 ceph-mon[75050]: pgmap v2106: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:08:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:08:37 compute-0 nova_compute[256729]: 2025-11-29 08:08:37.906 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:38 compute-0 nova_compute[256729]: 2025-11-29 08:08:38.021 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:38 compute-0 ceph-mon[75050]: pgmap v2107: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail
Nov 29 08:08:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 102 B/s wr, 0 op/s
Nov 29 08:08:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1754770409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Nov 29 08:08:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 08:08:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Nov 29 08:08:40 compute-0 ceph-mon[75050]: pgmap v2108: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 102 B/s wr, 0 op/s
Nov 29 08:08:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1754770409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 102 B/s wr, 0 op/s
Nov 29 08:08:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Nov 29 08:08:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 08:08:41 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Nov 29 08:08:41 compute-0 ceph-mon[75050]: osdmap e422: 3 total, 3 up, 3 in
Nov 29 08:08:41 compute-0 podman[299225]: 2025-11-29 08:08:41.709339268 +0000 UTC m=+0.076082870 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:41 compute-0 podman[299226]: 2025-11-29 08:08:41.725860465 +0000 UTC m=+0.082207880 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:08:41 compute-0 podman[299224]: 2025-11-29 08:08:41.764950223 +0000 UTC m=+0.129557316 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:42 compute-0 ceph-mon[75050]: pgmap v2110: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 102 B/s wr, 0 op/s
Nov 29 08:08:42 compute-0 ceph-mon[75050]: osdmap e423: 3 total, 3 up, 3 in
Nov 29 08:08:42 compute-0 nova_compute[256729]: 2025-11-29 08:08:42.908 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3264240974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:43 compute-0 nova_compute[256729]: 2025-11-29 08:08:43.022 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.9 KiB/s wr, 22 op/s
Nov 29 08:08:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3264240974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:44 compute-0 ceph-mon[75050]: pgmap v2112: 305 pgs: 305 active+clean; 88 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.9 KiB/s wr, 22 op/s
Nov 29 08:08:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 88 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 34 KiB/s wr, 25 op/s
Nov 29 08:08:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Nov 29 08:08:46 compute-0 ceph-mon[75050]: pgmap v2113: 305 pgs: 305 active+clean; 88 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 34 KiB/s wr, 25 op/s
Nov 29 08:08:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 08:08:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Nov 29 08:08:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 312 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 33 MiB/s wr, 169 op/s
Nov 29 08:08:47 compute-0 ceph-mon[75050]: osdmap e424: 3 total, 3 up, 3 in
Nov 29 08:08:47 compute-0 nova_compute[256729]: 2025-11-29 08:08:47.912 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:48 compute-0 nova_compute[256729]: 2025-11-29 08:08:48.025 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:48 compute-0 ceph-mon[75050]: pgmap v2115: 305 pgs: 305 active+clean; 312 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 33 MiB/s wr, 169 op/s
Nov 29 08:08:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2079700777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 484 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 50 MiB/s wr, 154 op/s
Nov 29 08:08:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2079700777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:50 compute-0 ceph-mon[75050]: pgmap v2116: 305 pgs: 305 active+clean; 484 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 50 MiB/s wr, 154 op/s
Nov 29 08:08:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 484 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 40 MiB/s wr, 124 op/s
Nov 29 08:08:51 compute-0 ovn_controller[153383]: 2025-11-29T08:08:51Z|00262|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 08:08:51 compute-0 ceph-mon[75050]: pgmap v2117: 305 pgs: 305 active+clean; 484 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 40 MiB/s wr, 124 op/s
Nov 29 08:08:52 compute-0 nova_compute[256729]: 2025-11-29 08:08:52.914 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:53 compute-0 nova_compute[256729]: 2025-11-29 08:08:53.027 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 772 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 68 MiB/s wr, 226 op/s
Nov 29 08:08:53 compute-0 sshd-session[299222]: Connection closed by authenticating user root 143.14.121.41 port 55554 [preauth]
Nov 29 08:08:53 compute-0 sudo[299292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:53 compute-0 sudo[299292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:53 compute-0 sudo[299292]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:53 compute-0 sudo[299317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:53 compute-0 sudo[299317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:53 compute-0 sudo[299317]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:53 compute-0 sudo[299342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:53 compute-0 sudo[299342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:53 compute-0 sudo[299342]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:53 compute-0 sudo[299367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:08:53 compute-0 sudo[299367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:54 compute-0 ceph-mon[75050]: pgmap v2118: 305 pgs: 305 active+clean; 772 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 68 MiB/s wr, 226 op/s
Nov 29 08:08:54 compute-0 sudo[299367]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:08:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:08:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:08:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:08:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:08:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1c720173-8c7d-4709-b91a-8493618dc067 does not exist
Nov 29 08:08:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ddc4e365-ae03-4e2a-8220-8ecc068c56fd does not exist
Nov 29 08:08:54 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 09f144c3-4d97-4d85-8f52-293f78482172 does not exist
Nov 29 08:08:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:08:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:08:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:08:54 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:08:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:08:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:54 compute-0 sudo[299424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:54 compute-0 sudo[299424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:54 compute-0 sudo[299424]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.626 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.628 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.653 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:08:54 compute-0 sudo[299449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:54 compute-0 sudo[299449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:54 compute-0 sudo[299449]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.744 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.745 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.757 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.757 256736 INFO nova.compute.claims [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:08:54 compute-0 sudo[299474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:54 compute-0 sudo[299474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:54 compute-0 sudo[299474]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:54 compute-0 sudo[299499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:08:54 compute-0 sudo[299499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:54 compute-0 nova_compute[256729]: 2025-11-29 08:08:54.960 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 928 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 84 MiB/s wr, 229 op/s
Nov 29 08:08:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:08:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:08:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:08:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:08:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/94452579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:55 compute-0 podman[299582]: 2025-11-29 08:08:55.390626087 +0000 UTC m=+0.072152373 container create 6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.399 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.409 256736 DEBUG nova.compute.provider_tree [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.431 256736 DEBUG nova.scheduler.client.report [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:08:55 compute-0 podman[299582]: 2025-11-29 08:08:55.358694886 +0000 UTC m=+0.040221222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:55 compute-0 systemd[1]: Started libpod-conmon-6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568.scope.
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.459 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.461 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:08:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.509 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.510 256736 DEBUG nova.network.neutron [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:08:55 compute-0 podman[299582]: 2025-11-29 08:08:55.523627837 +0000 UTC m=+0.205154173 container init 6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:55 compute-0 podman[299582]: 2025-11-29 08:08:55.536865372 +0000 UTC m=+0.218391658 container start 6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.537 256736 INFO nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:08:55 compute-0 podman[299582]: 2025-11-29 08:08:55.541203423 +0000 UTC m=+0.222729749 container attach 6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 08:08:55 compute-0 systemd[1]: libpod-6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568.scope: Deactivated successfully.
Nov 29 08:08:55 compute-0 happy_lewin[299600]: 167 167
Nov 29 08:08:55 compute-0 conmon[299600]: conmon 6981e3a68d99b5d23bf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568.scope/container/memory.events
Nov 29 08:08:55 compute-0 podman[299605]: 2025-11-29 08:08:55.608444528 +0000 UTC m=+0.038776721 container died 6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 08:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b2e45acedfa4de38e8b4962908069ef9be77940216568e41d8fbae2067a37b6-merged.mount: Deactivated successfully.
Nov 29 08:08:55 compute-0 podman[299605]: 2025-11-29 08:08:55.65344051 +0000 UTC m=+0.083772683 container remove 6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:08:55 compute-0 systemd[1]: libpod-conmon-6981e3a68d99b5d23bf7699689d2813e80aa1811b2ff1e462e04a9d376ca7568.scope: Deactivated successfully.
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.696 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.754 256736 DEBUG nova.policy [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '981b7946a749412f90d3d8148d99486a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.786 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.787 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.788 256736 INFO nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Creating image(s)
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.822 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.859 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.894 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.899 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:55 compute-0 podman[299645]: 2025-11-29 08:08:55.908483868 +0000 UTC m=+0.063199875 container create 15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 08:08:55 compute-0 systemd[1]: Started libpod-conmon-15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d.scope.
Nov 29 08:08:55 compute-0 podman[299645]: 2025-11-29 08:08:55.878671405 +0000 UTC m=+0.033387492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.981 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.982 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.983 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:55 compute-0 nova_compute[256729]: 2025-11-29 08:08:55.983 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10d155754ba204841a9ebf4a9b60c8431a82320182d980c532b312b427079c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10d155754ba204841a9ebf4a9b60c8431a82320182d980c532b312b427079c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10d155754ba204841a9ebf4a9b60c8431a82320182d980c532b312b427079c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10d155754ba204841a9ebf4a9b60c8431a82320182d980c532b312b427079c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e10d155754ba204841a9ebf4a9b60c8431a82320182d980c532b312b427079c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:56 compute-0 podman[299645]: 2025-11-29 08:08:56.021430235 +0000 UTC m=+0.176146282 container init 15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.030 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:56 compute-0 podman[299645]: 2025-11-29 08:08:56.036535162 +0000 UTC m=+0.191251169 container start 15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:08:56 compute-0 podman[299645]: 2025-11-29 08:08:56.041220731 +0000 UTC m=+0.195936858 container attach 15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.044 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:56 compute-0 ceph-mon[75050]: pgmap v2119: 305 pgs: 305 active+clean; 928 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 84 MiB/s wr, 229 op/s
Nov 29 08:08:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/94452579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.440 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Nov 29 08:08:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 08:08:56 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.512 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] resizing rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.629 256736 DEBUG nova.objects.instance [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'migration_context' on Instance uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.649 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.649 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Ensure instance console log exists: /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.650 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.650 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.650 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:56 compute-0 nova_compute[256729]: 2025-11-29 08:08:56.806 256736 DEBUG nova.network.neutron [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Successfully created port: c859afc6-4da0-4faa-8af3-72d4c6d25f9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:08:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 78 MiB/s wr, 230 op/s
Nov 29 08:08:57 compute-0 condescending_satoshi[299700]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:08:57 compute-0 condescending_satoshi[299700]: --> relative data size: 1.0
Nov 29 08:08:57 compute-0 condescending_satoshi[299700]: --> All data devices are unavailable
Nov 29 08:08:57 compute-0 systemd[1]: libpod-15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d.scope: Deactivated successfully.
Nov 29 08:08:57 compute-0 systemd[1]: libpod-15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d.scope: Consumed 1.166s CPU time.
Nov 29 08:08:57 compute-0 podman[299645]: 2025-11-29 08:08:57.290139958 +0000 UTC m=+1.444856005 container died 15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 08:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e10d155754ba204841a9ebf4a9b60c8431a82320182d980c532b312b427079c4-merged.mount: Deactivated successfully.
Nov 29 08:08:57 compute-0 podman[299645]: 2025-11-29 08:08:57.374397173 +0000 UTC m=+1.529113190 container remove 15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_satoshi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:08:57 compute-0 systemd[1]: libpod-conmon-15e399cd4bf41079ebf7dcdbdf37c46b0560645c986295129c6fd394628ee04d.scope: Deactivated successfully.
Nov 29 08:08:57 compute-0 sudo[299499]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:57 compute-0 ceph-mon[75050]: osdmap e425: 3 total, 3 up, 3 in
Nov 29 08:08:57 compute-0 sudo[299852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:57 compute-0 sudo[299852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:57 compute-0 sudo[299852]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:57 compute-0 sudo[299877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:57 compute-0 sudo[299877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:57 compute-0 sudo[299877]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:57 compute-0 sudo[299902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:57 compute-0 sudo[299902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:57 compute-0 sudo[299902]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.777 256736 DEBUG nova.network.neutron [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Successfully updated port: c859afc6-4da0-4faa-8af3-72d4c6d25f9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.795 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.796 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquired lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.796 256736 DEBUG nova.network.neutron [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:08:57 compute-0 sudo[299927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:08:57 compute-0 sudo[299927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.886 256736 DEBUG nova.compute.manager [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-changed-c859afc6-4da0-4faa-8af3-72d4c6d25f9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.886 256736 DEBUG nova.compute.manager [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Refreshing instance network info cache due to event network-changed-c859afc6-4da0-4faa-8af3-72d4c6d25f9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.887 256736 DEBUG oslo_concurrency.lockutils [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.918 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:57 compute-0 nova_compute[256729]: 2025-11-29 08:08:57.948 256736 DEBUG nova.network.neutron [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.028 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:08:58 compute-0 podman[299994]: 2025-11-29 08:08:58.26531799 +0000 UTC m=+0.064770168 container create 0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pike, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 08:08:58 compute-0 systemd[1]: Started libpod-conmon-0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef.scope.
Nov 29 08:08:58 compute-0 podman[299994]: 2025-11-29 08:08:58.238652085 +0000 UTC m=+0.038104323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:58 compute-0 podman[299994]: 2025-11-29 08:08:58.366142873 +0000 UTC m=+0.165595111 container init 0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 08:08:58 compute-0 podman[299994]: 2025-11-29 08:08:58.377402164 +0000 UTC m=+0.176854342 container start 0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pike, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:08:58 compute-0 podman[299994]: 2025-11-29 08:08:58.382843134 +0000 UTC m=+0.182295312 container attach 0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pike, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 08:08:58 compute-0 inspiring_pike[300011]: 167 167
Nov 29 08:08:58 compute-0 systemd[1]: libpod-0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef.scope: Deactivated successfully.
Nov 29 08:08:58 compute-0 podman[299994]: 2025-11-29 08:08:58.387206745 +0000 UTC m=+0.186658943 container died 0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pike, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 08:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-19a06a7395c251c08412a0a64b36126708a6a2e6935226733e807bce61e579cb-merged.mount: Deactivated successfully.
Nov 29 08:08:58 compute-0 podman[299994]: 2025-11-29 08:08:58.445079541 +0000 UTC m=+0.244531689 container remove 0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 08:08:58 compute-0 systemd[1]: libpod-conmon-0e16546dd2786d8e6006a60c8afd1f7118426c4d25fd60916d440c4f6ce1c7ef.scope: Deactivated successfully.
Nov 29 08:08:58 compute-0 ceph-mon[75050]: pgmap v2121: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 139 KiB/s rd, 78 MiB/s wr, 230 op/s
Nov 29 08:08:58 compute-0 podman[300034]: 2025-11-29 08:08:58.687939234 +0000 UTC m=+0.069614302 container create 2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:08:58 compute-0 systemd[1]: Started libpod-conmon-2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168.scope.
Nov 29 08:08:58 compute-0 podman[300034]: 2025-11-29 08:08:58.660763564 +0000 UTC m=+0.042438712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:08:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae255c2ab0b740ae76c12fad6fddcf02fc64ea2e5ef5efd4cc61d1ea303377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae255c2ab0b740ae76c12fad6fddcf02fc64ea2e5ef5efd4cc61d1ea303377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae255c2ab0b740ae76c12fad6fddcf02fc64ea2e5ef5efd4cc61d1ea303377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae255c2ab0b740ae76c12fad6fddcf02fc64ea2e5ef5efd4cc61d1ea303377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:08:58 compute-0 podman[300034]: 2025-11-29 08:08:58.80666544 +0000 UTC m=+0.188340598 container init 2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:08:58 compute-0 podman[300034]: 2025-11-29 08:08:58.819215167 +0000 UTC m=+0.200890235 container start 2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:08:58 compute-0 podman[300034]: 2025-11-29 08:08:58.823190206 +0000 UTC m=+0.204865304 container attach 2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.851 256736 DEBUG nova.network.neutron [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updating instance_info_cache with network_info: [{"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.880 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Releasing lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.881 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Instance network_info: |[{"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.881 256736 DEBUG oslo_concurrency.lockutils [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.882 256736 DEBUG nova.network.neutron [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Refreshing network info cache for port c859afc6-4da0-4faa-8af3-72d4c6d25f9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.886 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Start _get_guest_xml network_info=[{"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.893 256736 WARNING nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.899 256736 DEBUG nova.virt.libvirt.host [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.900 256736 DEBUG nova.virt.libvirt.host [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.906 256736 DEBUG nova.virt.libvirt.host [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.907 256736 DEBUG nova.virt.libvirt.host [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.907 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.907 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.908 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.908 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.908 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.909 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.909 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.909 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.909 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.910 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.910 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.910 256736 DEBUG nova.virt.hardware [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:08:58 compute-0 nova_compute[256729]: 2025-11-29 08:08:58.913 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 65 MiB/s wr, 226 op/s
Nov 29 08:08:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/327523564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.356 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.378 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.382 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:08:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/327523564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:59 compute-0 bold_rhodes[300050]: {
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:     "0": [
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:         {
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "devices": [
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "/dev/loop3"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             ],
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_name": "ceph_lv0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_size": "21470642176",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "name": "ceph_lv0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "tags": {
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cluster_name": "ceph",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.crush_device_class": "",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.encrypted": "0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osd_id": "0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.type": "block",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.vdo": "0"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             },
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "type": "block",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "vg_name": "ceph_vg0"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:         }
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:     ],
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:     "1": [
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:         {
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "devices": [
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "/dev/loop4"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             ],
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_name": "ceph_lv1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_size": "21470642176",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "name": "ceph_lv1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "tags": {
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cluster_name": "ceph",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.crush_device_class": "",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.encrypted": "0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osd_id": "1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.type": "block",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.vdo": "0"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             },
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "type": "block",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "vg_name": "ceph_vg1"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:         }
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:     ],
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:     "2": [
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:         {
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "devices": [
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "/dev/loop5"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             ],
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_name": "ceph_lv2",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_size": "21470642176",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "name": "ceph_lv2",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "tags": {
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.cluster_name": "ceph",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.crush_device_class": "",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.encrypted": "0",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osd_id": "2",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.type": "block",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:                 "ceph.vdo": "0"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             },
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "type": "block",
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:             "vg_name": "ceph_vg2"
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:         }
Nov 29 08:08:59 compute-0 bold_rhodes[300050]:     ]
Nov 29 08:08:59 compute-0 bold_rhodes[300050]: }
Nov 29 08:08:59 compute-0 systemd[1]: libpod-2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168.scope: Deactivated successfully.
Nov 29 08:08:59 compute-0 podman[300034]: 2025-11-29 08:08:59.616292304 +0000 UTC m=+0.997967362 container died 2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4ae255c2ab0b740ae76c12fad6fddcf02fc64ea2e5ef5efd4cc61d1ea303377-merged.mount: Deactivated successfully.
Nov 29 08:08:59 compute-0 podman[300034]: 2025-11-29 08:08:59.678002967 +0000 UTC m=+1.059678035 container remove 2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 08:08:59 compute-0 systemd[1]: libpod-conmon-2b1c9ff7a2b3d4a8b4bb601098fbab3c20fa920d2cd65f1c6984ce8e471bc168.scope: Deactivated successfully.
Nov 29 08:08:59 compute-0 sudo[299927]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:59 compute-0 sudo[300129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:59 compute-0 sudo[300129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:59 compute-0 sudo[300129]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:59.787 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:59.788 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:08:59.788 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:59 compute-0 sudo[300154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:08:59 compute-0 sudo[300154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:59 compute-0 sudo[300154]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:08:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406490365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.899 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.902 256736 DEBUG nova.virt.libvirt.vif [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-971903970',display_name='tempest-TestEncryptedCinderVolumes-server-971903970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-971903970',id=27,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPyxgKsh7PCaaI+tBeWHGeHUDCURjaZ0je4I23fzwWJ/E7nLNAXxSqXV+2eLKbsjY3xXgkiAGZSR5JLwTYFumburEs1G0ZjQEjEzXxvKLkb3fMWfbEdO/q5BsCfMP2zBQ==',key_name='tempest-keypair-1056345848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-v99yo2tc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='981b7946a749412f90d3d8148d99486a',uuid=07bebcf7-a7f6-4074-8d77-e89bbce7f710,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.903 256736 DEBUG nova.network.os_vif_util [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.905 256736 DEBUG nova.network.os_vif_util [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a0:1f,bridge_name='br-int',has_traffic_filtering=True,id=c859afc6-4da0-4faa-8af3-72d4c6d25f9b,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc859afc6-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.908 256736 DEBUG nova.objects.instance [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:08:59 compute-0 sudo[300179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:08:59 compute-0 sudo[300179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:08:59 compute-0 sudo[300179]: pam_unix(sudo:session): session closed for user root
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.930 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <uuid>07bebcf7-a7f6-4074-8d77-e89bbce7f710</uuid>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <name>instance-0000001b</name>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-971903970</nova:name>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:08:58</nova:creationTime>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:user uuid="981b7946a749412f90d3d8148d99486a">tempest-TestEncryptedCinderVolumes-541864957-project-member</nova:user>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:project uuid="062fa36b3fb745529eb64d4b5bb52af6">tempest-TestEncryptedCinderVolumes-541864957</nova:project>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <nova:port uuid="c859afc6-4da0-4faa-8af3-72d4c6d25f9b">
Nov 29 08:08:59 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <system>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <entry name="serial">07bebcf7-a7f6-4074-8d77-e89bbce7f710</entry>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <entry name="uuid">07bebcf7-a7f6-4074-8d77-e89bbce7f710</entry>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </system>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <os>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   </os>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <features>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   </features>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk">
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       </source>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk.config">
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       </source>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:08:59 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:26:a0:1f"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <target dev="tapc859afc6-4d"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/console.log" append="off"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <video>
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </video>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:08:59 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:08:59 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:08:59 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:08:59 compute-0 nova_compute[256729]: </domain>
Nov 29 08:08:59 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.932 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Preparing to wait for external event network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.933 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.933 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.933 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.934 256736 DEBUG nova.virt.libvirt.vif [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-971903970',display_name='tempest-TestEncryptedCinderVolumes-server-971903970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-971903970',id=27,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPyxgKsh7PCaaI+tBeWHGeHUDCURjaZ0je4I23fzwWJ/E7nLNAXxSqXV+2eLKbsjY3xXgkiAGZSR5JLwTYFumburEs1G0ZjQEjEzXxvKLkb3fMWfbEdO/q5BsCfMP2zBQ==',key_name='tempest-keypair-1056345848',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-v99yo2tc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='981b7946a749412f90d3d8148d99486a',uuid=07bebcf7-a7f6-4074-8d77-e89bbce7f710,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.934 256736 DEBUG nova.network.os_vif_util [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.935 256736 DEBUG nova.network.os_vif_util [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a0:1f,bridge_name='br-int',has_traffic_filtering=True,id=c859afc6-4da0-4faa-8af3-72d4c6d25f9b,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc859afc6-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.935 256736 DEBUG os_vif [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a0:1f,bridge_name='br-int',has_traffic_filtering=True,id=c859afc6-4da0-4faa-8af3-72d4c6d25f9b,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc859afc6-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.936 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.936 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.936 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.942 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.943 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc859afc6-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.943 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc859afc6-4d, col_values=(('external_ids', {'iface-id': 'c859afc6-4da0-4faa-8af3-72d4c6d25f9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:a0:1f', 'vm-uuid': '07bebcf7-a7f6-4074-8d77-e89bbce7f710'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.946 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 NetworkManager[48962]: <info>  [1764403739.9482] manager: (tapc859afc6-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.948 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.956 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:08:59 compute-0 nova_compute[256729]: 2025-11-29 08:08:59.957 256736 INFO os_vif [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a0:1f,bridge_name='br-int',has_traffic_filtering=True,id=c859afc6-4da0-4faa-8af3-72d4c6d25f9b,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc859afc6-4d')
Nov 29 08:09:00 compute-0 sudo[300206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:09:00 compute-0 sudo[300206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.011 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.011 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.012 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No VIF found with MAC fa:16:3e:26:a0:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.012 256736 INFO nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Using config drive
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.039 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.046 256736 DEBUG nova.network.neutron [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updated VIF entry in instance network info cache for port c859afc6-4da0-4faa-8af3-72d4c6d25f9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.046 256736 DEBUG nova.network.neutron [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updating instance_info_cache with network_info: [{"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.074 256736 DEBUG oslo_concurrency.lockutils [req-1368f6f0-9f7e-4cbd-ba73-b911d018dc1f req-1ac777f3-b7fa-4061-a383-e441749bc817 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2210991119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Nov 29 08:09:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 08:09:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.371 256736 INFO nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Creating config drive at /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/disk.config
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.382 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkot71vwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:00 compute-0 podman[300290]: 2025-11-29 08:09:00.477754769 +0000 UTC m=+0.068348948 container create 714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:09:00 compute-0 systemd[1]: Started libpod-conmon-714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483.scope.
Nov 29 08:09:00 compute-0 ceph-mon[75050]: pgmap v2122: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 138 KiB/s rd, 65 MiB/s wr, 226 op/s
Nov 29 08:09:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/406490365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2210991119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:00 compute-0 ceph-mon[75050]: osdmap e426: 3 total, 3 up, 3 in
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.527 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkot71vwn" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:00 compute-0 podman[300290]: 2025-11-29 08:09:00.451875815 +0000 UTC m=+0.042470054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.571 256736 DEBUG nova.storage.rbd_utils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.576 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/disk.config 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:00 compute-0 podman[300290]: 2025-11-29 08:09:00.578899601 +0000 UTC m=+0.169493810 container init 714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:00 compute-0 podman[300290]: 2025-11-29 08:09:00.591455787 +0000 UTC m=+0.182049976 container start 714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:09:00 compute-0 podman[300290]: 2025-11-29 08:09:00.59626216 +0000 UTC m=+0.186856389 container attach 714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 08:09:00 compute-0 laughing_johnson[300309]: 167 167
Nov 29 08:09:00 compute-0 systemd[1]: libpod-714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483.scope: Deactivated successfully.
Nov 29 08:09:00 compute-0 podman[300290]: 2025-11-29 08:09:00.602270826 +0000 UTC m=+0.192865015 container died 714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bf3843a4f1c2a29d5b0ff79a25519c7b2ae4773cbf5d7491d09177576d1fe64-merged.mount: Deactivated successfully.
Nov 29 08:09:00 compute-0 podman[300290]: 2025-11-29 08:09:00.654417555 +0000 UTC m=+0.245011754 container remove 714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:00 compute-0 systemd[1]: libpod-conmon-714ab8d9417c70570b320887afb16277758624509ae0120923139dc6c5483483.scope: Deactivated successfully.
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.752 256736 DEBUG oslo_concurrency.processutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/disk.config 07bebcf7-a7f6-4074-8d77-e89bbce7f710_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.754 256736 INFO nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Deleting local config drive /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710/disk.config because it was imported into RBD.
Nov 29 08:09:00 compute-0 kernel: tapc859afc6-4d: entered promiscuous mode
Nov 29 08:09:00 compute-0 NetworkManager[48962]: <info>  [1764403740.8252] manager: (tapc859afc6-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.828 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.832 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:00 compute-0 ovn_controller[153383]: 2025-11-29T08:09:00Z|00263|binding|INFO|Claiming lport c859afc6-4da0-4faa-8af3-72d4c6d25f9b for this chassis.
Nov 29 08:09:00 compute-0 ovn_controller[153383]: 2025-11-29T08:09:00Z|00264|binding|INFO|c859afc6-4da0-4faa-8af3-72d4c6d25f9b: Claiming fa:16:3e:26:a0:1f 10.100.0.8
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.846 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a0:1f 10.100.0.8'], port_security=['fa:16:3e:26:a0:1f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07bebcf7-a7f6-4074-8d77-e89bbce7f710', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dda88d46-9162-4e7c-bb47-793ac4133966', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1893efe9-96a9-44d1-bcc6-35fada673e59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=767afc55-24b1-431b-aeef-ddbbabf80029, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=c859afc6-4da0-4faa-8af3-72d4c6d25f9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.847 163655 INFO neutron.agent.ovn.metadata.agent [-] Port c859afc6-4da0-4faa-8af3-72d4c6d25f9b in datapath dda88d46-9162-4e7c-bb47-793ac4133966 bound to our chassis
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.848 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:09:00 compute-0 podman[300372]: 2025-11-29 08:09:00.856338847 +0000 UTC m=+0.063427671 container create 14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_newton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.871 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[81a326a8-fb60-452d-9b17-ebaa6347d12f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.873 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdda88d46-91 in ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:00 compute-0 systemd-machined[217781]: New machine qemu-27-instance-0000001b.
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.875 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdda88d46-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.875 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d5dc3c0f-fc4d-4dec-a94a-e31edc264a38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.876 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[28c75829-685d-4a0c-a67c-2aa73eb0a06f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:00 compute-0 systemd-udevd[300398]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.888 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8b3018-ad55-4512-b8fe-0abf58a6dd6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:00 compute-0 NetworkManager[48962]: <info>  [1764403740.8956] device (tapc859afc6-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:00 compute-0 NetworkManager[48962]: <info>  [1764403740.8975] device (tapc859afc6-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:00 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.909 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:00 compute-0 podman[300372]: 2025-11-29 08:09:00.820899499 +0000 UTC m=+0.027988363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:09:00 compute-0 ovn_controller[153383]: 2025-11-29T08:09:00Z|00265|binding|INFO|Setting lport c859afc6-4da0-4faa-8af3-72d4c6d25f9b ovn-installed in OVS
Nov 29 08:09:00 compute-0 ovn_controller[153383]: 2025-11-29T08:09:00Z|00266|binding|INFO|Setting lport c859afc6-4da0-4faa-8af3-72d4c6d25f9b up in Southbound
Nov 29 08:09:00 compute-0 nova_compute[256729]: 2025-11-29 08:09:00.916 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.916 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd1da20-7da5-492f-b51b-76939e8dfbfd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:00 compute-0 systemd[1]: Started libpod-conmon-14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa.scope.
Nov 29 08:09:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.953 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[b218d649-186f-4ed0-9514-c7f46a657040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7ada35c3ff19be868bcf9a8c0a06eaf0d213eaa8cc0dd87e269a23aee38f7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7ada35c3ff19be868bcf9a8c0a06eaf0d213eaa8cc0dd87e269a23aee38f7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7ada35c3ff19be868bcf9a8c0a06eaf0d213eaa8cc0dd87e269a23aee38f7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7ada35c3ff19be868bcf9a8c0a06eaf0d213eaa8cc0dd87e269a23aee38f7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:00 compute-0 NetworkManager[48962]: <info>  [1764403740.9669] manager: (tapdda88d46-90): new Veth device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 08:09:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:00.966 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1d8e35-8285-4395-ac38-998d7b7dba66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:00 compute-0 podman[300372]: 2025-11-29 08:09:00.968125343 +0000 UTC m=+0.175214217 container init 14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_newton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:09:00 compute-0 podman[300372]: 2025-11-29 08:09:00.978764886 +0000 UTC m=+0.185853700 container start 14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 08:09:00 compute-0 podman[300372]: 2025-11-29 08:09:00.983045384 +0000 UTC m=+0.190134238 container attach 14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_newton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.005 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[9adfd234-b747-45de-a48b-3908d02992ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.008 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[d21bb4bb-a786-475a-acb8-fa6cef49a916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 NetworkManager[48962]: <info>  [1764403741.0318] device (tapdda88d46-90): carrier: link connected
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.041 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[acf59c25-064d-47e5-be04-3e5b9e4c9459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.061 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[02888865-23a4-4f54-a6d0-ca0877992006]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdda88d46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6b:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605609, 'reachable_time': 30859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300439, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.078 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a9f477-3ff2-49ef-9634-f029a320cdf2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:6bec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605609, 'tstamp': 605609}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300440, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.102 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[611e8637-7d7e-4f2f-97f5-36ab867c504a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdda88d46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6b:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605609, 'reachable_time': 30859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300441, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
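[Note: the two RTM_NEWLINK replies above are netlink link dumps taken inside the ovnmeta namespace on behalf of the agent. For reference, a minimal pyroute2 sketch that reads the same IFLA_* attributes; the namespace name is taken from the target= field in the logged headers, the rest is illustrative and requires root plus an existing namespace, and the agent's actual privsep-wrapped call may differ:

    from pyroute2 import NetNS

    NS = 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966'  # target= in the replies above

    with NetNS(NS) as ns:
        for link in ns.get_links():
            # Attribute names match the keys in the logged reply payload.
            print(link.get_attr('IFLA_IFNAME'),     # tapdda88d46-91
                  link.get_attr('IFLA_OPERSTATE'),  # UP
                  link.get_attr('IFLA_ADDRESS'))    # fa:16:3e:fc:6b:ec
]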
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.144 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[09328b1e-debd-49b2-a834-34ee6e10f453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 45 MiB/s wr, 131 op/s
Nov 29 08:09:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Nov 29 08:09:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 08:09:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.231 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e9588ca1-2733-454a-adcb-ac608caf35ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.233 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdda88d46-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.233 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.234 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdda88d46-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:01 compute-0 NetworkManager[48962]: <info>  [1764403741.2367] manager: (tapdda88d46-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.236 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 kernel: tapdda88d46-90: entered promiscuous mode
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.244 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdda88d46-90, col_values=(('external_ids', {'iface-id': 'bf50d5e3-cc9a-491e-8a5a-4b199a4df39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:01 compute-0 ovn_controller[153383]: 2025-11-29T08:09:01Z|00267|binding|INFO|Releasing lport bf50d5e3-cc9a-491e-8a5a-4b199a4df39f from this chassis (sb_readonly=0)
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.256 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.276 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.277 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.278 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4e112f-7502-4841-a4c2-985871448dd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
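[Note: the ENOENT on the .pid.haproxy file at 08:09:01.277 is the normal first-start path: the agent looks for an existing proxy pid before rendering a config and spawning haproxy. A simplified Python equivalent of the get_value_from_file helper referenced in that line; the signature and converter default are assumptions, not the exact neutron code:

    def get_value_from_file(path, converter=int):
        # Return the converted file contents, or None when the file is
        # missing or malformed; a missing pidfile is what leads to the
        # config render and haproxy spawn logged below.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None

    pid = get_value_from_file('/var/lib/neutron/external/pids/'
                              'dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy')
    print(pid)  # None on first start, an int once haproxy is up
]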
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.279 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:01.281 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'env', 'PROCESS_TAG=haproxy-dda88d46-9162-4e7c-bb47-793ac4133966', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dda88d46-9162-4e7c-bb47-793ac4133966.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.292 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403741.292106, 07bebcf7-a7f6-4074-8d77-e89bbce7f710 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.293 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] VM Started (Lifecycle Event)
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.332 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.340 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403741.2961764, 07bebcf7-a7f6-4074-8d77-e89bbce7f710 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.340 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] VM Paused (Lifecycle Event)
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.362 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.367 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.386 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.623 256736 DEBUG nova.compute.manager [req-27b68f59-eb88-4904-8072-55fe6a2b76b2 req-0fda920f-5c8e-40d7-a42b-947eb69d5188 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.623 256736 DEBUG oslo_concurrency.lockutils [req-27b68f59-eb88-4904-8072-55fe6a2b76b2 req-0fda920f-5c8e-40d7-a42b-947eb69d5188 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.624 256736 DEBUG oslo_concurrency.lockutils [req-27b68f59-eb88-4904-8072-55fe6a2b76b2 req-0fda920f-5c8e-40d7-a42b-947eb69d5188 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.624 256736 DEBUG oslo_concurrency.lockutils [req-27b68f59-eb88-4904-8072-55fe6a2b76b2 req-0fda920f-5c8e-40d7-a42b-947eb69d5188 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.625 256736 DEBUG nova.compute.manager [req-27b68f59-eb88-4904-8072-55fe6a2b76b2 req-0fda920f-5c8e-40d7-a42b-947eb69d5188 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Processing event network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.626 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.629 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403741.6296034, 07bebcf7-a7f6-4074-8d77-e89bbce7f710 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.630 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] VM Resumed (Lifecycle Event)
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.632 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.637 256736 INFO nova.virt.libvirt.driver [-] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Instance spawned successfully.
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.638 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.654 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.662 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
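[Note: the numeric states in the two sync_power_state lines above follow the nova.compute.power_state constants. A small decoding sketch; the constant values match the nova source tree, while the helper name is purely illustrative:

    # Values as defined in nova.compute.power_state.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    def describe_sync(db_state, vm_state):
        return ('DB power_state=%s, hypervisor power_state=%s'
                % (POWER_STATE[db_state], POWER_STATE[vm_state]))

    print(describe_sync(0, 3))  # the "Paused" event at 08:09:01.367
    print(describe_sync(0, 1))  # the "Resumed" event at 08:09:01.662
]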
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.667 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.667 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.668 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.669 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:01 compute-0 podman[300515]: 2025-11-29 08:09:01.670191507 +0000 UTC m=+0.074832175 container create 260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.670 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.670 256736 DEBUG nova.virt.libvirt.driver [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.690 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:01 compute-0 systemd[1]: Started libpod-conmon-260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2.scope.
Nov 29 08:09:01 compute-0 podman[300515]: 2025-11-29 08:09:01.631036678 +0000 UTC m=+0.035677336 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.737 256736 INFO nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Took 5.95 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.738 256736 DEBUG nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9195b021ccf2192c16e8e4df32998b1e109c1ba678cc54fd504790282ae1f44/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:01 compute-0 podman[300515]: 2025-11-29 08:09:01.773317883 +0000 UTC m=+0.177958611 container init 260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 08:09:01 compute-0 podman[300515]: 2025-11-29 08:09:01.782170448 +0000 UTC m=+0.186811136 container start 260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.801 256736 INFO nova.compute.manager [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Took 7.09 seconds to build instance.
Nov 29 08:09:01 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [NOTICE]   (300544) : New worker (300549) forked
Nov 29 08:09:01 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [NOTICE]   (300544) : Loading success.
Nov 29 08:09:01 compute-0 nova_compute[256729]: 2025-11-29 08:09:01.823 256736 DEBUG oslo_concurrency.lockutils [None req-6cecbbc8-5d42-49f7-a4a3-11459973b412 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:01 compute-0 keen_newton[300406]: {
Nov 29 08:09:01 compute-0 keen_newton[300406]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "osd_id": 2,
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "type": "bluestore"
Nov 29 08:09:01 compute-0 keen_newton[300406]:     },
Nov 29 08:09:01 compute-0 keen_newton[300406]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "osd_id": 1,
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "type": "bluestore"
Nov 29 08:09:01 compute-0 keen_newton[300406]:     },
Nov 29 08:09:01 compute-0 keen_newton[300406]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "osd_id": 0,
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:09:01 compute-0 keen_newton[300406]:         "type": "bluestore"
Nov 29 08:09:01 compute-0 keen_newton[300406]:     }
Nov 29 08:09:01 compute-0 keen_newton[300406]: }
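[Note: the keen_newton container above prints a ceph-volume style JSON inventory keyed by OSD UUID. A minimal parse of that structure, with one entry copied verbatim from the log so the sketch is self-contained:

    import json

    raw = '''{
        "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
            "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
            "type": "bluestore"
        }
    }'''

    # Map each OSD to its backing device, as an operator would when
    # cross-checking the "3 total, 3 up, 3 in" osdmap lines nearby.
    for osd_uuid, meta in json.loads(raw).items():
        print('osd.%(osd_id)s on %(device)s (%(type)s)' % meta)
]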
Nov 29 08:09:02 compute-0 systemd[1]: libpod-14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa.scope: Deactivated successfully.
Nov 29 08:09:02 compute-0 podman[300372]: 2025-11-29 08:09:02.004560946 +0000 UTC m=+1.211649770 container died 14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 08:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a7ada35c3ff19be868bcf9a8c0a06eaf0d213eaa8cc0dd87e269a23aee38f7e-merged.mount: Deactivated successfully.
Nov 29 08:09:02 compute-0 podman[300372]: 2025-11-29 08:09:02.107109835 +0000 UTC m=+1.314198659 container remove 14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 08:09:02 compute-0 systemd[1]: libpod-conmon-14396495ecb70055e9cd7e8d7750710bc1c4d669ee020b3230bfe1664e2c44fa.scope: Deactivated successfully.
Nov 29 08:09:02 compute-0 sudo[300206]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:02 compute-0 nova_compute[256729]: 2025-11-29 08:09:02.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:02 compute-0 nova_compute[256729]: 2025-11-29 08:09:02.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:09:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:09:02 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:09:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:09:02 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:09:02 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev cb8e0d28-931d-4988-bd7b-099f44057965 does not exist
Nov 29 08:09:02 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 6089a4fe-1542-435b-81f1-24c19c664d4e does not exist
Nov 29 08:09:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Nov 29 08:09:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 08:09:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Nov 29 08:09:02 compute-0 ceph-mon[75050]: pgmap v2124: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 45 MiB/s wr, 131 op/s
Nov 29 08:09:02 compute-0 ceph-mon[75050]: osdmap e427: 3 total, 3 up, 3 in
Nov 29 08:09:02 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:09:02 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:09:02 compute-0 sudo[300583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:09:02 compute-0 sudo[300583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:02 compute-0 sudo[300583]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:02 compute-0 sudo[300608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:09:02 compute-0 sudo[300608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:09:02 compute-0 sudo[300608]: pam_unix(sudo:session): session closed for user root
Nov 29 08:09:02 compute-0 nova_compute[256729]: 2025-11-29 08:09:02.920 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.6 MiB/s wr, 41 op/s
Nov 29 08:09:03 compute-0 ceph-mon[75050]: osdmap e428: 3 total, 3 up, 3 in
Nov 29 08:09:03 compute-0 nova_compute[256729]: 2025-11-29 08:09:03.730 256736 DEBUG nova.compute.manager [req-b40862d9-5eec-4430-b055-fb5b05036500 req-f488548e-0d10-4fe8-be7c-b9542d0490a8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:03 compute-0 nova_compute[256729]: 2025-11-29 08:09:03.730 256736 DEBUG oslo_concurrency.lockutils [req-b40862d9-5eec-4430-b055-fb5b05036500 req-f488548e-0d10-4fe8-be7c-b9542d0490a8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:03 compute-0 nova_compute[256729]: 2025-11-29 08:09:03.731 256736 DEBUG oslo_concurrency.lockutils [req-b40862d9-5eec-4430-b055-fb5b05036500 req-f488548e-0d10-4fe8-be7c-b9542d0490a8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:03 compute-0 nova_compute[256729]: 2025-11-29 08:09:03.731 256736 DEBUG oslo_concurrency.lockutils [req-b40862d9-5eec-4430-b055-fb5b05036500 req-f488548e-0d10-4fe8-be7c-b9542d0490a8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:03 compute-0 nova_compute[256729]: 2025-11-29 08:09:03.732 256736 DEBUG nova.compute.manager [req-b40862d9-5eec-4430-b055-fb5b05036500 req-f488548e-0d10-4fe8-be7c-b9542d0490a8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] No waiting events found dispatching network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:03 compute-0 nova_compute[256729]: 2025-11-29 08:09:03.732 256736 WARNING nova.compute.manager [req-b40862d9-5eec-4430-b055-fb5b05036500 req-f488548e-0d10-4fe8-be7c-b9542d0490a8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received unexpected event network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b for instance with vm_state active and task_state None.
Nov 29 08:09:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/895366699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:04 compute-0 NetworkManager[48962]: <info>  [1764403744.0731] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 08:09:04 compute-0 NetworkManager[48962]: <info>  [1764403744.0749] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 08:09:04 compute-0 nova_compute[256729]: 2025-11-29 08:09:04.071 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:04 compute-0 nova_compute[256729]: 2025-11-29 08:09:04.167 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Nov 29 08:09:04 compute-0 ceph-mon[75050]: pgmap v2127: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 6.6 MiB/s wr, 41 op/s
Nov 29 08:09:04 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/895366699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:04 compute-0 nova_compute[256729]: 2025-11-29 08:09:04.243 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 08:09:04 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Nov 29 08:09:04 compute-0 ovn_controller[153383]: 2025-11-29T08:09:04Z|00268|binding|INFO|Releasing lport bf50d5e3-cc9a-491e-8a5a-4b199a4df39f from this chassis (sb_readonly=0)
Nov 29 08:09:04 compute-0 nova_compute[256729]: 2025-11-29 08:09:04.259 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:04 compute-0 nova_compute[256729]: 2025-11-29 08:09:04.945 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:05 compute-0 nova_compute[256729]: 2025-11-29 08:09:05.142 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 38 KiB/s wr, 136 op/s
Nov 29 08:09:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Nov 29 08:09:05 compute-0 ceph-mon[75050]: osdmap e429: 3 total, 3 up, 3 in
Nov 29 08:09:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 08:09:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:09:05
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'vms', '.mgr', 'images', 'default.rgw.control']
Nov 29 08:09:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:09:05 compute-0 nova_compute[256729]: 2025-11-29 08:09:05.815 256736 DEBUG nova.compute.manager [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-changed-c859afc6-4da0-4faa-8af3-72d4c6d25f9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:05 compute-0 nova_compute[256729]: 2025-11-29 08:09:05.815 256736 DEBUG nova.compute.manager [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Refreshing instance network info cache due to event network-changed-c859afc6-4da0-4faa-8af3-72d4c6d25f9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:05 compute-0 nova_compute[256729]: 2025-11-29 08:09:05.816 256736 DEBUG oslo_concurrency.lockutils [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:05 compute-0 nova_compute[256729]: 2025-11-29 08:09:05.816 256736 DEBUG oslo_concurrency.lockutils [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:05 compute-0 nova_compute[256729]: 2025-11-29 08:09:05.817 256736 DEBUG nova.network.neutron [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Refreshing network info cache for port c859afc6-4da0-4faa-8af3-72d4c6d25f9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:06 compute-0 ceph-mon[75050]: pgmap v2129: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 38 KiB/s wr, 136 op/s
Nov 29 08:09:06 compute-0 ceph-mon[75050]: osdmap e430: 3 total, 3 up, 3 in
Nov 29 08:09:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2482303743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.166 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:09:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 204 op/s
Nov 29 08:09:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2482303743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.923 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.957 256736 DEBUG nova.network.neutron [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updated VIF entry in instance network info cache for port c859afc6-4da0-4faa-8af3-72d4c6d25f9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.957 256736 DEBUG nova.network.neutron [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updating instance_info_cache with network_info: [{"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:07 compute-0 nova_compute[256729]: 2025-11-29 08:09:07.982 256736 DEBUG oslo_concurrency.lockutils [req-74f7d0ea-8739-4bb6-9e9e-2232a9afb845 req-a2780cfd-02d7-4269-bb48-a1d0156bf968 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
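[Note: the instance_info_cache update at 08:09:07.957 carries the full network_info document for port c859afc6-4da0-4faa-8af3-72d4c6d25f9b. A trimmed parse showing where the fixed and floating addresses nest; the addresses are verbatim from the log, but the JSON here is deliberately abbreviated to the fields the sketch touches:

    import json

    vifs = json.loads('''[{"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.8", "type": "fixed",
          "floating_ips": [{"address": "192.168.122.177", "type": "floating"}]}]}]}}]''')

    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], '->', floats or 'no floating IP')
]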
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.168 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.169 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.194 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.195 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.195 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.196 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.197 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:08 compute-0 ceph-mon[75050]: pgmap v2131: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 204 op/s
Nov 29 08:09:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117146974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1117146974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563606514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.782 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.890 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:09:08 compute-0 nova_compute[256729]: 2025-11-29 08:09:08.890 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.109 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.110 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4137MB free_disk=59.96735763549805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.110 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.111 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 1.2 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.5 MiB/s wr, 169 op/s
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.260 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 07bebcf7-a7f6-4074-8d77-e89bbce7f710 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.260 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.261 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.329 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1117146974' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1117146974' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/563606514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:09 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2560825876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.817 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.823 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.843 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.870 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.870 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:09 compute-0 nova_compute[256729]: 2025-11-29 08:09:09.947 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:10 compute-0 ceph-mon[75050]: pgmap v2132: 305 pgs: 305 active+clean; 1.2 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.5 MiB/s wr, 169 op/s
Nov 29 08:09:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2560825876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:10 compute-0 nova_compute[256729]: 2025-11-29 08:09:10.807 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:10 compute-0 nova_compute[256729]: 2025-11-29 08:09:10.808 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:09:10 compute-0 nova_compute[256729]: 2025-11-29 08:09:10.808 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:09:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 1.2 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.5 MiB/s wr, 147 op/s
Nov 29 08:09:11 compute-0 nova_compute[256729]: 2025-11-29 08:09:11.469 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:11 compute-0 nova_compute[256729]: 2025-11-29 08:09:11.470 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:11 compute-0 nova_compute[256729]: 2025-11-29 08:09:11.470 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:09:11 compute-0 nova_compute[256729]: 2025-11-29 08:09:11.471 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:12 compute-0 ceph-mon[75050]: pgmap v2133: 305 pgs: 305 active+clean; 1.2 GiB data, 1.5 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.5 MiB/s wr, 147 op/s
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.699 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updating instance_info_cache with network_info: [{"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.725 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-07bebcf7-a7f6-4074-8d77-e89bbce7f710" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.726 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.727 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.728 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.728 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.728 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:12 compute-0 podman[300682]: 2025-11-29 08:09:12.7314893 +0000 UTC m=+0.083702601 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:09:12 compute-0 podman[300681]: 2025-11-29 08:09:12.743934543 +0000 UTC m=+0.093110290 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:09:12 compute-0 podman[300680]: 2025-11-29 08:09:12.7568388 +0000 UTC m=+0.122906613 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 08:09:12 compute-0 nova_compute[256729]: 2025-11-29 08:09:12.925 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 1.4 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 1.3 MiB/s rd, 28 MiB/s wr, 188 op/s
Nov 29 08:09:13 compute-0 nova_compute[256729]: 2025-11-29 08:09:13.197 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:14 compute-0 ceph-mon[75050]: pgmap v2134: 305 pgs: 305 active+clean; 1.4 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 1.3 MiB/s rd, 28 MiB/s wr, 188 op/s
Nov 29 08:09:14 compute-0 nova_compute[256729]: 2025-11-29 08:09:14.773 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:14 compute-0 nova_compute[256729]: 2025-11-29 08:09:14.800 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Triggering sync for uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 08:09:14 compute-0 nova_compute[256729]: 2025-11-29 08:09:14.801 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:14 compute-0 nova_compute[256729]: 2025-11-29 08:09:14.802 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:14 compute-0 nova_compute[256729]: 2025-11-29 08:09:14.843 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:14 compute-0 nova_compute[256729]: 2025-11-29 08:09:14.996 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 1.6 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 1.2 MiB/s rd, 45 MiB/s wr, 173 op/s
Nov 29 08:09:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.024705371532525885 of space, bias 1.0, pg target 7.411611459757766 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013042454173992315 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1951896427524907 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005962264765253629 quantized to 16 (current 16)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.452830956567037e-05 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006334906313081982 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:09:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014905661913134074 quantized to 32 (current 32)
Nov 29 08:09:15 compute-0 ovn_controller[153383]: 2025-11-29T08:09:15Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:a0:1f 10.100.0.8
Nov 29 08:09:15 compute-0 ovn_controller[153383]: 2025-11-29T08:09:15Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:a0:1f 10.100.0.8
Nov 29 08:09:16 compute-0 nova_compute[256729]: 2025-11-29 08:09:16.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:09:16 compute-0 sshd-session[299290]: ssh_dispatch_run_fatal: Connection from 143.14.121.41 port 45038: Connection timed out [preauth]
Nov 29 08:09:16 compute-0 ceph-mon[75050]: pgmap v2135: 305 pgs: 305 active+clean; 1.6 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 1.2 MiB/s rd, 45 MiB/s wr, 173 op/s
Nov 29 08:09:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 1.8 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 1.3 MiB/s rd, 61 MiB/s wr, 258 op/s
Nov 29 08:09:17 compute-0 nova_compute[256729]: 2025-11-29 08:09:17.927 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:18 compute-0 ceph-mon[75050]: pgmap v2136: 305 pgs: 305 active+clean; 1.8 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 1.3 MiB/s rd, 61 MiB/s wr, 258 op/s
Nov 29 08:09:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 2.0 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 404 KiB/s rd, 74 MiB/s wr, 222 op/s
Nov 29 08:09:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Nov 29 08:09:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 08:09:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Nov 29 08:09:19 compute-0 nova_compute[256729]: 2025-11-29 08:09:19.999 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:20 compute-0 ceph-mon[75050]: pgmap v2137: 305 pgs: 305 active+clean; 2.0 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 404 KiB/s rd, 74 MiB/s wr, 222 op/s
Nov 29 08:09:20 compute-0 ceph-mon[75050]: osdmap e431: 3 total, 3 up, 3 in
Nov 29 08:09:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 2.0 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 471 KiB/s rd, 83 MiB/s wr, 248 op/s
Nov 29 08:09:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Nov 29 08:09:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 08:09:21 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Nov 29 08:09:22 compute-0 ceph-mon[75050]: pgmap v2139: 305 pgs: 305 active+clean; 2.0 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 471 KiB/s rd, 83 MiB/s wr, 248 op/s
Nov 29 08:09:22 compute-0 ceph-mon[75050]: osdmap e432: 3 total, 3 up, 3 in
Nov 29 08:09:22 compute-0 nova_compute[256729]: 2025-11-29 08:09:22.958 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 589 KiB/s rd, 74 MiB/s wr, 267 op/s
Nov 29 08:09:24 compute-0 ceph-mon[75050]: pgmap v2141: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 589 KiB/s rd, 74 MiB/s wr, 267 op/s
Nov 29 08:09:25 compute-0 nova_compute[256729]: 2025-11-29 08:09:25.002 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 204 KiB/s rd, 40 MiB/s wr, 135 op/s
Nov 29 08:09:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:26 compute-0 ceph-mon[75050]: pgmap v2142: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 204 KiB/s rd, 40 MiB/s wr, 135 op/s
Nov 29 08:09:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 21 MiB/s wr, 131 op/s
Nov 29 08:09:27 compute-0 nova_compute[256729]: 2025-11-29 08:09:27.961 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:28 compute-0 ceph-mon[75050]: pgmap v2143: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 21 MiB/s wr, 131 op/s
Nov 29 08:09:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 MiB/s wr, 114 op/s
Nov 29 08:09:30 compute-0 nova_compute[256729]: 2025-11-29 08:09:30.004 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Nov 29 08:09:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 08:09:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Nov 29 08:09:30 compute-0 ceph-mon[75050]: pgmap v2144: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 MiB/s wr, 114 op/s
Nov 29 08:09:30 compute-0 ceph-mon[75050]: osdmap e433: 3 total, 3 up, 3 in
Nov 29 08:09:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 8.6 MiB/s wr, 55 op/s
Nov 29 08:09:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/267765722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Nov 29 08:09:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 08:09:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Nov 29 08:09:32 compute-0 ceph-mon[75050]: pgmap v2146: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 8.6 MiB/s wr, 55 op/s
Nov 29 08:09:32 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/267765722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:33 compute-0 nova_compute[256729]: 2025-11-29 08:09:33.011 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 32 op/s
Nov 29 08:09:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Nov 29 08:09:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 08:09:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Nov 29 08:09:33 compute-0 ceph-mon[75050]: osdmap e434: 3 total, 3 up, 3 in
Nov 29 08:09:34 compute-0 ovn_controller[153383]: 2025-11-29T08:09:34Z|00269|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Nov 29 08:09:34 compute-0 ceph-mon[75050]: pgmap v2148: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 32 op/s
Nov 29 08:09:34 compute-0 ceph-mon[75050]: osdmap e435: 3 total, 3 up, 3 in
Nov 29 08:09:34 compute-0 nova_compute[256729]: 2025-11-29 08:09:34.835 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:34 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:34.835 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:34 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:34.838 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:09:35 compute-0 nova_compute[256729]: 2025-11-29 08:09:35.006 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 78 op/s
Nov 29 08:09:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:09:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:09:36 compute-0 ceph-mon[75050]: pgmap v2150: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.6 MiB/s wr, 78 op/s
Nov 29 08:09:36 compute-0 nova_compute[256729]: 2025-11-29 08:09:36.926 256736 DEBUG oslo_concurrency.lockutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:36 compute-0 nova_compute[256729]: 2025-11-29 08:09:36.927 256736 DEBUG oslo_concurrency.lockutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:36 compute-0 nova_compute[256729]: 2025-11-29 08:09:36.950 256736 DEBUG nova.objects.instance [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'flavor' on Instance uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.008 256736 DEBUG oslo_concurrency.lockutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 108 op/s
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.227 256736 DEBUG oslo_concurrency.lockutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.228 256736 DEBUG oslo_concurrency.lockutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.228 256736 INFO nova.compute.manager [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Attaching volume 7d474e98-723f-4121-8dd3-616ee149d172 to /dev/vdb
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.425 256736 DEBUG os_brick.utils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.428 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.446 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.446 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bc30d9-b1b7-451a-9e1f-87573b3c1800]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.449 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.461 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.462 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[0845d459-686d-4b6d-b8e8-1cf082d60321]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.464 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.478 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.478 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[e874321c-6a66-4684-beb8-b7145a791573]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.480 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[0290ff94-586b-49cb-ac32-7e370c172496]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.481 256736 DEBUG oslo_concurrency.processutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.517 256736 DEBUG oslo_concurrency.processutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.520 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.521 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.521 256736 DEBUG os_brick.initiator.connectors.lightos [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.522 256736 DEBUG os_brick.utils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:09:37 compute-0 nova_compute[256729]: 2025-11-29 08:09:37.523 256736 DEBUG nova.virt.block_device [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updating existing volume attachment record: 03d77248-07d0-46bc-b56d-701bd368ba6b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.014 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:38 compute-0 ceph-mon[75050]: pgmap v2151: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 108 op/s
Nov 29 08:09:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/564389364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.680 256736 DEBUG os_brick.encryptors [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Using volume encryption metadata '{'encryption_key_id': '75720754-ef49-4355-8097-002004a3eac6', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7d474e98-723f-4121-8dd3-616ee149d172', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7d474e98-723f-4121-8dd3-616ee149d172', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '07bebcf7-a7f6-4074-8d77-e89bbce7f710', 'attached_at': '', 'detached_at': '', 'volume_id': '7d474e98-723f-4121-8dd3-616ee149d172', 'serial': '7d474e98-723f-4121-8dd3-616ee149d172'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.691 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.711 256736 DEBUG barbicanclient.v1.secrets [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/75720754-ef49-4355-8097-002004a3eac6 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.712 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.735 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.736 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.761 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.762 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.789 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.790 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.815 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.816 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.832 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.833 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.842 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.843 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.855 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.866 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.867 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.890 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.892 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.915 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.916 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.936 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.936 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.939 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.939 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.947 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.948 256736 INFO nova.compute.claims [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.964 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.965 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.985 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:38 compute-0 nova_compute[256729]: 2025-11-29 08:09:38.986 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.004 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.005 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.034 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.035 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.061 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.062 256736 INFO barbicanclient.base [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/75720754-ef49-4355-8097-002004a3eac6
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.070 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.104 256736 DEBUG barbicanclient.client [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.106 256736 DEBUG nova.virt.libvirt.host [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:09:39 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:09:39 compute-0 nova_compute[256729]:     <volume>7d474e98-723f-4121-8dd3-616ee149d172</volume>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:09:39 compute-0 nova_compute[256729]: </secret>
Nov 29 08:09:39 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.125 256736 DEBUG nova.objects.instance [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'flavor' on Instance uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.148 256736 DEBUG nova.virt.libvirt.driver [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Attempting to attach volume 7d474e98-723f-4121-8dd3-616ee149d172 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.153 256736 DEBUG nova.virt.libvirt.guest [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 08:09:39 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-7d474e98-723f-4121-8dd3-616ee149d172">
Nov 29 08:09:39 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   </source>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   <auth username="openstack">
Nov 29 08:09:39 compute-0 nova_compute[256729]:     <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   </auth>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   <serial>7d474e98-723f-4121-8dd3-616ee149d172</serial>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 08:09:39 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="fbd46bb2-7207-4f17-87c8-1954992e6bf2"/>
Nov 29 08:09:39 compute-0 nova_compute[256729]:   </encryption>
Nov 29 08:09:39 compute-0 nova_compute[256729]: </disk>
Nov 29 08:09:39 compute-0 nova_compute[256729]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 08:09:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/564389364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 95 op/s
Nov 29 08:09:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29498939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.561 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.568 256736 DEBUG nova.compute.provider_tree [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.584 256736 DEBUG nova.scheduler.client.report [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.604 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.604 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.661 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.662 256736 DEBUG nova.network.neutron [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.685 256736 INFO nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.735 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.779 256736 INFO nova.virt.block_device [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Booting with volume 039f302e-2efb-45f2-8e07-07b07300a202 at /dev/vdb
Nov 29 08:09:39 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:39.842 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.881 256736 DEBUG os_brick.utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.883 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.901 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.902 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[3a79cb2b-8b99-4f4f-a756-53065fe6d292]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.904 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.917 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.918 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[143cdcaa-79b3-4c99-acc7-167386b38310]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.920 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.936 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.937 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[9bad475f-7335-4d08-a1c8-579623eaad88]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.939 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd2b533-53fd-4dde-9780-0ce12d04c037]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.940 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.981 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.985 256736 DEBUG os_brick.initiator.connectors.lightos [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.986 256736 DEBUG os_brick.initiator.connectors.lightos [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.986 256736 DEBUG os_brick.initiator.connectors.lightos [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.987 256736 DEBUG os_brick.utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:09:39 compute-0 nova_compute[256729]: 2025-11-29 08:09:39.988 256736 DEBUG nova.virt.block_device [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updating existing volume attachment record: 94c2f0aa-40ea-4c1d-b6f9-906d93346376 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.009 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:40 compute-0 ceph-mon[75050]: pgmap v2152: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 95 op/s
Nov 29 08:09:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/29498939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2622485515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.697 256736 DEBUG nova.policy [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3303bef652f040c9b42b7e6b8290911f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '364f361ce7b54bc6a4799a29705c1d0a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.890 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.892 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.893 256736 INFO nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Creating image(s)
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.921 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.955 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.987 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:40 compute-0 nova_compute[256729]: 2025-11-29 08:09:40.991 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.084 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.085 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "b24649b5caed77158f656e381ae039c7945f1389" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.086 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.086 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "b24649b5caed77158f656e381ae039c7945f1389" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.110 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.114 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2622485515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 73 op/s
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.283 256736 DEBUG nova.network.neutron [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Successfully created port: 3e7651b6-5be6-447a-86e2-4009c6aac334 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.499 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b24649b5caed77158f656e381ae039c7945f1389 aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.589 256736 DEBUG nova.virt.libvirt.driver [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.589 256736 DEBUG nova.virt.libvirt.driver [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.590 256736 DEBUG nova.virt.libvirt.driver [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.590 256736 DEBUG nova.virt.libvirt.driver [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No VIF found with MAC fa:16:3e:26:a0:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.600 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] resizing rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.721 256736 DEBUG nova.objects.instance [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lazy-loading 'migration_context' on Instance uuid aee08d25-d8a2-48f8-ac6e-a5b99c377db1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.734 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.735 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Ensure instance console log exists: /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.736 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.736 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.737 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:41 compute-0 nova_compute[256729]: 2025-11-29 08:09:41.861 256736 DEBUG oslo_concurrency.lockutils [None req-8ca898fc-fe8f-45c1-9d49-d8cf26b680d2 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.094 256736 DEBUG nova.network.neutron [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Successfully updated port: 3e7651b6-5be6-447a-86e2-4009c6aac334 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.114 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.115 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquired lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.115 256736 DEBUG nova.network.neutron [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:09:42 compute-0 ceph-mon[75050]: pgmap v2153: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 73 op/s
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.211 256736 DEBUG nova.compute.manager [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-changed-3e7651b6-5be6-447a-86e2-4009c6aac334 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.211 256736 DEBUG nova.compute.manager [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Refreshing instance network info cache due to event network-changed-3e7651b6-5be6-447a-86e2-4009c6aac334. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.212 256736 DEBUG oslo_concurrency.lockutils [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:42 compute-0 nova_compute[256729]: 2025-11-29 08:09:42.273 256736 DEBUG nova.network.neutron [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.017 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.4 MiB/s wr, 93 op/s
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.211 256736 DEBUG oslo_concurrency.lockutils [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.211 256736 DEBUG oslo_concurrency.lockutils [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.229 256736 INFO nova.compute.manager [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Detaching volume 7d474e98-723f-4121-8dd3-616ee149d172
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.241 256736 DEBUG nova.network.neutron [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updating instance_info_cache with network_info: [{"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.272 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Releasing lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.273 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Instance network_info: |[{"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.273 256736 DEBUG oslo_concurrency.lockutils [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.274 256736 DEBUG nova.network.neutron [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Refreshing network info cache for port 3e7651b6-5be6-447a-86e2-4009c6aac334 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.278 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Start _get_guest_xml network_info=[{"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'guest_format': None, 'image_id': '0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae'}], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': -1, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-039f302e-2efb-45f2-8e07-07b07300a202', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '039f302e-2efb-45f2-8e07-07b07300a202', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'aee08d25-d8a2-48f8-ac6e-a5b99c377db1', 'attached_at': '', 'detached_at': '', 'volume_id': '039f302e-2efb-45f2-8e07-07b07300a202', 'serial': '039f302e-2efb-45f2-8e07-07b07300a202'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vdb', 'guest_format': None, 'attachment_id': '94c2f0aa-40ea-4c1d-b6f9-906d93346376', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.286 256736 WARNING nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.291 256736 DEBUG nova.virt.libvirt.host [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.292 256736 DEBUG nova.virt.libvirt.host [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.297 256736 DEBUG nova.virt.libvirt.host [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.297 256736 DEBUG nova.virt.libvirt.host [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.298 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.298 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:45:01Z,direct_url=<?>,disk_format='qcow2',id=0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d1f5dc9b350d4861a761ebc842cae01b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:45:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.299 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.299 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.299 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.299 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.300 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.300 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.300 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.301 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.301 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.301 256736 DEBUG nova.virt.hardware [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
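The hardware.py lines above walk the topology selection for a 1-vCPU guest: flavor and image set no limits or preferences (0:0:0 means unset), so the 65536 sockets/cores/threads maxima apply, and the only triple whose product equals one vCPU is 1:1:1. A minimal sketch of that enumeration idea (the helper name is hypothetical, not nova's code):

    # Sketch: enumerate (sockets, cores, threads) triples whose product equals
    # the vCPU count, capped per dimension -- the idea behind the
    # "Build topologies ... Got 1 possible topologies" lines above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log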
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.304 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
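Here the driver shells out for the Ceph monitor map rather than using librados; oslo.concurrency's processutils wraps the subprocess (the "returned: 0 in 0.430s" record at 08:09:43.735 is its completion). A sketch of the same call, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable:

    # Sketch: run the "ceph mon dump" command logged above and read the
    # monitor addresses out of its JSON output.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    print([m['addr'] for m in mons])  # e.g. ['192.168.122.100:6789/0']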
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.388 256736 INFO nova.virt.block_device [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Attempting to driver detach volume 7d474e98-723f-4121-8dd3-616ee149d172 from mountpoint /dev/vdb
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.541 256736 DEBUG os_brick.encryptors [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Using volume encryption metadata '{'encryption_key_id': '75720754-ef49-4355-8097-002004a3eac6', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7d474e98-723f-4121-8dd3-616ee149d172', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7d474e98-723f-4121-8dd3-616ee149d172', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '07bebcf7-a7f6-4074-8d77-e89bbce7f710', 'attached_at': '', 'detached_at': '', 'volume_id': '7d474e98-723f-4121-8dd3-616ee149d172', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.557 256736 DEBUG nova.virt.libvirt.driver [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Attempting to detach device vdb from instance 07bebcf7-a7f6-4074-8d77-e89bbce7f710 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.558 256736 DEBUG nova.virt.libvirt.guest [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-7d474e98-723f-4121-8dd3-616ee149d172">
Nov 29 08:09:43 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   </source>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <serial>7d474e98-723f-4121-8dd3-616ee149d172</serial>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 08:09:43 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="fbd46bb2-7207-4f17-87c8-1954992e6bf2"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   </encryption>
Nov 29 08:09:43 compute-0 nova_compute[256729]: </disk>
Nov 29 08:09:43 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.569 256736 INFO nova.virt.libvirt.driver [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully detached device vdb from instance 07bebcf7-a7f6-4074-8d77-e89bbce7f710 from the persistent domain config.
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.570 256736 DEBUG nova.virt.libvirt.driver [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 07bebcf7-a7f6-4074-8d77-e89bbce7f710 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.571 256736 DEBUG nova.virt.libvirt.guest [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <source protocol="rbd" name="volumes/volume-7d474e98-723f-4121-8dd3-616ee149d172">
Nov 29 08:09:43 compute-0 nova_compute[256729]:     <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   </source>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <target dev="vdb" bus="virtio"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <serial>7d474e98-723f-4121-8dd3-616ee149d172</serial>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   <encryption format="luks">
Nov 29 08:09:43 compute-0 nova_compute[256729]:     <secret type="passphrase" uuid="fbd46bb2-7207-4f17-87c8-1954992e6bf2"/>
Nov 29 08:09:43 compute-0 nova_compute[256729]:   </encryption>
Nov 29 08:09:43 compute-0 nova_compute[256729]: </disk>
Nov 29 08:09:43 compute-0 nova_compute[256729]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.696 256736 DEBUG nova.virt.libvirt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Received event <DeviceRemovedEvent: 1764403783.6947353, 07bebcf7-a7f6-4074-8d77-e89bbce7f710 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.696 256736 DEBUG nova.virt.libvirt.driver [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 07bebcf7-a7f6-4074-8d77-e89bbce7f710 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.701 256736 INFO nova.virt.libvirt.driver [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully detached device vdb from instance 07bebcf7-a7f6-4074-8d77-e89bbce7f710 from the live domain config.
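The detach above is two-phase: drop the disk from the persistent domain definition, then from the live domain, blocking until libvirt delivers the DeviceRemovedEvent for alias virtio-disk1 (with up to 8 attempts, per the "(1/8)" marker). A minimal sketch of that pattern with the libvirt Python binding, assuming the logged <disk> element was saved to detach-disk.xml:

    # Sketch: two-phase detach (persistent config, then live domain) with a
    # wait on libvirt's DEVICE_REMOVED event, as the driver logs above.
    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()
    removed = threading.Event()

    def on_device_removed(conn, dom, alias, opaque):
        if alias == 'virtio-disk1':          # device alias from the log
            removed.set()

    def event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=event_loop, daemon=True).start()

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('07bebcf7-a7f6-4074-8d77-e89bbce7f710')
    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_device_removed, None)

    disk_xml = open('detach-disk.xml').read()  # the <disk> element above
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # live
    removed.wait(timeout=20)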
Nov 29 08:09:43 compute-0 podman[300988]: 2025-11-29 08:09:43.704693581 +0000 UTC m=+0.071340630 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386361754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
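On the monitor side, the audit lines show the CLI call arriving as a mon_command from client.openstack. The same request can be issued in-process through the rados binding instead of a subprocess; a sketch:

    # Sketch: send the {"prefix": "mon dump"} command from the audit log
    # directly over librados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    cluster.shutdown()
    assert ret == 0
    print(len(json.loads(outbuf)['mons']), 'monitor(s)')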
Nov 29 08:09:43 compute-0 podman[300989]: 2025-11-29 08:09:43.725688751 +0000 UTC m=+0.085765709 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.735 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:43 compute-0 podman[300987]: 2025-11-29 08:09:43.736800297 +0000 UTC m=+0.102597803 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
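The three podman records are periodic health checks for multipathd, ovn_metadata_agent and ovn_controller; each container mounts its check script at /openstack/healthcheck and all three report healthy with a zero failing streak. The same check can be triggered on demand; a sketch:

    # Sketch: run each container's configured healthcheck by hand;
    # exit status 0 corresponds to health_status=healthy above.
    import subprocess

    for name in ('multipathd', 'ovn_metadata_agent', 'ovn_controller'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)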
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.758 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
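rbd_utils probes for the instance's config-drive image and finds it missing (it is created at 08:09:45 below). The probe is essentially the open-and-catch idiom from the rbd binding; a sketch, assuming the vms pool and client.openstack:

    # Sketch: test whether an RBD image exists by trying to open it and
    # treating rbd.ImageNotFound as "does not exist", as logged above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk.config'):
            print('image exists')
    except rbd.ImageNotFound:
        print('rbd image does not exist')
    finally:
        ioctx.close()
        cluster.shutdown()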
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.762 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.887 256736 DEBUG nova.objects.instance [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'flavor' on Instance uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:43 compute-0 nova_compute[256729]: 2025-11-29 08:09:43.935 256736 DEBUG oslo_concurrency.lockutils [None req-429200ab-121c-4f16-a78c-8402a04f6ceb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:44 compute-0 ceph-mon[75050]: pgmap v2154: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.4 MiB/s wr, 93 op/s
Nov 29 08:09:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3386361754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:09:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183888358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.240 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.271 256736 DEBUG nova.virt.libvirt.vif [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1927056742',display_name='tempest-instance-1927056742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1927056742',id=28,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkR+KC5GJgnm3VT/VgUqGwBYkXpgsUVnTAJ2kwI/941njtP/yeTjKM30rwlX1J6tojE3FLqtS//3vJORxbUyooTMIdJO3ey3s/JCS+TkLOmI4JkGss4sSpD1EttwPeg5A==',key_name='tempest-keypair-311340034',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='364f361ce7b54bc6a4799a29705c1d0a',ramdisk_id='',reservation_id='r-t5u7m80n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1467932066',owner_user_name='tempest-VolumesBackupsTest-1467932066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3303bef652f040c9b42b7e6b8290911f',uuid=aee08d25-d8a2-48f8-ac6e-a5b99c377db1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.272 256736 DEBUG nova.network.os_vif_util [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Converting VIF {"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.273 256736 DEBUG nova.network.os_vif_util [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:1b:ea,bridge_name='br-int',has_traffic_filtering=True,id=3e7651b6-5be6-447a-86e2-4009c6aac334,network=Network(125cb0ae-5b9b-472c-a598-63b3f1d26e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e7651b6-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.275 256736 DEBUG nova.objects.instance [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lazy-loading 'pci_devices' on Instance uuid aee08d25-d8a2-48f8-ac6e-a5b99c377db1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.294 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <uuid>aee08d25-d8a2-48f8-ac6e-a5b99c377db1</uuid>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <name>instance-0000001c</name>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <nova:name>tempest-instance-1927056742</nova:name>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:09:43</nova:creationTime>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:user uuid="3303bef652f040c9b42b7e6b8290911f">tempest-VolumesBackupsTest-1467932066-project-member</nova:user>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:project uuid="364f361ce7b54bc6a4799a29705c1d0a">tempest-VolumesBackupsTest-1467932066</nova:project>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <nova:root type="image" uuid="0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <nova:port uuid="3e7651b6-5be6-447a-86e2-4009c6aac334">
Nov 29 08:09:44 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <system>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <entry name="serial">aee08d25-d8a2-48f8-ac6e-a5b99c377db1</entry>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <entry name="uuid">aee08d25-d8a2-48f8-ac6e-a5b99c377db1</entry>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </system>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <os>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   </os>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <features>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   </features>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk">
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </source>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk.config">
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </source>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-039f302e-2efb-45f2-8e07-07b07300a202">
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </source>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:09:44 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <target dev="vdb" bus="virtio"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <serial>039f302e-2efb-45f2-8e07-07b07300a202</serial>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:af:1b:ea"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <target dev="tap3e7651b6-5b"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/console.log" append="off"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <video>
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </video>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:09:44 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:09:44 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:09:44 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:09:44 compute-0 nova_compute[256729]: </domain>
Nov 29 08:09:44 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
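With _get_guest_xml finished, the libvirt-level effect of the spawn path is to define a domain from this XML and start it (the real driver also plugs VIFs, connects volumes and registers event callbacks first). A minimal sketch, assuming the XML above was saved to instance-0000001c.xml:

    # Sketch: define the guest from the XML dumped above and boot it.
    import libvirt

    conn = libvirt.open('qemu:///system')
    xml = open('instance-0000001c.xml').read()
    dom = conn.defineXML(xml)      # persistent definition
    dom.create()                   # start instance-0000001c
    print(dom.name(), dom.UUIDString())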
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.295 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Preparing to wait for external event network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.296 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.296 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.296 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.297 256736 DEBUG nova.virt.libvirt.vif [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1927056742',display_name='tempest-instance-1927056742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1927056742',id=28,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkR+KC5GJgnm3VT/VgUqGwBYkXpgsUVnTAJ2kwI/941njtP/yeTjKM30rwlX1J6tojE3FLqtS//3vJORxbUyooTMIdJO3ey3s/JCS+TkLOmI4JkGss4sSpD1EttwPeg5A==',key_name='tempest-keypair-311340034',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='364f361ce7b54bc6a4799a29705c1d0a',ramdisk_id='',reservation_id='r-t5u7m80n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1467932066',owner_user_name='tempest-VolumesBackupsTest-1467932066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3303bef652f040c9b42b7e6b8290911f',uuid=aee08d25-d8a2-48f8-ac6e-a5b99c377db1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.298 256736 DEBUG nova.network.os_vif_util [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Converting VIF {"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.299 256736 DEBUG nova.network.os_vif_util [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:1b:ea,bridge_name='br-int',has_traffic_filtering=True,id=3e7651b6-5be6-447a-86e2-4009c6aac334,network=Network(125cb0ae-5b9b-472c-a598-63b3f1d26e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e7651b6-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.299 256736 DEBUG os_vif [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:1b:ea,bridge_name='br-int',has_traffic_filtering=True,id=3e7651b6-5be6-447a-86e2-4009c6aac334,network=Network(125cb0ae-5b9b-472c-a598-63b3f1d26e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e7651b6-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.300 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.301 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.301 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.306 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.307 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e7651b6-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.308 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e7651b6-5b, col_values=(('external_ids', {'iface-id': '3e7651b6-5be6-447a-86e2-4009c6aac334', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:1b:ea', 'vm-uuid': 'aee08d25-d8a2-48f8-ac6e-a5b99c377db1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.310 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:44 compute-0 NetworkManager[48962]: <info>  [1764403784.3112] manager: (tap3e7651b6-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.313 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.319 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.320 256736 INFO os_vif [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:1b:ea,bridge_name='br-int',has_traffic_filtering=True,id=3e7651b6-5be6-447a-86e2-4009c6aac334,network=Network(125cb0ae-5b9b-472c-a598-63b3f1d26e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e7651b6-5b')
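Plugging the VIF is three ovsdbapp commands against the local OVS database: an idempotent AddBridgeCommand (which "caused no change" since br-int already exists), an AddPortCommand for the tap device, and a DbSetCommand writing the external_ids that ovn-controller binds on. Roughly, through ovsdbapp's Open_vSwitch schema API (a sketch; the socket path is an assumption):

    # Sketch: the AddBridge/AddPort/DbSet transaction sequence logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap3e7651b6-5b', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3e7651b6-5b',
            ('external_ids', {
                'iface-id': '3e7651b6-5be6-447a-86e2-4009c6aac334',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:af:1b:ea',
                'vm-uuid': 'aee08d25-d8a2-48f8-ac6e-a5b99c377db1'})))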
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.374 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.374 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.375 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.375 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] No VIF found with MAC fa:16:3e:af:1b:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.376 256736 INFO nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Using config drive
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.398 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.769 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.769 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.770 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.770 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.770 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.772 256736 INFO nova.compute.manager [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Terminating instance
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.774 256736 DEBUG nova.compute.manager [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.822 256736 INFO nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Creating config drive at /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/disk.config
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.838 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4d_33jnr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
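The config drive is packed as an ISO9660 image with the mkisofs invocation logged above (completion at 08:09:44.976), then pushed into the vms pool by the "rbd import" at 08:09:45. The same call through processutils, as a sketch; note the multi-word -publisher value is a single argv element:

    # Sketch: the mkisofs run logged above -- build the "config-2" ISO from
    # the staged metadata directory.
    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmp4d_33jnr')

The rbd import below reuses the same execute() helper to move the finished ISO into Ceph.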
Nov 29 08:09:44 compute-0 kernel: tapc859afc6-4d (unregistering): left promiscuous mode
Nov 29 08:09:44 compute-0 NetworkManager[48962]: <info>  [1764403784.8598] device (tapc859afc6-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:09:44 compute-0 ovn_controller[153383]: 2025-11-29T08:09:44Z|00270|binding|INFO|Releasing lport c859afc6-4da0-4faa-8af3-72d4c6d25f9b from this chassis (sb_readonly=0)
Nov 29 08:09:44 compute-0 ovn_controller[153383]: 2025-11-29T08:09:44Z|00271|binding|INFO|Setting lport c859afc6-4da0-4faa-8af3-72d4c6d25f9b down in Southbound
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.871 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:44 compute-0 ovn_controller[153383]: 2025-11-29T08:09:44Z|00272|binding|INFO|Removing iface tapc859afc6-4d ovn-installed in OVS
Nov 29 08:09:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:44.898 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a0:1f 10.100.0.8'], port_security=['fa:16:3e:26:a0:1f 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '07bebcf7-a7f6-4074-8d77-e89bbce7f710', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dda88d46-9162-4e7c-bb47-793ac4133966', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1893efe9-96a9-44d1-bcc6-35fada673e59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=767afc55-24b1-431b-aeef-ddbbabf80029, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=c859afc6-4da0-4faa-8af3-72d4c6d25f9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.900 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:44.900 163655 INFO neutron.agent.ovn.metadata.agent [-] Port c859afc6-4da0-4faa-8af3-72d4c6d25f9b in datapath dda88d46-9162-4e7c-bb47-793ac4133966 unbound from our chassis
Nov 29 08:09:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:44.903 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dda88d46-9162-4e7c-bb47-793ac4133966, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:09:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:44.904 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a471176b-1c02-4404-bb07-16203f0c0fa0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:44 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:44.905 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 namespace which is not needed anymore
Nov 29 08:09:44 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 29 08:09:44 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 17.071s CPU time.
Nov 29 08:09:44 compute-0 systemd-machined[217781]: Machine qemu-27-instance-0000001b terminated.
Nov 29 08:09:44 compute-0 nova_compute[256729]: 2025-11-29 08:09:44.976 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4d_33jnr" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.006 256736 DEBUG nova.storage.rbd_utils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] rbd image aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.013 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/disk.config aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.054 256736 DEBUG nova.network.neutron [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updated VIF entry in instance network info cache for port 3e7651b6-5be6-447a-86e2-4009c6aac334. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.055 256736 DEBUG nova.network.neutron [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updating instance_info_cache with network_info: [{"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
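The instance_info_cache payload above is plain JSON, so the fields the rest of this log keys on (devname, fixed and floating IPs) are easy to pull out. A small sketch over a trimmed-down copy of the logged entry:

import json

# Trimmed copy of one VIF entry from the cache payload logged above.
cached = ('[{"devname": "tap3e7651b6-5b", "network": {"subnets": '
          '[{"ips": [{"address": "10.100.0.11", "floating_ips": []}]}]}}]')

for vif in json.loads(cached):
    ips = [ip['address']
           for subnet in vif['network']['subnets']
           for ip in subnet['ips']]
    print(vif['devname'], ips)   # -> tap3e7651b6-5b ['10.100.0.11']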
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.060 256736 INFO nova.virt.libvirt.driver [-] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Instance destroyed successfully.
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.061 256736 DEBUG nova.objects.instance [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'resources' on Instance uuid 07bebcf7-a7f6-4074-8d77-e89bbce7f710 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.076 256736 DEBUG nova.virt.libvirt.vif [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-971903970',display_name='tempest-TestEncryptedCinderVolumes-server-971903970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-971903970',id=27,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPyxgKsh7PCaaI+tBeWHGeHUDCURjaZ0je4I23fzwWJ/E7nLNAXxSqXV+2eLKbsjY3xXgkiAGZSR5JLwTYFumburEs1G0ZjQEjEzXxvKLkb3fMWfbEdO/q5BsCfMP2zBQ==',key_name='tempest-keypair-1056345848',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-v99yo2tc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='981b7946a749412f90d3d8148d99486a',uuid=07bebcf7-a7f6-4074-8d77-e89bbce7f710,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.077 256736 DEBUG nova.network.os_vif_util [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "address": "fa:16:3e:26:a0:1f", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc859afc6-4d", "ovs_interfaceid": "c859afc6-4da0-4faa-8af3-72d4c6d25f9b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.078 256736 DEBUG nova.network.os_vif_util [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:a0:1f,bridge_name='br-int',has_traffic_filtering=True,id=c859afc6-4da0-4faa-8af3-72d4c6d25f9b,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc859afc6-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.078 256736 DEBUG os_vif [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a0:1f,bridge_name='br-int',has_traffic_filtering=True,id=c859afc6-4da0-4faa-8af3-72d4c6d25f9b,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc859afc6-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.081 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.082 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc859afc6-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
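The DelPortCommand transaction above corresponds to ovsdbapp's high-level del_port() call against the local OVS database. A sketch of the same operation issued directly; the socket path and timeout are assumptions, and nova/os-vif hold a long-lived connection rather than building one per call:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed local ovsdb-server socket.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Equivalent of the logged DelPortCommand(port=tapc859afc6-4d,
# bridge=br-int, if_exists=True).
api.del_port('tapc859afc6-4d', bridge='br-int',
             if_exists=True).execute(check_error=True)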
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.083 256736 DEBUG oslo_concurrency.lockutils [req-a16ed48a-a85f-4ed8-a2f2-cae847647a13 req-25819b0b-070b-4fff-80f9-3c5a4bc4f3f7 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.084 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.087 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.089 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.091 256736 INFO os_vif [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a0:1f,bridge_name='br-int',has_traffic_filtering=True,id=c859afc6-4da0-4faa-8af3-72d4c6d25f9b,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc859afc6-4d')
Nov 29 08:09:45 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [NOTICE]   (300544) : haproxy version is 2.8.14-c23fe91
Nov 29 08:09:45 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [NOTICE]   (300544) : path to executable is /usr/sbin/haproxy
Nov 29 08:09:45 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [WARNING]  (300544) : Exiting Master process...
Nov 29 08:09:45 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [WARNING]  (300544) : Exiting Master process...
Nov 29 08:09:45 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [ALERT]    (300544) : Current worker (300549) exited with code 143 (Terminated)
Nov 29 08:09:45 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[300534]: [WARNING]  (300544) : All workers exited. Exiting... (0)
Nov 29 08:09:45 compute-0 systemd[1]: libpod-260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2.scope: Deactivated successfully.
Nov 29 08:09:45 compute-0 podman[301173]: 2025-11-29 08:09:45.117298756 +0000 UTC m=+0.067847194 container died 260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2-userdata-shm.mount: Deactivated successfully.
Nov 29 08:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9195b021ccf2192c16e8e4df32998b1e109c1ba678cc54fd504790282ae1f44-merged.mount: Deactivated successfully.
Nov 29 08:09:45 compute-0 podman[301173]: 2025-11-29 08:09:45.168441227 +0000 UTC m=+0.118989705 container cleanup 260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:09:45 compute-0 systemd[1]: libpod-conmon-260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2.scope: Deactivated successfully.
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.187 256736 DEBUG oslo_concurrency.processutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/disk.config aee08d25-d8a2-48f8-ac6e-a5b99c377db1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.188 256736 INFO nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Deleting local config drive /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1/disk.config because it was imported into RBD.
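Taken together with the mkisofs step, the rbd import above completes the RBD-backed config-drive flow: build the ISO locally, import it into the vms pool, then delete the local copy, as the line above confirms. A sketch reusing the exact CLI arguments from the log:

import os
import subprocess

def import_config_drive(iso_path, image_name):
    # Same CLI arguments as the logged command (pool, cephx id, conf path).
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso_path, image_name,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    # Once imported, the local ISO is redundant; the driver deletes it.
    os.unlink(iso_path)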
Nov 29 08:09:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 83 op/s
Nov 29 08:09:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/183888358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:09:45 compute-0 podman[301239]: 2025-11-29 08:09:45.235035905 +0000 UTC m=+0.044012206 container remove 260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.242 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[a284b7fd-7ce1-404c-847d-dac248e57448]: (4, ('Sat Nov 29 08:09:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 (260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2)\n260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2\nSat Nov 29 08:09:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 (260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2)\n260657fff169cd2035ba9fdd64a5e9674c4011264d288f71282f403116d486e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
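The privsep reply above wraps the output of a helper that stops and deletes the per-network haproxy container; the systemd and podman lines earlier in this sequence show the same container dying and being cleaned up. The equivalent teardown, reduced to two podman calls (sketch):

import subprocess

def remove_container(name_or_id):
    # stop sends SIGTERM (hence the worker's exit code 143 above); rm then
    # removes the stopped container and its overlay mounts.
    subprocess.run(["podman", "stop", name_or_id], check=True)
    subprocess.run(["podman", "rm", name_or_id], check=True)

remove_container("neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966")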
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.245 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[720e4fb0-6e0c-4e73-86f6-2018d937709c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.246 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdda88d46-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.247 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 systemd-udevd[301122]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:09:45 compute-0 NetworkManager[48962]: <info>  [1764403785.2590] manager: (tap3e7651b6-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 08:09:45 compute-0 kernel: tapdda88d46-90: left promiscuous mode
Nov 29 08:09:45 compute-0 kernel: tap3e7651b6-5b: entered promiscuous mode
Nov 29 08:09:45 compute-0 ovn_controller[153383]: 2025-11-29T08:09:45Z|00273|binding|INFO|Claiming lport 3e7651b6-5be6-447a-86e2-4009c6aac334 for this chassis.
Nov 29 08:09:45 compute-0 ovn_controller[153383]: 2025-11-29T08:09:45Z|00274|binding|INFO|3e7651b6-5be6-447a-86e2-4009c6aac334: Claiming fa:16:3e:af:1b:ea 10.100.0.11
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.265 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.269 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[38f5dc64-41bc-4f2c-9218-80e7e339c3bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.277 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:1b:ea 10.100.0.11'], port_security=['fa:16:3e:af:1b:ea 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee08d25-d8a2-48f8-ac6e-a5b99c377db1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '364f361ce7b54bc6a4799a29705c1d0a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6c90c70-9e8f-4ea0-aaf7-2c748510bf4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3f5f237-0ed4-4ea2-9f76-b70e9626d9cf, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=3e7651b6-5be6-447a-86e2-4009c6aac334) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:09:45 compute-0 NetworkManager[48962]: <info>  [1764403785.2803] device (tap3e7651b6-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:09:45 compute-0 NetworkManager[48962]: <info>  [1764403785.2811] device (tap3e7651b6-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:09:45 compute-0 ovn_controller[153383]: 2025-11-29T08:09:45Z|00275|binding|INFO|Setting lport 3e7651b6-5be6-447a-86e2-4009c6aac334 ovn-installed in OVS
Nov 29 08:09:45 compute-0 ovn_controller[153383]: 2025-11-29T08:09:45Z|00276|binding|INFO|Setting lport 3e7651b6-5be6-447a-86e2-4009c6aac334 up in Southbound
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.282 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd6538c-d742-46df-9bd1-302a837266e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.283 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.285 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.285 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca1ce2b-4104-41b9-b247-0aa55c10fec2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 systemd-machined[217781]: New machine qemu-28-instance-0000001c.
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.303 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8623c4-b1a9-4e90-a688-dc90c2368707]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605601, 'reachable_time': 37600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301262, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.305 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
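The remove_netns call above ultimately deletes the ovnmeta- network namespace. A standalone equivalent with pyroute2, which is what neutron drives through privsep; it needs the same elevated privileges, and the namespace name is the one from the log:

from pyroute2 import netns

ns = 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966'
if ns in netns.listnetns():
    netns.remove(ns)   # unlinks /var/run/netns/<ns>; needs CAP_SYS_ADMIN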
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.305 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[890ab5ea-12db-4833-9c29-28818e589454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.306 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 3e7651b6-5be6-447a-86e2-4009c6aac334 in datapath 125cb0ae-5b9b-472c-a598-63b3f1d26e12 unbound from our chassis
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.307 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 125cb0ae-5b9b-472c-a598-63b3f1d26e12
Nov 29 08:09:45 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Nov 29 08:09:45 compute-0 systemd[1]: run-netns-ovnmeta\x2ddda88d46\x2d9162\x2d4e7c\x2dbb47\x2d793ac4133966.mount: Deactivated successfully.
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.318 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[283face9-6d56-47bc-a050-ac305a22904a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.319 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap125cb0ae-51 in ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.321 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap125cb0ae-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.321 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8e67c157-99b4-47d9-ae6b-fbf967e54827]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.322 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4c2f85-0f92-4eac-a7ac-1fccaaacdffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
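The veth provisioning logged above pairs tap125cb0ae-50 (kept in the root namespace for OVS) with tap125cb0ae-51 (moved into the new ovnmeta- namespace). A pyroute2 sketch of that plumbing; neutron does this through privsep rather than directly, and error handling is omitted:

from pyroute2 import IPRoute

ns = 'ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12'
with IPRoute() as ipr:
    # Create the pair in the root namespace first.
    ipr.link('add', ifname='tap125cb0ae-50', kind='veth',
             peer='tap125cb0ae-51')
    # Move the inner end into the metadata namespace.
    idx = ipr.link_lookup(ifname='tap125cb0ae-51')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    # Bring the OVS-facing end up; the inner end is configured inside ns.
    idx = ipr.link_lookup(ifname='tap125cb0ae-50')[0]
    ipr.link('set', index=idx, state='up')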
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.335 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[e5808a06-c373-42e9-b133-d5e33c28a8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.348 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8c342a-141d-4c86-ac3d-f0d1f379d670]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.379 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a01a87-2895-4bc1-880e-004739b7a3db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 NetworkManager[48962]: <info>  [1764403785.3898] manager: (tap125cb0ae-50): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.388 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[120ea852-a8cd-4665-94cc-72f8548b5e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.426 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[81b1e5c2-89f0-4c92-850b-1be224562fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.429 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[5990ef16-b3a3-4619-8164-1c7f3231b215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 NetworkManager[48962]: <info>  [1764403785.4519] device (tap125cb0ae-50): carrier: link connected
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.455 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[c7046a96-4c8d-4ee0-8455-7823005e1c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.469 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[07fd88ac-2799-45ce-a448-3198e31b40a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap125cb0ae-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:da:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610051, 'reachable_time': 19738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301297, 'error': None, 'target': 'ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.481 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b23531a3-7fc7-47b5-b665-50f358d8dee5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:dab1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610051, 'tstamp': 610051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301298, 'error': None, 'target': 'ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.494 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[580074ff-d396-4d9a-a35f-3e2273f9f803]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap125cb0ae-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:da:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610051, 'reachable_time': 19738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301299, 'error': None, 'target': 'ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.524 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bc03827a-8b4c-4efd-8c67-264021bde59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.592 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[7700eccc-e202-427a-9836-0d624868d4e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.594 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap125cb0ae-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.595 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.595 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap125cb0ae-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:09:45 compute-0 kernel: tap125cb0ae-50: entered promiscuous mode
Nov 29 08:09:45 compute-0 NetworkManager[48962]: <info>  [1764403785.5991] manager: (tap125cb0ae-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.602 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap125cb0ae-50, col_values=(('external_ids', {'iface-id': '0e3b7294-07ff-452a-8f6e-c23bd1ea2a73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
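The AddPortCommand and DbSetCommand transactions above attach the root-namespace veth end to br-int and tag its Interface row with the iface-id that OVN uses to bind the port. In ovsdbapp's high-level API the same pair looks like this (sketch; same assumed connection bootstrap as in the del_port sketch earlier):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap125cb0ae-50', may_exist=True))
    # iface-id is the OVN logical port released/claimed in the surrounding
    # ovn_controller lines.
    txn.add(api.db_set(
        'Interface', 'tap125cb0ae-50',
        ('external_ids', {'iface-id': '0e3b7294-07ff-452a-8f6e-c23bd1ea2a73'})))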
Nov 29 08:09:45 compute-0 ovn_controller[153383]: 2025-11-29T08:09:45Z|00277|binding|INFO|Releasing lport 0e3b7294-07ff-452a-8f6e-c23bd1ea2a73 from this chassis (sb_readonly=0)
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.598 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.600 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.603 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.634 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.635 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/125cb0ae-5b9b-472c-a598-63b3f1d26e12.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/125cb0ae-5b9b-472c-a598-63b3f1d26e12.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.636 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ed62a2-0928-4b82-9320-9c0a724d7878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.637 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-125cb0ae-5b9b-472c-a598-63b3f1d26e12
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/125cb0ae-5b9b-472c-a598-63b3f1d26e12.pid.haproxy
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID 125cb0ae-5b9b-472c-a598-63b3f1d26e12
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:09:45 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:45.638 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'env', 'PROCESS_TAG=haproxy-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/125cb0ae-5b9b-472c-a598-63b3f1d26e12.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
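The rootwrap command above reduces to running haproxy inside the ovnmeta- namespace against the config rendered just before it. Stripped of rootwrap and neutron's process monitor, the same invocation looks like this (sketch; needs root):

import subprocess

ns = 'ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12'
cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
       '125cb0ae-5b9b-472c-a598-63b3f1d26e12.conf')

subprocess.run(
    ['ip', 'netns', 'exec', ns,
     'env', 'PROCESS_TAG=haproxy-125cb0ae-5b9b-472c-a598-63b3f1d26e12',
     'haproxy', '-f', cfg],
    check=True)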
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.773 256736 DEBUG nova.compute.manager [req-ce30df31-739f-483e-9974-6d4ff16d7af5 req-efa7e214-2028-4b92-bbf3-73da444f19d8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-vif-unplugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.773 256736 DEBUG oslo_concurrency.lockutils [req-ce30df31-739f-483e-9974-6d4ff16d7af5 req-efa7e214-2028-4b92-bbf3-73da444f19d8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.774 256736 DEBUG oslo_concurrency.lockutils [req-ce30df31-739f-483e-9974-6d4ff16d7af5 req-efa7e214-2028-4b92-bbf3-73da444f19d8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.774 256736 DEBUG oslo_concurrency.lockutils [req-ce30df31-739f-483e-9974-6d4ff16d7af5 req-efa7e214-2028-4b92-bbf3-73da444f19d8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.774 256736 DEBUG nova.compute.manager [req-ce30df31-739f-483e-9974-6d4ff16d7af5 req-efa7e214-2028-4b92-bbf3-73da444f19d8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] No waiting events found dispatching network-vif-unplugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.775 256736 DEBUG nova.compute.manager [req-ce30df31-739f-483e-9974-6d4ff16d7af5 req-efa7e214-2028-4b92-bbf3-73da444f19d8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-vif-unplugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
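The acquire/release triplet above guards nova's per-instance event map: delete paths may be waiting on network-vif-unplugged, so popping an event must be serialized per instance. The same pattern in miniature with oslo.concurrency (the map layout is illustrative; the lock name mirrors the logged "<uuid>-events"):

from oslo_concurrency import lockutils

events = {}  # illustrative: instance_uuid -> {event_name: payload}

def pop_instance_event(instance_uuid, event_name):
    with lockutils.lock(instance_uuid + '-events'):
        return events.get(instance_uuid, {}).pop(event_name, None)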
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.879 256736 INFO nova.virt.libvirt.driver [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Deleting instance files /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710_del
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.880 256736 INFO nova.virt.libvirt.driver [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Deletion of /var/lib/nova/instances/07bebcf7-a7f6-4074-8d77-e89bbce7f710_del complete
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.906 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403785.9054782, aee08d25-d8a2-48f8-ac6e-a5b99c377db1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.906 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] VM Started (Lifecycle Event)
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.937 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.944 256736 INFO nova.compute.manager [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Took 1.17 seconds to destroy the instance on the hypervisor.
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.945 256736 DEBUG oslo.service.loopingcall [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.945 256736 DEBUG nova.compute.manager [-] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.946 256736 DEBUG nova.network.neutron [-] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.951 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403785.9055932, aee08d25-d8a2-48f8-ac6e-a5b99c377db1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.951 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] VM Paused (Lifecycle Event)
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.975 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:45 compute-0 nova_compute[256729]: 2025-11-29 08:09:45.979 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:46 compute-0 nova_compute[256729]: 2025-11-29 08:09:46.014 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:46 compute-0 podman[301391]: 2025-11-29 08:09:46.083358656 +0000 UTC m=+0.036914890 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:09:46 compute-0 podman[301391]: 2025-11-29 08:09:46.231876285 +0000 UTC m=+0.185432499 container create 74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:09:46 compute-0 ceph-mon[75050]: pgmap v2155: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 83 op/s
Nov 29 08:09:46 compute-0 systemd[1]: Started libpod-conmon-74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37.scope.
Nov 29 08:09:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf7ec41c485bfa12bcf90bb84a476395856b16d4a2d16cc6c84977a7139dc9d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:09:46 compute-0 podman[301391]: 2025-11-29 08:09:46.664180525 +0000 UTC m=+0.617736819 container init 74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:09:46 compute-0 podman[301391]: 2025-11-29 08:09:46.67089644 +0000 UTC m=+0.624452694 container start 74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 08:09:46 compute-0 neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12[301407]: [NOTICE]   (301411) : New worker (301413) forked
Nov 29 08:09:46 compute-0 neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12[301407]: [NOTICE]   (301411) : Loading success.
Nov 29 08:09:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 MiB/s wr, 94 op/s
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.577 256736 DEBUG nova.network.neutron [-] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.601 256736 INFO nova.compute.manager [-] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Took 1.66 seconds to deallocate network for instance.
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.655 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.655 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.669 256736 DEBUG nova.compute.manager [req-7e5b42ac-3b25-4b15-b8ba-bb000caf81f5 req-7cbba597-69f5-48fd-9168-67708a4def86 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-vif-deleted-c859afc6-4da0-4faa-8af3-72d4c6d25f9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.728 256736 DEBUG oslo_concurrency.processutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.907 256736 DEBUG nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received event network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.908 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.909 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.909 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.909 256736 DEBUG nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] No waiting events found dispatching network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.910 256736 WARNING nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Received unexpected event network-vif-plugged-c859afc6-4da0-4faa-8af3-72d4c6d25f9b for instance with vm_state deleted and task_state None.
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.910 256736 DEBUG nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.910 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.911 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.911 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.911 256736 DEBUG nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Processing event network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.912 256736 DEBUG nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.912 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.912 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.913 256736 DEBUG oslo_concurrency.lockutils [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.913 256736 DEBUG nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] No waiting events found dispatching network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.913 256736 WARNING nova.compute.manager [req-d242e8a9-a4b7-42b6-9c72-21338f2cbfca req-631f4ab4-8e07-4ede-a8cd-21f1082aed9f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received unexpected event network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 for instance with vm_state building and task_state spawning.
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.914 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.920 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.923 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403787.922391, aee08d25-d8a2-48f8-ac6e-a5b99c377db1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.923 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] VM Resumed (Lifecycle Event)
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.931 256736 INFO nova.virt.libvirt.driver [-] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Instance spawned successfully.
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.931 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.944 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.953 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.956 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.957 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.958 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.958 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.959 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.959 256736 DEBUG nova.virt.libvirt.driver [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:09:47 compute-0 nova_compute[256729]: 2025-11-29 08:09:47.979 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.016 256736 INFO nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Took 7.13 seconds to spawn the instance on the hypervisor.
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.017 256736 DEBUG nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.018 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.097 256736 INFO nova.compute.manager [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Took 9.20 seconds to build instance.
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.119 256736 DEBUG oslo_concurrency.lockutils [None req-21e03b26-2728-4d23-8b19-f728bc33993d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:09:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209034745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.227 256736 DEBUG oslo_concurrency.processutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.234 256736 DEBUG nova.compute.provider_tree [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:09:48 compute-0 ceph-mon[75050]: pgmap v2156: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 MiB/s wr, 94 op/s
Nov 29 08:09:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/209034745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.250 256736 DEBUG nova.scheduler.client.report [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.273 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.311 256736 INFO nova.scheduler.client.report [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Deleted allocations for instance 07bebcf7-a7f6-4074-8d77-e89bbce7f710
Nov 29 08:09:48 compute-0 nova_compute[256729]: 2025-11-29 08:09:48.370 256736 DEBUG oslo_concurrency.lockutils [None req-ad3b8792-12fd-49ca-96cc-e4f1cd9ab70d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "07bebcf7-a7f6-4074-8d77-e89bbce7f710" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:09:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 145 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 29 08:09:49 compute-0 nova_compute[256729]: 2025-11-29 08:09:49.875 256736 DEBUG nova.compute.manager [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-changed-3e7651b6-5be6-447a-86e2-4009c6aac334 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:09:49 compute-0 nova_compute[256729]: 2025-11-29 08:09:49.876 256736 DEBUG nova.compute.manager [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Refreshing instance network info cache due to event network-changed-3e7651b6-5be6-447a-86e2-4009c6aac334. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:09:49 compute-0 nova_compute[256729]: 2025-11-29 08:09:49.876 256736 DEBUG oslo_concurrency.lockutils [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:09:49 compute-0 nova_compute[256729]: 2025-11-29 08:09:49.877 256736 DEBUG oslo_concurrency.lockutils [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:09:49 compute-0 nova_compute[256729]: 2025-11-29 08:09:49.877 256736 DEBUG nova.network.neutron [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Refreshing network info cache for port 3e7651b6-5be6-447a-86e2-4009c6aac334 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:09:50 compute-0 nova_compute[256729]: 2025-11-29 08:09:50.085 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:50 compute-0 ceph-mon[75050]: pgmap v2157: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 145 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 29 08:09:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4009823386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4009823386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:51 compute-0 nova_compute[256729]: 2025-11-29 08:09:51.075 256736 DEBUG nova.network.neutron [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updated VIF entry in instance network info cache for port 3e7651b6-5be6-447a-86e2-4009c6aac334. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:09:51 compute-0 nova_compute[256729]: 2025-11-29 08:09:51.076 256736 DEBUG nova.network.neutron [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updating instance_info_cache with network_info: [{"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:09:51 compute-0 nova_compute[256729]: 2025-11-29 08:09:51.095 256736 DEBUG oslo_concurrency.lockutils [req-c4af8789-383b-4abb-b781-41fc96652cd6 req-28d3d3bc-5f1e-482b-a0f1-b9599daef436 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:09:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 144 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 29 08:09:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4009823386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4009823386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:09:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145082176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:09:51 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145082176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:52 compute-0 ceph-mon[75050]: pgmap v2158: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 144 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 29 08:09:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/145082176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:09:52 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/145082176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:09:53 compute-0 nova_compute[256729]: 2025-11-29 08:09:53.021 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Nov 29 08:09:54 compute-0 ceph-mon[75050]: pgmap v2159: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Nov 29 08:09:55 compute-0 nova_compute[256729]: 2025-11-29 08:09:55.131 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 607 KiB/s wr, 136 op/s
Nov 29 08:09:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:09:56 compute-0 ceph-mon[75050]: pgmap v2160: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 607 KiB/s wr, 136 op/s
Nov 29 08:09:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 134 op/s
Nov 29 08:09:58 compute-0 nova_compute[256729]: 2025-11-29 08:09:58.022 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:09:58 compute-0 ceph-mon[75050]: pgmap v2161: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 134 op/s
Nov 29 08:09:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.3 KiB/s wr, 93 op/s
Nov 29 08:09:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:59.788 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:09:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:59.789 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:09:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:09:59.790 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:00 compute-0 nova_compute[256729]: 2025-11-29 08:10:00.049 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403785.026383, 07bebcf7-a7f6-4074-8d77-e89bbce7f710 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:00 compute-0 nova_compute[256729]: 2025-11-29 08:10:00.049 256736 INFO nova.compute.manager [-] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] VM Stopped (Lifecycle Event)
Nov 29 08:10:00 compute-0 nova_compute[256729]: 2025-11-29 08:10:00.080 256736 DEBUG nova.compute.manager [None req-f38c7ad5-db98-4d38-bec5-59107ddb5591 - - - - - -] [instance: 07bebcf7-a7f6-4074-8d77-e89bbce7f710] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:00 compute-0 nova_compute[256729]: 2025-11-29 08:10:00.191 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:00 compute-0 ceph-mon[75050]: pgmap v2162: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.3 KiB/s wr, 93 op/s
Nov 29 08:10:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 596 B/s wr, 87 op/s
Nov 29 08:10:01 compute-0 ovn_controller[153383]: 2025-11-29T08:10:01Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:1b:ea 10.100.0.11
Nov 29 08:10:01 compute-0 ovn_controller[153383]: 2025-11-29T08:10:01Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:1b:ea 10.100.0.11
Nov 29 08:10:02 compute-0 ceph-mon[75050]: pgmap v2163: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 596 B/s wr, 87 op/s
Nov 29 08:10:02 compute-0 sudo[301446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:02 compute-0 sudo[301446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:02 compute-0 sudo[301446]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:02 compute-0 sudo[301471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:02 compute-0 sudo[301471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:02 compute-0 sudo[301471]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:02 compute-0 sudo[301496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:02 compute-0 sudo[301496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:02 compute-0 sudo[301496]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:02 compute-0 sudo[301521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 08:10:02 compute-0 sudo[301521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:03 compute-0 nova_compute[256729]: 2025-11-29 08:10:03.024 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 145 op/s
Nov 29 08:10:03 compute-0 podman[301618]: 2025-11-29 08:10:03.373253822 +0000 UTC m=+0.088984796 container exec 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:10:03 compute-0 podman[301618]: 2025-11-29 08:10:03.488557825 +0000 UTC m=+0.204288709 container exec_died 21a56ae912cb8d8d1f0dc09cd0d64941e849dd5a597340fef403575f5f6dca90 (image=quay.io/ceph/ceph:v18, name=ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:04 compute-0 sudo[301521]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:10:04 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:10:04 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:04 compute-0 sudo[301777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:04 compute-0 sudo[301777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:04 compute-0 sudo[301777]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:04 compute-0 sudo[301802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:04 compute-0 sudo[301802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:04 compute-0 sudo[301802]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:04 compute-0 ceph-mon[75050]: pgmap v2164: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 145 op/s
Nov 29 08:10:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:04 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:04 compute-0 sudo[301827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:04 compute-0 sudo[301827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:04 compute-0 sudo[301827]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:04 compute-0 sudo[301852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:10:04 compute-0 sudo[301852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:05 compute-0 sudo[301852]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev cc41b2f4-7a73-47d1-8660-0df55ccb233a does not exist
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 74b0ee52-0621-47d2-91ed-52d544faa852 does not exist
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ac7caccc-8068-401c-8655-63d913ff0d01 does not exist
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:10:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:05 compute-0 nova_compute[256729]: 2025-11-29 08:10:05.192 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.219224) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403805219306, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1817, "num_deletes": 261, "total_data_size": 2569038, "memory_usage": 2615464, "flush_reason": "Manual Compaction"}
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 29 08:10:05 compute-0 sudo[301908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:05 compute-0 sudo[301908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:05 compute-0 sudo[301908]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403805234499, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1686275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36973, "largest_seqno": 38789, "table_properties": {"data_size": 1679531, "index_size": 3626, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17407, "raw_average_key_size": 21, "raw_value_size": 1664741, "raw_average_value_size": 2060, "num_data_blocks": 161, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403667, "oldest_key_time": 1764403667, "file_creation_time": 1764403805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 15326 microseconds, and 6878 cpu microseconds.
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.234565) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1686275 bytes OK
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.234582) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.235889) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.235902) EVENT_LOG_v1 {"time_micros": 1764403805235898, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.235917) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2561098, prev total WAL file size 2561098, number of live WAL files 2.
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.236885) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1646KB)], [77(10MB)]
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403805236950, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13002702, "oldest_snapshot_seqno": -1}
Nov 29 08:10:05 compute-0 sudo[301933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:05 compute-0 sudo[301933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:05 compute-0 sudo[301933]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7156 keys, 10514282 bytes, temperature: kUnknown
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403805325447, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 10514282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10462705, "index_size": 32585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 181231, "raw_average_key_size": 25, "raw_value_size": 10330395, "raw_average_value_size": 1443, "num_data_blocks": 1303, "num_entries": 7156, "num_filter_entries": 7156, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403805, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.325812) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10514282 bytes
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.328117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.6 rd, 118.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(13.9) write-amplify(6.2) OK, records in: 7625, records dropped: 469 output_compression: NoCompression
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.328134) EVENT_LOG_v1 {"time_micros": 1764403805328125, "job": 44, "event": "compaction_finished", "compaction_time_micros": 88673, "compaction_time_cpu_micros": 48591, "output_level": 6, "num_output_files": 1, "total_output_size": 10514282, "num_input_records": 7625, "num_output_records": 7156, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403805328771, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403805330826, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.236772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.330950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.330954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.330957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.330962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:05.330982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:05 compute-0 sudo[301958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:05 compute-0 sudo[301958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:05 compute-0 sudo[301958]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:05 compute-0 sudo[301983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:10:05 compute-0 sudo[301983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:10:05 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:10:05 compute-0 nova_compute[256729]: 2025-11-29 08:10:05.581 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:10:05
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 29 08:10:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:10:05 compute-0 podman[302048]: 2025-11-29 08:10:05.808286864 +0000 UTC m=+0.048360347 container create 45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 08:10:05 compute-0 systemd[1]: Started libpod-conmon-45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6.scope.
Nov 29 08:10:05 compute-0 podman[302048]: 2025-11-29 08:10:05.787581582 +0000 UTC m=+0.027655115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:05 compute-0 podman[302048]: 2025-11-29 08:10:05.914324119 +0000 UTC m=+0.154397622 container init 45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:05 compute-0 podman[302048]: 2025-11-29 08:10:05.926729062 +0000 UTC m=+0.166802585 container start 45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:10:05 compute-0 podman[302048]: 2025-11-29 08:10:05.932351407 +0000 UTC m=+0.172424950 container attach 45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 08:10:05 compute-0 nice_nightingale[302064]: 167 167
Nov 29 08:10:05 compute-0 systemd[1]: libpod-45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6.scope: Deactivated successfully.
Nov 29 08:10:05 compute-0 podman[302048]: 2025-11-29 08:10:05.936540652 +0000 UTC m=+0.176614175 container died 45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0723b8f3067fc6a6a915b6cae225c2429fb22972bbf17de4afe72cc0558f8138-merged.mount: Deactivated successfully.
Nov 29 08:10:05 compute-0 podman[302048]: 2025-11-29 08:10:05.9904319 +0000 UTC m=+0.230505383 container remove 45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 08:10:06 compute-0 systemd[1]: libpod-conmon-45af714021bef7d2a1a6229b7b3b21a97989e60ef1805e5462a113e5469ccbe6.scope: Deactivated successfully.
Nov 29 08:10:06 compute-0 nova_compute[256729]: 2025-11-29 08:10:06.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:06 compute-0 podman[302087]: 2025-11-29 08:10:06.275870578 +0000 UTC m=+0.102279184 container create 635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haibt, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:10:06 compute-0 podman[302087]: 2025-11-29 08:10:06.218094113 +0000 UTC m=+0.044502719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:06 compute-0 systemd[1]: Started libpod-conmon-635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c.scope.
Nov 29 08:10:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1002b0d68e813ca1e9a6588d63cb0f0994eef38e9d3e27fbf5098736dfcaa084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1002b0d68e813ca1e9a6588d63cb0f0994eef38e9d3e27fbf5098736dfcaa084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1002b0d68e813ca1e9a6588d63cb0f0994eef38e9d3e27fbf5098736dfcaa084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1002b0d68e813ca1e9a6588d63cb0f0994eef38e9d3e27fbf5098736dfcaa084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1002b0d68e813ca1e9a6588d63cb0f0994eef38e9d3e27fbf5098736dfcaa084/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:06 compute-0 podman[302087]: 2025-11-29 08:10:06.366337834 +0000 UTC m=+0.192746420 container init 635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haibt, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:06 compute-0 podman[302087]: 2025-11-29 08:10:06.379763495 +0000 UTC m=+0.206172071 container start 635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 08:10:06 compute-0 podman[302087]: 2025-11-29 08:10:06.383095126 +0000 UTC m=+0.209503692 container attach 635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:10:06 compute-0 ceph-mon[75050]: pgmap v2165: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:10:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 08:10:07 compute-0 elastic_haibt[302104]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:10:07 compute-0 elastic_haibt[302104]: --> relative data size: 1.0
Nov 29 08:10:07 compute-0 elastic_haibt[302104]: --> All data devices are unavailable
Nov 29 08:10:07 compute-0 systemd[1]: libpod-635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c.scope: Deactivated successfully.
Nov 29 08:10:07 compute-0 podman[302087]: 2025-11-29 08:10:07.463009189 +0000 UTC m=+1.289417825 container died 635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haibt, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:10:07 compute-0 systemd[1]: libpod-635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c.scope: Consumed 1.022s CPU time.
Nov 29 08:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1002b0d68e813ca1e9a6588d63cb0f0994eef38e9d3e27fbf5098736dfcaa084-merged.mount: Deactivated successfully.
Nov 29 08:10:07 compute-0 podman[302087]: 2025-11-29 08:10:07.532767374 +0000 UTC m=+1.359175960 container remove 635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_haibt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:07 compute-0 systemd[1]: libpod-conmon-635766f12a94d71e07e196533d8aaeda45021f3809ec753f6def06464840b27c.scope: Deactivated successfully.
Nov 29 08:10:07 compute-0 sudo[301983]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:07 compute-0 sudo[302142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:07 compute-0 sudo[302142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:07 compute-0 sudo[302142]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:07 compute-0 sudo[302167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:07 compute-0 sudo[302167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:07 compute-0 sudo[302167]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:07 compute-0 sudo[302192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:07 compute-0 sudo[302192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:07 compute-0 sudo[302192]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:07 compute-0 sudo[302217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:10:07 compute-0 sudo[302217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:08 compute-0 nova_compute[256729]: 2025-11-29 08:10:08.028 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:08 compute-0 nova_compute[256729]: 2025-11-29 08:10:08.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:08 compute-0 nova_compute[256729]: 2025-11-29 08:10:08.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:08 compute-0 podman[302282]: 2025-11-29 08:10:08.261926337 +0000 UTC m=+0.043877592 container create 3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:10:08 compute-0 systemd[1]: Started libpod-conmon-3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164.scope.
Nov 29 08:10:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:08 compute-0 podman[302282]: 2025-11-29 08:10:08.241573845 +0000 UTC m=+0.023525070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:08 compute-0 podman[302282]: 2025-11-29 08:10:08.356352253 +0000 UTC m=+0.138303518 container init 3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:10:08 compute-0 podman[302282]: 2025-11-29 08:10:08.368315063 +0000 UTC m=+0.150266318 container start 3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:10:08 compute-0 podman[302282]: 2025-11-29 08:10:08.372660363 +0000 UTC m=+0.154611688 container attach 3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:10:08 compute-0 admiring_mirzakhani[302298]: 167 167
Nov 29 08:10:08 compute-0 podman[302282]: 2025-11-29 08:10:08.377431025 +0000 UTC m=+0.159382250 container died 3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 08:10:08 compute-0 systemd[1]: libpod-3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164.scope: Deactivated successfully.
Nov 29 08:10:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d671bf661907fa3deb9573c0618c5192b9b0d2b1ec377c7363352e983f6c9f38-merged.mount: Deactivated successfully.
Nov 29 08:10:08 compute-0 podman[302282]: 2025-11-29 08:10:08.422230681 +0000 UTC m=+0.204181916 container remove 3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 08:10:08 compute-0 systemd[1]: libpod-conmon-3364e9130aec1ebb40fb41c2999496ba6c127a38ea5deb150a2e1871854e3164.scope: Deactivated successfully.
Nov 29 08:10:08 compute-0 ceph-mon[75050]: pgmap v2166: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 29 08:10:08 compute-0 podman[302322]: 2025-11-29 08:10:08.635123817 +0000 UTC m=+0.064829840 container create 9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:10:08 compute-0 systemd[1]: Started libpod-conmon-9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a.scope.
Nov 29 08:10:08 compute-0 podman[302322]: 2025-11-29 08:10:08.612158733 +0000 UTC m=+0.041864796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4198528947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4198528947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7efd994155c97d88296e53574fd960604b77c348729d58d2a8087ef2e4d0bcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7efd994155c97d88296e53574fd960604b77c348729d58d2a8087ef2e4d0bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7efd994155c97d88296e53574fd960604b77c348729d58d2a8087ef2e4d0bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7efd994155c97d88296e53574fd960604b77c348729d58d2a8087ef2e4d0bcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:08 compute-0 podman[302322]: 2025-11-29 08:10:08.741890693 +0000 UTC m=+0.171596746 container init 9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 08:10:08 compute-0 podman[302322]: 2025-11-29 08:10:08.758526582 +0000 UTC m=+0.188232625 container start 9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:10:08 compute-0 podman[302322]: 2025-11-29 08:10:08.762491291 +0000 UTC m=+0.192197344 container attach 9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 08:10:09 compute-0 nova_compute[256729]: 2025-11-29 08:10:09.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 103 op/s
Nov 29 08:10:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4198528947' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4198528947' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]: {
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:     "0": [
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:         {
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "devices": [
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "/dev/loop3"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             ],
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_name": "ceph_lv0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_size": "21470642176",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "name": "ceph_lv0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "tags": {
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cluster_name": "ceph",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.crush_device_class": "",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.encrypted": "0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osd_id": "0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.type": "block",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.vdo": "0"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             },
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "type": "block",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "vg_name": "ceph_vg0"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:         }
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:     ],
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:     "1": [
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:         {
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "devices": [
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "/dev/loop4"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             ],
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_name": "ceph_lv1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_size": "21470642176",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "name": "ceph_lv1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "tags": {
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cluster_name": "ceph",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.crush_device_class": "",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.encrypted": "0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osd_id": "1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.type": "block",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.vdo": "0"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             },
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "type": "block",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "vg_name": "ceph_vg1"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:         }
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:     ],
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:     "2": [
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:         {
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "devices": [
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "/dev/loop5"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             ],
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_name": "ceph_lv2",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_size": "21470642176",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "name": "ceph_lv2",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "tags": {
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.cluster_name": "ceph",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.crush_device_class": "",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.encrypted": "0",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osd_id": "2",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.type": "block",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:                 "ceph.vdo": "0"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             },
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "type": "block",
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:             "vg_name": "ceph_vg2"
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:         }
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]:     ]
Nov 29 08:10:09 compute-0 quizzical_davinci[302339]: }
Nov 29 08:10:09 compute-0 systemd[1]: libpod-9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a.scope: Deactivated successfully.
Nov 29 08:10:09 compute-0 podman[302322]: 2025-11-29 08:10:09.567870688 +0000 UTC m=+0.997576741 container died 9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7efd994155c97d88296e53574fd960604b77c348729d58d2a8087ef2e4d0bcd-merged.mount: Deactivated successfully.
Nov 29 08:10:09 compute-0 podman[302322]: 2025-11-29 08:10:09.68972959 +0000 UTC m=+1.119435643 container remove 9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_davinci, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:10:09 compute-0 systemd[1]: libpod-conmon-9dcd9ab8818502cf24e3a72ddc04d9f5433bad83c190615545572a0f307c8f3a.scope: Deactivated successfully.
Nov 29 08:10:09 compute-0 sudo[302217]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:09 compute-0 sudo[302362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:09 compute-0 sudo[302362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:09 compute-0 sudo[302362]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:09 compute-0 sudo[302387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:10:09 compute-0 sudo[302387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:09 compute-0 sudo[302387]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:09 compute-0 sudo[302412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:09 compute-0 sudo[302412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:09 compute-0 sudo[302412]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:10 compute-0 sudo[302437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:10:10 compute-0 sudo[302437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.214 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:10 compute-0 podman[302503]: 2025-11-29 08:10:10.453591641 +0000 UTC m=+0.070402303 container create d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:10:10 compute-0 systemd[1]: Started libpod-conmon-d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3.scope.
Nov 29 08:10:10 compute-0 podman[302503]: 2025-11-29 08:10:10.425893437 +0000 UTC m=+0.042704179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:10 compute-0 ceph-mon[75050]: pgmap v2167: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 103 op/s
Nov 29 08:10:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:10 compute-0 podman[302503]: 2025-11-29 08:10:10.553374495 +0000 UTC m=+0.170185247 container init d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:10:10 compute-0 podman[302503]: 2025-11-29 08:10:10.562640791 +0000 UTC m=+0.179451493 container start d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:10:10 compute-0 podman[302503]: 2025-11-29 08:10:10.566682342 +0000 UTC m=+0.183493024 container attach d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:10:10 compute-0 quizzical_lamport[302519]: 167 167
Nov 29 08:10:10 compute-0 systemd[1]: libpod-d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3.scope: Deactivated successfully.
Nov 29 08:10:10 compute-0 podman[302503]: 2025-11-29 08:10:10.572289547 +0000 UTC m=+0.189100239 container died d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 29 08:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b0456ca61c8d79af8e47daf5e9be64892a59282055ff3a8166bff7e7bcb910-merged.mount: Deactivated successfully.
Nov 29 08:10:10 compute-0 podman[302503]: 2025-11-29 08:10:10.6216698 +0000 UTC m=+0.238480502 container remove d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 08:10:10 compute-0 systemd[1]: libpod-conmon-d68b23ee26be4c2863e3f614f9642e3520eaa0e9774811763f3c5db0a58cdfb3.scope: Deactivated successfully.
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.686 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.687 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquired lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.687 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 08:10:10 compute-0 nova_compute[256729]: 2025-11-29 08:10:10.687 256736 DEBUG nova.objects.instance [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lazy-loading 'info_cache' on Instance uuid aee08d25-d8a2-48f8-ac6e-a5b99c377db1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:10 compute-0 podman[302542]: 2025-11-29 08:10:10.888650958 +0000 UTC m=+0.067361330 container create a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:10:10 compute-0 systemd[1]: Started libpod-conmon-a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f.scope.
Nov 29 08:10:10 compute-0 podman[302542]: 2025-11-29 08:10:10.859483833 +0000 UTC m=+0.038194255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:10:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b718b73736342c439971831bbed3773a4dbf00a68fb05eed864353daea3f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b718b73736342c439971831bbed3773a4dbf00a68fb05eed864353daea3f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b718b73736342c439971831bbed3773a4dbf00a68fb05eed864353daea3f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6b718b73736342c439971831bbed3773a4dbf00a68fb05eed864353daea3f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
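The four xfs warnings above are the kernel noting, at each bind-mount into the container's overlay, that the filesystem's 32-bit inode timestamps run out at 0x7fffffff seconds after the epoch (the Y2038 limit). A quick check of that constant, plain Python with no assumptions beyond the hex value in the log:

```python
from datetime import datetime, timezone

# 0x7fffffff = 2147483647, the largest signed 32-bit time_t value,
# which is why the kernel says "supports timestamps until 2038".
limit = 0x7FFFFFFF
print(limit)                                           # 2147483647
print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
```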
Nov 29 08:10:11 compute-0 podman[302542]: 2025-11-29 08:10:11.009572555 +0000 UTC m=+0.188282917 container init a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 08:10:11 compute-0 podman[302542]: 2025-11-29 08:10:11.026296056 +0000 UTC m=+0.205006428 container start a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:11 compute-0 podman[302542]: 2025-11-29 08:10:11.03040343 +0000 UTC m=+0.209113812 container attach a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wescoff, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.730 256736 DEBUG nova.network.neutron [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updating instance_info_cache with network_info: [{"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
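The `Updating instance_info_cache with network_info` payload above is a JSON list of VIF records that the periodic `_heal_instance_info_cache` task just refreshed. A minimal sketch of reading the fixed and floating addresses out of such a record; the literal below is abridged from the log line, and only keys actually visible there are assumed:

```python
import json

# Abridged copy of the network_info structure from the log line above.
network_info = json.loads('''
[{"id": "3e7651b6-5be6-447a-86e2-4009c6aac334",
  "address": "fa:16:3e:af:1b:ea",
  "network": {"subnets": [{"cidr": "10.100.0.0/28",
    "ips": [{"address": "10.100.0.11", "type": "fixed",
             "floating_ips": [{"address": "192.168.122.247",
                               "type": "floating"}]}]}]}}]
''')

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floating = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", floating)
# 3e7651b6-5be6-447a-86e2-4009c6aac334 10.100.0.11 -> ['192.168.122.247']
```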
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.771 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Releasing lock "refresh_cache-aee08d25-d8a2-48f8-ac6e-a5b99c377db1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.771 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.772 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.773 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.773 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.802 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.803 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.803 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.804 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:10:11 compute-0 nova_compute[256729]: 2025-11-29 08:10:11.804 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:12 compute-0 bold_wescoff[302559]: {
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "osd_id": 2,
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "type": "bluestore"
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:     },
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "osd_id": 1,
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "type": "bluestore"
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:     },
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "osd_id": 0,
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:         "type": "bluestore"
Nov 29 08:10:12 compute-0 bold_wescoff[302559]:     }
Nov 29 08:10:12 compute-0 bold_wescoff[302559]: }
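The `bold_wescoff` output above is the stdout of the cephadm call logged at 08:10:10 (`ceph-volume ... raw list --format json`): a map from each OSD's uuid to its backing device, id, and store type. A minimal sketch of issuing and consuming the same inventory query, assuming only a host with ceph-volume installed, root privileges, and the key names visible in the log (`ceph_fsid`, `device`, `osd_id`, `type`):

```python
import json
import subprocess

# Same inventory query cephadm ran above, outside any container.
out = subprocess.run(
    ["ceph-volume", "raw", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for osd_uuid, osd in json.loads(out).items():
    print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']} "
          f"in cluster {osd['ceph_fsid']}")
# e.g. osd.2 (bluestore) on /dev/mapper/ceph_vg2-ceph_lv2 in cluster 14ff1f30-...
```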
Nov 29 08:10:12 compute-0 systemd[1]: libpod-a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f.scope: Deactivated successfully.
Nov 29 08:10:12 compute-0 systemd[1]: libpod-a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f.scope: Consumed 1.055s CPU time.
Nov 29 08:10:12 compute-0 podman[302542]: 2025-11-29 08:10:12.078902896 +0000 UTC m=+1.257613288 container died a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wescoff, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc6b718b73736342c439971831bbed3773a4dbf00a68fb05eed864353daea3f0-merged.mount: Deactivated successfully.
Nov 29 08:10:12 compute-0 podman[302542]: 2025-11-29 08:10:12.138844179 +0000 UTC m=+1.317554521 container remove a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wescoff, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:12 compute-0 systemd[1]: libpod-conmon-a1fb9db1be6617f49e5ffd88031f09dccd9a0075b12261f3733081316d95de3f.scope: Deactivated successfully.
Nov 29 08:10:12 compute-0 sudo[302437]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:10:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:10:12 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:12 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 897c31cc-9f77-4146-b687-d7912fba7ea1 does not exist
Nov 29 08:10:12 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev d0f1eb5f-50bd-4840-bed0-3ff1514ef64c does not exist
Nov 29 08:10:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991688769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.260 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
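The resource-tracker audit started at 08:10:11 shells out to `ceph df --format=json` (the oslo_concurrency.processutils lines above, mirrored by the mon's audit-channel dispatch entries) to size the RBD pool backing ephemeral disks. A minimal sketch of the same probe; the top-level `stats` keys used below (`total_bytes`, `total_used_bytes`, `total_avail_bytes`) are the usual `ceph df` JSON fields, but treat them as an assumption if your Ceph release differs:

```python
import json
import subprocess

# The exact command the log shows Nova running, same client and conf.
cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                               check=True).stdout)

stats = df["stats"]
gib = 1024 ** 3
print(f"{stats['total_used_bytes'] / gib:.1f} GiB used, "
      f"{stats['total_avail_bytes'] / gib:.1f} / "
      f"{stats['total_bytes'] / gib:.1f} GiB avail")
# Should agree with the pgmap lines: 2.5 GiB used, 57 / 60 GiB avail.
```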
Nov 29 08:10:12 compute-0 sudo[302624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:10:12 compute-0 sudo[302624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:12 compute-0 sudo[302624]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.339 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.340 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.340 256736 DEBUG nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 08:10:12 compute-0 sudo[302652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:10:12 compute-0 sudo[302652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:10:12 compute-0 sudo[302652]: pam_unix(sudo:session): session closed for user root
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.536 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.538 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4139MB free_disk=59.96735763549805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.538 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.539 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:12 compute-0 ceph-mon[75050]: pgmap v2168: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Nov 29 08:10:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:12 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:10:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2991688769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.681 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance aee08d25-d8a2-48f8-ac6e-a5b99c377db1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.681 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.682 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:10:12 compute-0 nova_compute[256729]: 2025-11-29 08:10:12.754 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:13 compute-0 nova_compute[256729]: 2025-11-29 08:10:13.030 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 103 op/s
Nov 29 08:10:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201843872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:13 compute-0 nova_compute[256729]: 2025-11-29 08:10:13.241 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:13 compute-0 nova_compute[256729]: 2025-11-29 08:10:13.249 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:13 compute-0 nova_compute[256729]: 2025-11-29 08:10:13.268 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
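The inventory dict above is what Placement actually schedules against, and its headline capacities follow from the standard formula, capacity = (total - reserved) * allocation_ratio: VCPU (8 - 0) * 4.0 = 32, MEMORY_MB (7680 - 512) * 1.0 = 7168, DISK_GB (59 - 1) * 0.9 = 52.2. A one-screen reproduction of that arithmetic with the values copied verbatim from the log line:

```python
# Inventory exactly as logged by nova.scheduler.client.report above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

# Placement capacity: (total - reserved) * allocation_ratio.
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU: 32  MEMORY_MB: 7168  DISK_GB: 52.2
```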
Nov 29 08:10:13 compute-0 nova_compute[256729]: 2025-11-29 08:10:13.302 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:10:13 compute-0 nova_compute[256729]: 2025-11-29 08:10:13.303 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4201843872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:14 compute-0 ceph-mon[75050]: pgmap v2169: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 103 op/s
Nov 29 08:10:14 compute-0 podman[302700]: 2025-11-29 08:10:14.722517682 +0000 UTC m=+0.087569788 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:10:14 compute-0 podman[302701]: 2025-11-29 08:10:14.727371376 +0000 UTC m=+0.080630826 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 08:10:14 compute-0 podman[302699]: 2025-11-29 08:10:14.80901706 +0000 UTC m=+0.168652057 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
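The three health_status=healthy events above come from podman's periodic healthchecks executing the `/openstack/healthcheck` script that each `config_data` mounts into the container. A minimal sketch for spot-checking the same state by hand, assuming stock podman behavior (`podman healthcheck run` exits 0 on a passing check, and `podman inspect` exposes `.State.Health.Status` for containers that define one):

```python
import subprocess

for name in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
    # Exit code 0 means the container's configured healthcheck passed.
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True,
    ).stdout.strip()
    print(f"{name}: rc={rc} status={status}")
```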
Nov 29 08:10:15 compute-0 nova_compute[256729]: 2025-11-29 08:10:15.216 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 144 KiB/s rd, 684 KiB/s wr, 44 op/s
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487950323956502 of space, bias 1.0, pg target 0.10463850971869505 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.034454335767076905 of space, bias 1.0, pg target 10.336300730123071 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10037602457348441 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19319111398710687 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 16)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:10:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
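Each pg_autoscaler pair above applies one rule: raw pg target = pool's share of space × bias × an overall PG budget, after which the value is rounded to a power of two, clamped to a floor, and only applied when it lands far from the current pg_num. The budget here is evidently about 300, consistent with the default mon_target_pg_per_osd of 100 times the 3 OSDs listed earlier (0.034454335767... × 300 = 10.3363..., exactly the 'volumes' line), and the floors visible in the log are 32 by default, 16 for the cephfs metadata pool, and 1 for .mgr. A sketch of that arithmetic; the 300 budget and the floor values are inferred from these lines plus upstream defaults, not stated by the log itself:

```python
def nearest_power_of_two(x: float) -> int:
    # Round to the nearest power of two, never below 1.
    if x <= 1:
        return 1
    lo = 1 << (int(x).bit_length() - 1)
    return lo if x - lo < 2 * lo - x else 2 * lo

def pg_target(capacity_ratio: float, bias: float = 1.0,
              budget: int = 300, pg_num_min: int = 32) -> int:
    raw = capacity_ratio * bias * budget
    return max(pg_num_min, nearest_power_of_two(raw))

# 'volumes' from the log: raw target 10.336..., quantized to 32.
print(pg_target(0.034454335767076905))                  # 32
# '.mgr' from the log: tiny usage, pg_num_min 1 -> stays at 1.
print(pg_target(7.185749983720779e-06, pg_num_min=1))   # 1
```

Since every quantized value equals the pool's current pg_num, the autoscaler makes no adjustment this pass.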
Nov 29 08:10:16 compute-0 ceph-mon[75050]: pgmap v2170: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 144 KiB/s rd, 684 KiB/s wr, 44 op/s
Nov 29 08:10:16 compute-0 nova_compute[256729]: 2025-11-29 08:10:16.679 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.0 KiB/s rd, 34 KiB/s wr, 5 op/s
Nov 29 08:10:18 compute-0 nova_compute[256729]: 2025-11-29 08:10:18.034 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:18 compute-0 ceph-mon[75050]: pgmap v2171: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.0 KiB/s rd, 34 KiB/s wr, 5 op/s
Nov 29 08:10:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 14 KiB/s rd, 3.4 MiB/s wr, 19 op/s
Nov 29 08:10:19 compute-0 ovn_controller[153383]: 2025-11-29T08:10:19Z|00278|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 29 08:10:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:20 compute-0 nova_compute[256729]: 2025-11-29 08:10:20.219 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:20 compute-0 ceph-mon[75050]: pgmap v2172: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 14 KiB/s rd, 3.4 MiB/s wr, 19 op/s
Nov 29 08:10:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 11 KiB/s rd, 3.3 MiB/s wr, 15 op/s
Nov 29 08:10:21 compute-0 nova_compute[256729]: 2025-11-29 08:10:21.992 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:21 compute-0 nova_compute[256729]: 2025-11-29 08:10:21.992 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:21 compute-0 nova_compute[256729]: 2025-11-29 08:10:21.993 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:21 compute-0 nova_compute[256729]: 2025-11-29 08:10:21.994 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:21 compute-0 nova_compute[256729]: 2025-11-29 08:10:21.994 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:21 compute-0 nova_compute[256729]: 2025-11-29 08:10:21.997 256736 INFO nova.compute.manager [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Terminating instance
Nov 29 08:10:21 compute-0 nova_compute[256729]: 2025-11-29 08:10:21.999 256736 DEBUG nova.compute.manager [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:10:22 compute-0 kernel: tap3e7651b6-5b (unregistering): left promiscuous mode
Nov 29 08:10:22 compute-0 NetworkManager[48962]: <info>  [1764403822.0662] device (tap3e7651b6-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:10:22 compute-0 ovn_controller[153383]: 2025-11-29T08:10:22Z|00279|binding|INFO|Releasing lport 3e7651b6-5be6-447a-86e2-4009c6aac334 from this chassis (sb_readonly=0)
Nov 29 08:10:22 compute-0 ovn_controller[153383]: 2025-11-29T08:10:22Z|00280|binding|INFO|Setting lport 3e7651b6-5be6-447a-86e2-4009c6aac334 down in Southbound
Nov 29 08:10:22 compute-0 ovn_controller[153383]: 2025-11-29T08:10:22Z|00281|binding|INFO|Removing iface tap3e7651b6-5b ovn-installed in OVS
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.082 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.089 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:1b:ea 10.100.0.11'], port_security=['fa:16:3e:af:1b:ea 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee08d25-d8a2-48f8-ac6e-a5b99c377db1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '364f361ce7b54bc6a4799a29705c1d0a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd6c90c70-9e8f-4ea0-aaf7-2c748510bf4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3f5f237-0ed4-4ea2-9f76-b70e9626d9cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=3e7651b6-5be6-447a-86e2-4009c6aac334) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.092 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 3e7651b6-5be6-447a-86e2-4009c6aac334 in datapath 125cb0ae-5b9b-472c-a598-63b3f1d26e12 unbound from our chassis
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.094 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 125cb0ae-5b9b-472c-a598-63b3f1d26e12, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.096 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cc827d74-53da-4df8-9f8c-94377588f137]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.097 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12 namespace which is not needed anymore
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.117 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:22 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 29 08:10:22 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 15.047s CPU time.
Nov 29 08:10:22 compute-0 systemd-machined[217781]: Machine qemu-28-instance-0000001c terminated.
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.241 256736 INFO nova.virt.libvirt.driver [-] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Instance destroyed successfully.
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.242 256736 DEBUG nova.objects.instance [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lazy-loading 'resources' on Instance uuid aee08d25-d8a2-48f8-ac6e-a5b99c377db1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:22 compute-0 neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12[301407]: [NOTICE]   (301411) : haproxy version is 2.8.14-c23fe91
Nov 29 08:10:22 compute-0 neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12[301407]: [NOTICE]   (301411) : path to executable is /usr/sbin/haproxy
Nov 29 08:10:22 compute-0 neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12[301407]: [WARNING]  (301411) : Exiting Master process...
Nov 29 08:10:22 compute-0 neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12[301407]: [ALERT]    (301411) : Current worker (301413) exited with code 143 (Terminated)
Nov 29 08:10:22 compute-0 neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12[301407]: [WARNING]  (301411) : All workers exited. Exiting... (0)
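The haproxy ALERT just above decodes mechanically: a shell-style exit status of 143 is 128 + 15, i.e. the worker was killed by SIGTERM, which fits the metadata agent tearing down the now-empty ovnmeta namespace. A one-line check:

```python
import signal

code = 143
assert code - 128 == signal.SIGTERM == 15
print(signal.Signals(code - 128).name)  # SIGTERM
```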
Nov 29 08:10:22 compute-0 systemd[1]: libpod-74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37.scope: Deactivated successfully.
Nov 29 08:10:22 compute-0 podman[302781]: 2025-11-29 08:10:22.258172137 +0000 UTC m=+0.056628445 container died 74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.266 256736 DEBUG nova.virt.libvirt.vif [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-1927056742',display_name='tempest-instance-1927056742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1927056742',id=28,image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkR+KC5GJgnm3VT/VgUqGwBYkXpgsUVnTAJ2kwI/941njtP/yeTjKM30rwlX1J6tojE3FLqtS//3vJORxbUyooTMIdJO3ey3s/JCS+TkLOmI4JkGss4sSpD1EttwPeg5A==',key_name='tempest-keypair-311340034',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='364f361ce7b54bc6a4799a29705c1d0a',ramdisk_id='',reservation_id='r-t5u7m80n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0e1e6d51-69c9-47e5-8ffd-bcf4bb434fae',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1467932066',owner_user_name='tempest-VolumesBackupsTest-1467932066-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3303bef652f040c9b42b7e6b8290911f',uuid=aee08d25-d8a2-48f8-ac6e-a5b99c377db1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.268 256736 DEBUG nova.network.os_vif_util [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Converting VIF {"id": "3e7651b6-5be6-447a-86e2-4009c6aac334", "address": "fa:16:3e:af:1b:ea", "network": {"id": "125cb0ae-5b9b-472c-a598-63b3f1d26e12", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1994544979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "364f361ce7b54bc6a4799a29705c1d0a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e7651b6-5b", "ovs_interfaceid": "3e7651b6-5be6-447a-86e2-4009c6aac334", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.270 256736 DEBUG nova.network.os_vif_util [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:1b:ea,bridge_name='br-int',has_traffic_filtering=True,id=3e7651b6-5be6-447a-86e2-4009c6aac334,network=Network(125cb0ae-5b9b-472c-a598-63b3f1d26e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e7651b6-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.271 256736 DEBUG os_vif [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:1b:ea,bridge_name='br-int',has_traffic_filtering=True,id=3e7651b6-5be6-447a-86e2-4009c6aac334,network=Network(125cb0ae-5b9b-472c-a598-63b3f1d26e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e7651b6-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.276 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.277 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e7651b6-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.282 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.285 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.289 256736 INFO os_vif [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:1b:ea,bridge_name='br-int',has_traffic_filtering=True,id=3e7651b6-5be6-447a-86e2-4009c6aac334,network=Network(125cb0ae-5b9b-472c-a598-63b3f1d26e12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e7651b6-5b')
Nov 29 08:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37-userdata-shm.mount: Deactivated successfully.
Nov 29 08:10:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-daf7ec41c485bfa12bcf90bb84a476395856b16d4a2d16cc6c84977a7139dc9d-merged.mount: Deactivated successfully.
Nov 29 08:10:22 compute-0 podman[302781]: 2025-11-29 08:10:22.301199014 +0000 UTC m=+0.099655332 container cleanup 74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 08:10:22 compute-0 systemd[1]: libpod-conmon-74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37.scope: Deactivated successfully.
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.321 256736 DEBUG nova.compute.manager [req-2fa0d87f-761c-44ec-b72d-4c8dd09db23c req-db6a7797-7cbd-4d36-b7cc-a059d77062a0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-vif-unplugged-3e7651b6-5be6-447a-86e2-4009c6aac334 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.321 256736 DEBUG oslo_concurrency.lockutils [req-2fa0d87f-761c-44ec-b72d-4c8dd09db23c req-db6a7797-7cbd-4d36-b7cc-a059d77062a0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.322 256736 DEBUG oslo_concurrency.lockutils [req-2fa0d87f-761c-44ec-b72d-4c8dd09db23c req-db6a7797-7cbd-4d36-b7cc-a059d77062a0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.322 256736 DEBUG oslo_concurrency.lockutils [req-2fa0d87f-761c-44ec-b72d-4c8dd09db23c req-db6a7797-7cbd-4d36-b7cc-a059d77062a0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.322 256736 DEBUG nova.compute.manager [req-2fa0d87f-761c-44ec-b72d-4c8dd09db23c req-db6a7797-7cbd-4d36-b7cc-a059d77062a0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] No waiting events found dispatching network-vif-unplugged-3e7651b6-5be6-447a-86e2-4009c6aac334 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.322 256736 DEBUG nova.compute.manager [req-2fa0d87f-761c-44ec-b72d-4c8dd09db23c req-db6a7797-7cbd-4d36-b7cc-a059d77062a0 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-vif-unplugged-3e7651b6-5be6-447a-86e2-4009c6aac334 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:10:22 compute-0 podman[302837]: 2025-11-29 08:10:22.381646814 +0000 UTC m=+0.056147080 container remove 74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.389 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0459b9-d418-4753-8a1e-dd2c1da34ce9]: (4, ('Sat Nov 29 08:10:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12 (74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37)\n74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37\nSat Nov 29 08:10:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12 (74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37)\n74c92f8fc9197db555ce05b2210564f8590b3d2c8d306f8eaf79653884731f37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.391 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa84c06-b075-407a-8279-2befa5842345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.392 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap125cb0ae-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.393 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:22 compute-0 kernel: tap125cb0ae-50: left promiscuous mode
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.420 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.424 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d980a027-752a-4672-a00a-3d90791ce6ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.444 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e4060dd7-1717-4bda-9689-08501b45415d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.446 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[35f9dbfa-8dc7-4c71-96e6-a7850f5a1715]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.452 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.452 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.473 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.476 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cb281395-83d1-4fd6-96c6-4fd359bcfe69]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610043, 'reachable_time': 20156, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302856, 'error': None, 'target': 'ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.479 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-125cb0ae-5b9b-472c-a598-63b3f1d26e12 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:10:22 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:22.479 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[86ca3e9e-aa94-4704-a452-71d2afb26e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d125cb0ae\x2d5b9b\x2d472c\x2da598\x2d63b3f1d26e12.mount: Deactivated successfully.
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.548 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.549 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.560 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.560 256736 INFO nova.compute.claims [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.646 256736 INFO nova.virt.libvirt.driver [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Deleting instance files /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1_del
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.647 256736 INFO nova.virt.libvirt.driver [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Deletion of /var/lib/nova/instances/aee08d25-d8a2-48f8-ac6e-a5b99c377db1_del complete
Nov 29 08:10:22 compute-0 ceph-mon[75050]: pgmap v2173: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 11 KiB/s rd, 3.3 MiB/s wr, 15 op/s
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.690 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.717 256736 INFO nova.compute.manager [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Took 0.72 seconds to destroy the instance on the hypervisor.
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.717 256736 DEBUG oslo.service.loopingcall [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.718 256736 DEBUG nova.compute.manager [-] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:10:22 compute-0 nova_compute[256729]: 2025-11-29 08:10:22.718 256736 DEBUG nova.network.neutron [-] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.034 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:23 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437661961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.120 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.128 256736 DEBUG nova.compute.provider_tree [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.146 256736 DEBUG nova.scheduler.client.report [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.177 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.178 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:10:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 45 op/s
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.231 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.231 256736 DEBUG nova.network.neutron [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.254 256736 INFO nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.277 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.332 256736 INFO nova.virt.block_device [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Booting with volume 9ca82a13-68c7-4eb8-b2f4-6410faf62051 at /dev/vda
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.484 256736 DEBUG os_brick.utils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.486 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.498 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.499 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[5a05192d-46b8-4856-814f-63b9a6ffb313]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.500 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.509 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.510 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a8b51d-4ad3-46cf-8f64-46234c6889c8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.511 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.521 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.521 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[f76292cf-6aa4-48a5-8ddf-300ce022cc16]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.522 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[1e59fe4d-a39c-40a9-80c3-e71b4a116004]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.523 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.557 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.560 256736 DEBUG os_brick.initiator.connectors.lightos [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.560 256736 DEBUG os_brick.initiator.connectors.lightos [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.560 256736 DEBUG os_brick.initiator.connectors.lightos [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.561 256736 DEBUG os_brick.utils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.561 256736 DEBUG nova.virt.block_device [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Updating existing volume attachment record: 12d4e903-2a83-4d5d-a3b8-9d376ee966f4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:10:23 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2437661961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:23 compute-0 nova_compute[256729]: 2025-11-29 08:10:23.751 256736 DEBUG nova.policy [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '981b7946a749412f90d3d8148d99486a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:10:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1848584527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.466 256736 DEBUG nova.compute.manager [req-437be2c6-590f-4d86-ab52-c60355d24a1b req-038293a2-4e29-4a61-bd8f-d99e86c07bf8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.466 256736 DEBUG oslo_concurrency.lockutils [req-437be2c6-590f-4d86-ab52-c60355d24a1b req-038293a2-4e29-4a61-bd8f-d99e86c07bf8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.467 256736 DEBUG oslo_concurrency.lockutils [req-437be2c6-590f-4d86-ab52-c60355d24a1b req-038293a2-4e29-4a61-bd8f-d99e86c07bf8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.467 256736 DEBUG oslo_concurrency.lockutils [req-437be2c6-590f-4d86-ab52-c60355d24a1b req-038293a2-4e29-4a61-bd8f-d99e86c07bf8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.468 256736 DEBUG nova.compute.manager [req-437be2c6-590f-4d86-ab52-c60355d24a1b req-038293a2-4e29-4a61-bd8f-d99e86c07bf8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] No waiting events found dispatching network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.468 256736 WARNING nova.compute.manager [req-437be2c6-590f-4d86-ab52-c60355d24a1b req-038293a2-4e29-4a61-bd8f-d99e86c07bf8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received unexpected event network-vif-plugged-3e7651b6-5be6-447a-86e2-4009c6aac334 for instance with vm_state active and task_state deleting.
Nov 29 08:10:24 compute-0 ceph-mon[75050]: pgmap v2174: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 45 op/s
Nov 29 08:10:24 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1848584527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.708 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.710 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.711 256736 INFO nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Creating image(s)
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.712 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.712 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Ensure instance console log exists: /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.713 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.713 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:24 compute-0 nova_compute[256729]: 2025-11-29 08:10:24.714 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 31 KiB/s rd, 9.4 MiB/s wr, 49 op/s
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.265 256736 DEBUG nova.network.neutron [-] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.286 256736 INFO nova.compute.manager [-] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Took 2.57 seconds to deallocate network for instance.
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.335 256736 DEBUG nova.compute.manager [req-502eab73-466b-4618-a0df-f213048627fb req-ef47cb30-7f04-4725-ab9a-a5470c26797f ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Received event network-vif-deleted-3e7651b6-5be6-447a-86e2-4009c6aac334 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.458 256736 DEBUG nova.network.neutron [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Successfully created port: 1691debb-0a81-4c08-b125-ba66d384f0d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.482 256736 INFO nova.compute.manager [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.527 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.527 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:25 compute-0 nova_compute[256729]: 2025-11-29 08:10:25.590 256736 DEBUG oslo_concurrency.processutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832408959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.014 256736 DEBUG oslo_concurrency.processutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.026 256736 DEBUG nova.compute.provider_tree [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.049 256736 DEBUG nova.scheduler.client.report [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.076 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.101 256736 INFO nova.scheduler.client.report [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Deleted allocations for instance aee08d25-d8a2-48f8-ac6e-a5b99c377db1
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.162 256736 DEBUG oslo_concurrency.lockutils [None req-62c99c67-46aa-4a3a-99b8-1e3761573e5d 3303bef652f040c9b42b7e6b8290911f 364f361ce7b54bc6a4799a29705c1d0a - - default default] Lock "aee08d25-d8a2-48f8-ac6e-a5b99c377db1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.281 256736 DEBUG nova.network.neutron [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Successfully updated port: 1691debb-0a81-4c08-b125-ba66d384f0d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.296 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.297 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquired lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.297 256736 DEBUG nova.network.neutron [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.467 256736 DEBUG nova.network.neutron [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.547 256736 DEBUG nova.compute.manager [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-changed-1691debb-0a81-4c08-b125-ba66d384f0d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.548 256736 DEBUG nova.compute.manager [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Refreshing instance network info cache due to event network-changed-1691debb-0a81-4c08-b125-ba66d384f0d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:26 compute-0 nova_compute[256729]: 2025-11-29 08:10:26.549 256736 DEBUG oslo_concurrency.lockutils [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:26 compute-0 ceph-mon[75050]: pgmap v2175: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 31 KiB/s rd, 9.4 MiB/s wr, 49 op/s
Nov 29 08:10:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/832408959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3671772463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 48 KiB/s rd, 9.4 MiB/s wr, 71 op/s
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.281 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.426 256736 DEBUG nova.network.neutron [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Updating instance_info_cache with network_info: [{"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.446 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Releasing lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.447 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Instance network_info: |[{"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.447 256736 DEBUG oslo_concurrency.lockutils [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.448 256736 DEBUG nova.network.neutron [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Refreshing network info cache for port 1691debb-0a81-4c08-b125-ba66d384f0d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.452 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Start _get_guest_xml network_info=[{"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9ca82a13-68c7-4eb8-b2f4-6410faf62051', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9ca82a13-68c7-4eb8-b2f4-6410faf62051', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'df037f63-8d47-48a2-ac4a-94fd2490dc6f', 'attached_at': '', 'detached_at': '', 'volume_id': '9ca82a13-68c7-4eb8-b2f4-6410faf62051', 'serial': '9ca82a13-68c7-4eb8-b2f4-6410faf62051'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '12d4e903-2a83-4d5d-a3b8-9d376ee966f4', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.457 256736 WARNING nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.462 256736 DEBUG nova.virt.libvirt.host [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.463 256736 DEBUG nova.virt.libvirt.host [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.468 256736 DEBUG nova.virt.libvirt.host [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.469 256736 DEBUG nova.virt.libvirt.host [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
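
The pair of probes above shows the host has no cgroups-v1 CPU controller but does expose one through cgroups v2. A hedged stand-alone equivalent of the v2 check, assuming the unified hierarchy is mounted at the conventional /sys/fs/cgroup path (illustrative, not nova's implementation):

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        try:
            with open(f"{root}/cgroup.controllers") as f:
                # The file holds a space-separated controller list,
                # e.g. "cpuset cpu io memory pids".
                return "cpu" in f.read().split()
        except FileNotFoundError:
            # No cgroup.controllers file: no v2 unified hierarchy here.
            return False

    print(has_cgroupsv2_cpu_controller())
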
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.469 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.470 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.470 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.470 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.471 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.471 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.471 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.472 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.472 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.472 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.473 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.473 256736 DEBUG nova.virt.hardware [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
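
With flavor and image limits and preferences all 0:0:0, any factorization of the vCPU count into sockets x cores x threads under the 65536 caps is admissible, and for the single-vCPU m1.nano flavor that leaves exactly one topology, 1:1:1. A toy enumeration of the same arithmetic (an illustration, not nova's _get_possible_cpu_topologies):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product is
        # exactly the requested vCPU count, within the given caps.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log
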
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.499 256736 DEBUG nova.storage.rbd_utils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.503 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Nov 29 08:10:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3671772463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 08:10:27 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Nov 29 08:10:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3409764419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:27 compute-0 nova_compute[256729]: 2025-11-29 08:10:27.990 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
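
The "Running cmd"/"returned: 0" pair above is oslo.concurrency's subprocess helper shelling out to the ceph CLI for the monitor map. A sketch of the same call, assuming the ceph binary and the client.openstack credentials referenced by /etc/ceph/ceph.conf are available on the host:

    import json

    from oslo_concurrency import processutils

    # processutils.execute() returns (stdout, stderr) and raises on a
    # non-zero exit status unless told otherwise.
    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    monmap = json.loads(out)
    # The mon entries carry the addresses nova later uses as RBD hosts.
    print([m["name"] for m in monmap.get("mons", [])])
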
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.036 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:28 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:10:28 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.161 256736 DEBUG os_brick.encryptors [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'e157e00f-8e09-44aa-b638-bb6d600e9411', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9ca82a13-68c7-4eb8-b2f4-6410faf62051', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9ca82a13-68c7-4eb8-b2f4-6410faf62051', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'df037f63-8d47-48a2-ac4a-94fd2490dc6f', 'attached_at': '', 'detached_at': '', 'volume_id': '9ca82a13-68c7-4eb8-b2f4-6410faf62051', 'serial': '9ca82a13-68c7-4eb8-b2f4-6410faf62051'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.164 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.186 256736 DEBUG barbicanclient.v1.secrets [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/e157e00f-8e09-44aa-b638-bb6d600e9411 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.187 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.221 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.221 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.248 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.249 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.274 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.274 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.373 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.374 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.400 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.401 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.428 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.428 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.452 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.453 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.478 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.478 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.500 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.500 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.523 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.524 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.550 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.550 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.570 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.571 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.596 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.596 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.619 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.620 256736 INFO barbicanclient.base [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/e157e00f-8e09-44aa-b638-bb6d600e9411
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.660 256736 DEBUG barbicanclient.client [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
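
The run of "Response status 200" lines above is barbicanclient fetching the LUKS key for volume 9ca82a13-68c7-4eb8-b2f4-6410faf62051 by secret href. A hedged sketch of that client call; the keystone credentials and auth URL below are placeholders, and only the barbican href comes from this log:

    from barbicanclient import client
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder keystone auth; real deployments use the nova service
    # credentials configured in nova.conf.
    auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                       username="svc", password="placeholder",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    barbican = client.Client(session=session.Session(auth=auth))

    href = ("https://barbican-internal.openstack.svc:9311/secrets/"
            "e157e00f-8e09-44aa-b638-bb6d600e9411")
    secret = barbican.secrets.get(href)  # metadata GET
    key = secret.payload                 # lazy GET of the raw key bytes
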
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.661 256736 DEBUG nova.virt.libvirt.host [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <volume>9ca82a13-68c7-4eb8-b2f4-6410faf62051</volume>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:10:28 compute-0 nova_compute[256729]: </secret>
Nov 29 08:10:28 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
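
Having fetched the key, nova registers the Secret XML above with libvirt so qemu can open the LUKS-encrypted volume. A sketch of the equivalent libvirt-python calls, assuming a local system libvirtd; the value bytes are a placeholder for the barbican payload:

    import libvirt

    SECRET_XML = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>9ca82a13-68c7-4eb8-b2f4-6410faf62051</volume>
      </usage>
    </secret>"""

    conn = libvirt.open("qemu:///system")
    secret = conn.secretDefineXML(SECRET_XML, 0)
    # Attach the key material to the secret object; libvirt stores it
    # separately from the XML definition.
    secret.setValue(b"placeholder key bytes from barbican", 0)
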
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.702 256736 DEBUG nova.virt.libvirt.vif [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-287812875',display_name='tempest-TestEncryptedCinderVolumes-server-287812875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-287812875',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxX+PgMAZBuezORyRZTTDnmEkagoQZ/wV6Wk3lwyGDgLxEz+dGqkv0uj7q6iE8ZUn85LQMW2zUhk36PiQ5C6rOrwp08h1M8Rqk3HOI0Jn+9lui32YElh0SXij5turDSPw==',key_name='tempest-TestEncryptedCinderVolumes-1704602409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-hplm6evg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:23Z,user_data=None,user_id='981b7946a749412f90d3d8148d99486a',uuid=df037f63-8d47-48a2-ac4a-94fd2490dc6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.703 256736 DEBUG nova.network.os_vif_util [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.705 256736 DEBUG nova.network.os_vif_util [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:83:76,bridge_name='br-int',has_traffic_filtering=True,id=1691debb-0a81-4c08-b125-ba66d384f0d2,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1691debb-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.707 256736 DEBUG nova.objects.instance [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'pci_devices' on Instance uuid df037f63-8d47-48a2-ac4a-94fd2490dc6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Nov 29 08:10:28 compute-0 ceph-mon[75050]: pgmap v2176: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 48 KiB/s rd, 9.4 MiB/s wr, 71 op/s
Nov 29 08:10:28 compute-0 ceph-mon[75050]: osdmap e436: 3 total, 3 up, 3 in
Nov 29 08:10:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3409764419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.724 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <uuid>df037f63-8d47-48a2-ac4a-94fd2490dc6f</uuid>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <name>instance-0000001d</name>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-287812875</nova:name>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:10:27</nova:creationTime>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:user uuid="981b7946a749412f90d3d8148d99486a">tempest-TestEncryptedCinderVolumes-541864957-project-member</nova:user>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:project uuid="062fa36b3fb745529eb64d4b5bb52af6">tempest-TestEncryptedCinderVolumes-541864957</nova:project>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <nova:port uuid="1691debb-0a81-4c08-b125-ba66d384f0d2">
Nov 29 08:10:28 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <system>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <entry name="serial">df037f63-8d47-48a2-ac4a-94fd2490dc6f</entry>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <entry name="uuid">df037f63-8d47-48a2-ac4a-94fd2490dc6f</entry>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </system>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <os>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </os>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <features>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </features>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config">
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </source>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-9ca82a13-68c7-4eb8-b2f4-6410faf62051">
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </source>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <serial>9ca82a13-68c7-4eb8-b2f4-6410faf62051</serial>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <encryption format="luks">
Nov 29 08:10:28 compute-0 nova_compute[256729]:         <secret type="passphrase" uuid="eeb52ec9-69cb-4637-b5d8-1308d0e7cffb"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       </encryption>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:b9:83:76"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <target dev="tap1691debb-0a"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/console.log" append="off"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <video>
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </video>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:10:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:10:28 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:10:28 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:10:28 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:10:28 compute-0 nova_compute[256729]: </domain>
Nov 29 08:10:28 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
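
Once _get_guest_xml has produced the <domain> document above, the libvirt driver hands it to libvirtd to define and boot the guest. A hedged sketch of that step with the libvirt-python bindings; the XML is read from an assumed file rather than repeated here:

    import libvirt

    # Assumed file holding the <domain type="kvm"> XML printed above.
    with open("instance-0000001d.xml") as f:
        guest_xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(guest_xml)  # persist the domain definition
    dom.createWithFlags(0)           # power on the guest
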
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.727 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Preparing to wait for external event network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.728 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.728 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.729 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
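
Before plugging the VIF, nova registers an event object under the per-instance "-events" lock so the neutron-driven network-vif-plugged callback has something to signal. A simplified analogue of that prepare-then-wait pattern; nova itself uses eventlet primitives rather than threading:

    import threading

    _events = {}
    _events_lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, name):
        # Register (or reuse) an event object before the action that
        # will eventually trigger it, so the signal cannot be missed.
        with _events_lock:
            return _events.setdefault((instance_uuid, name),
                                      threading.Event())

    ev = prepare_for_instance_event(
        "df037f63-8d47-48a2-ac4a-94fd2490dc6f",
        "network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2")
    # The spawn path later blocks on ev.wait(timeout=...) until the
    # external-event handler calls ev.set().
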
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.730 256736 DEBUG nova.virt.libvirt.vif [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-287812875',display_name='tempest-TestEncryptedCinderVolumes-server-287812875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-287812875',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxX+PgMAZBuezORyRZTTDnmEkagoQZ/wV6Wk3lwyGDgLxEz+dGqkv0uj7q6iE8ZUn85LQMW2zUhk36PiQ5C6rOrwp08h1M8Rqk3HOI0Jn+9lui32YElh0SXij5turDSPw==',key_name='tempest-TestEncryptedCinderVolumes-1704602409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-hplm6evg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:23Z,user_data=None,user_id='981b7946a749412f90d3d8148d99486a',uuid=df037f63-8d47-48a2-ac4a-94fd2490dc6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.731 256736 DEBUG nova.network.os_vif_util [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.732 256736 DEBUG nova.network.os_vif_util [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:83:76,bridge_name='br-int',has_traffic_filtering=True,id=1691debb-0a81-4c08-b125-ba66d384f0d2,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1691debb-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.733 256736 DEBUG os_vif [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:83:76,bridge_name='br-int',has_traffic_filtering=True,id=1691debb-0a81-4c08-b125-ba66d384f0d2,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1691debb-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.735 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.735 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.736 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.740 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.740 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1691debb-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.740 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1691debb-0a, col_values=(('external_ids', {'iface-id': '1691debb-0a81-4c08-b125-ba66d384f0d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:83:76', 'vm-uuid': 'df037f63-8d47-48a2-ac4a-94fd2490dc6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
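Those transactions are ordinary ovsdbapp commands against the local Open_vSwitch database: ensure the port exists on br-int, then stamp the Interface row's external_ids so ovn-controller can bind the logical port. A rough standalone equivalent, assuming ovsdbapp is installed and that the OVSDB socket lives at the path below (some hosts use /var/run/openvswitch instead):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Socket path is an assumption for this host.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap1691debb-0a', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap1691debb-0a',
            ('external_ids', {
                'iface-id': '1691debb-0a81-4c08-b125-ba66d384f0d2',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:b9:83:76',
                'vm-uuid': 'df037f63-8d47-48a2-ac4a-94fd2490dc6f'})))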
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.742 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:28 compute-0 NetworkManager[48962]: <info>  [1764403828.7433] manager: (tap1691debb-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.744 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:10:28 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.753 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.754 256736 INFO os_vif [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:83:76,bridge_name='br-int',has_traffic_filtering=True,id=1691debb-0a81-4c08-b125-ba66d384f0d2,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1691debb-0a')
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.825 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.825 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.825 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No VIF found with MAC fa:16:3e:b9:83:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.826 256736 INFO nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Using config drive
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.850 256736 DEBUG nova.storage.rbd_utils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
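The "does not exist" line is informational: rbd_utils probes for an existing image by trying to open it. A sketch of that probe with the python-rbd bindings, assuming the pool, client id, and image name from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            name = 'df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config'
            with rbd.Image(ioctx, name) as image:
                print('image exists, size:', image.size())
        except rbd.ImageNotFound:
            print('rbd image does not exist')  # the case logged here
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()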
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.993 256736 DEBUG nova.network.neutron [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Updated VIF entry in instance network info cache for port 1691debb-0a81-4c08-b125-ba66d384f0d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:28 compute-0 nova_compute[256729]: 2025-11-29 08:10:28.994 256736 DEBUG nova.network.neutron [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Updating instance_info_cache with network_info: [{"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.009 256736 DEBUG oslo_concurrency.lockutils [req-81791955-0fb5-4b95-a83d-0f832d3ddeff req-0ecdeb2b-0d6b-4e86-8056-59fe89d107c8 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.105 256736 INFO nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Creating config drive at /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/disk.config
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.114 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpokr3dv7h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 990 KiB/s rd, 9.9 MiB/s wr, 99 op/s
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.263 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpokr3dv7h" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.300 256736 DEBUG nova.storage.rbd_utils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.306 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/disk.config df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.492 256736 DEBUG oslo_concurrency.processutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/disk.config df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.494 256736 INFO nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Deleting local config drive /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f/disk.config because it was imported into RBD.
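Taken together, 08:10:29.114 through .494 are the usual config-drive flow on an RBD-backed host: build the ISO locally with mkisofs, import it into the vms pool, then delete the local copy. Condensed into oslo.concurrency calls, with arguments copied from the log (the /tmp staging directory is whatever tmpdir Nova created for the metadata files):

    import os
    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           'df037f63-8d47-48a2-ac4a-94fd2490dc6f/disk.config')

    # 1. Build the config-drive ISO from the staged metadata directory.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpokr3dv7h')

    # 2. Import it into the 'vms' pool as a format-2 RBD image.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        'df037f63-8d47-48a2-ac4a-94fd2490dc6f_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')

    # 3. Drop the local copy, as the preceding log line records.
    os.remove(iso)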
Nov 29 08:10:29 compute-0 kernel: tap1691debb-0a: entered promiscuous mode
Nov 29 08:10:29 compute-0 NetworkManager[48962]: <info>  [1764403829.5721] manager: (tap1691debb-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 08:10:29 compute-0 ovn_controller[153383]: 2025-11-29T08:10:29Z|00282|binding|INFO|Claiming lport 1691debb-0a81-4c08-b125-ba66d384f0d2 for this chassis.
Nov 29 08:10:29 compute-0 ovn_controller[153383]: 2025-11-29T08:10:29Z|00283|binding|INFO|1691debb-0a81-4c08-b125-ba66d384f0d2: Claiming fa:16:3e:b9:83:76 10.100.0.10
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.573 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.582 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:83:76 10.100.0.10'], port_security=['fa:16:3e:b9:83:76 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'df037f63-8d47-48a2-ac4a-94fd2490dc6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dda88d46-9162-4e7c-bb47-793ac4133966', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e6edb27-9f1c-444b-901c-a9a15234db1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=767afc55-24b1-431b-aeef-ddbbabf80029, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=1691debb-0a81-4c08-b125-ba66d384f0d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.585 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 1691debb-0a81-4c08-b125-ba66d384f0d2 in datapath dda88d46-9162-4e7c-bb47-793ac4133966 bound to our chassis
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.587 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:10:29 compute-0 ovn_controller[153383]: 2025-11-29T08:10:29Z|00284|binding|INFO|Setting lport 1691debb-0a81-4c08-b125-ba66d384f0d2 ovn-installed in OVS
Nov 29 08:10:29 compute-0 ovn_controller[153383]: 2025-11-29T08:10:29Z|00285|binding|INFO|Setting lport 1691debb-0a81-4c08-b125-ba66d384f0d2 up in Southbound
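The metadata agent reacted at 08:10:29.582 because it had registered a row event against the southbound Port_Binding table; ovn-controller's claim updates that row, the event matches, and metadata provisioning starts. A stripped-down version of the pattern, assuming a recent ovsdbapp (the real agent's match logic also handles additional chassis, port types, and more):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row becomes bound to this chassis."""

        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only when the row gains a chassis it did not have before.
            return (hasattr(old, 'chassis') and not old.chassis
                    and row.chassis
                    and row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)

    # Registered on the southbound IDL, roughly:
    #   sb_idl.notify_handler.watch_event(
    #       PortBindingUpdatedEvent('compute-0.ctlplane.example.com'))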
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.595 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.600 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.607 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cdbf3768-7b65-4ec1-8d98-2b16a2710464]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.608 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdda88d46-91 in ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.610 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdda88d46-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.610 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb18037-77f8-46ab-8d07-2358b0cad0ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.612 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b5eecd43-a9f4-4e04-a4c0-65e0d867b1e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 systemd-machined[217781]: New machine qemu-29-instance-0000001d.
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.633 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[c54551d7-1114-40a6-9890-f5ecaecc8804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Nov 29 08:10:29 compute-0 systemd-udevd[303025]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.646 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e1b578-9488-4ab0-ad04-9cd248aa1867]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 NetworkManager[48962]: <info>  [1764403829.6571] device (tap1691debb-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:10:29 compute-0 NetworkManager[48962]: <info>  [1764403829.6581] device (tap1691debb-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.680 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[50577685-00bb-4aca-994d-c063d6875150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 NetworkManager[48962]: <info>  [1764403829.6860] manager: (tapdda88d46-90): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.684 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[af6e1fde-5693-4995-8f29-a06b8b133720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.720 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b6d293-841b-4a79-aa94-5186580fc262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.723 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[53ffa671-fe49-483d-b9ee-9f9967d7c821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ceph-mon[75050]: osdmap e437: 3 total, 3 up, 3 in
Nov 29 08:10:29 compute-0 NetworkManager[48962]: <info>  [1764403829.7539] device (tapdda88d46-90): carrier: link connected
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.760 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[df5a45e8-7e09-4844-9f76-697cad75bad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.780 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5392afeb-acde-497e-b6a0-f4d5693bc5ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdda88d46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6b:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614481, 'reachable_time': 40901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303055, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.788 256736 DEBUG nova.compute.manager [req-3fbb16b5-c973-4b7d-8fc7-c7f7ebb66239 req-04ee0cd8-ecfc-4797-a360-f7873f0d2420 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.788 256736 DEBUG oslo_concurrency.lockutils [req-3fbb16b5-c973-4b7d-8fc7-c7f7ebb66239 req-04ee0cd8-ecfc-4797-a360-f7873f0d2420 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.789 256736 DEBUG oslo_concurrency.lockutils [req-3fbb16b5-c973-4b7d-8fc7-c7f7ebb66239 req-04ee0cd8-ecfc-4797-a360-f7873f0d2420 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.790 256736 DEBUG oslo_concurrency.lockutils [req-3fbb16b5-c973-4b7d-8fc7-c7f7ebb66239 req-04ee0cd8-ecfc-4797-a360-f7873f0d2420 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.790 256736 DEBUG nova.compute.manager [req-3fbb16b5-c973-4b7d-8fc7-c7f7ebb66239 req-04ee0cd8-ecfc-4797-a360-f7873f0d2420 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Processing event network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
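The acquire/release pairs above are oslo.concurrency named locks: nova serializes event delivery per instance by locking "<uuid>-events" around each queue operation. The primitive itself is small; a sketch with the lock name copied from the log (the function body is a placeholder, not nova's actual bookkeeping):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'df037f63-8d47-48a2-ac4a-94fd2490dc6f'

    def pop_instance_event(event_name):
        # In-process named lock, same pattern as nova's _pop_event helper.
        with lockutils.lock('%s-events' % INSTANCE_UUID):
            # ... look up and remove the waiter registered for event_name ...
            return None  # placeholder for the dequeued waiter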
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.795 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b4aa5332-3bc4-4b89-8a4f-0785d0bd9195]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:6bec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614481, 'tstamp': 614481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303056, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.815 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[626c9af2-2071-4a1c-b13d-ecec129494a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdda88d46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6b:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614481, 'reachable_time': 40901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303057, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.859 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[11429a41-b2e9-4207-95fd-52f5999fcea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.931 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb270f1-e884-4cfc-85df-1b4cbef16dbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.933 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdda88d46-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.933 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.934 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdda88d46-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:29 compute-0 kernel: tapdda88d46-90: entered promiscuous mode
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.937 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:29 compute-0 NetworkManager[48962]: <info>  [1764403829.9401] manager: (tapdda88d46-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.945 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdda88d46-90, col_values=(('external_ids', {'iface-id': 'bf50d5e3-cc9a-491e-8a5a-4b199a4df39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:29 compute-0 ovn_controller[153383]: 2025-11-29T08:10:29Z|00286|binding|INFO|Releasing lport bf50d5e3-cc9a-491e-8a5a-4b199a4df39f from this chassis (sb_readonly=0)
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.949 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.949 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.950 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[97355976-b098-487d-ac2b-316804e30b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.951 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:10:29 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:29.952 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'env', 'PROCESS_TAG=haproxy-dda88d46-9162-4e7c-bb47-793ac4133966', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dda88d46-9162-4e7c-bb47-793ac4133966.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
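Stripped of rootwrap, that command line means: run haproxy inside the ovnmeta- namespace with a PROCESS_TAG in its environment, reading the config printed above. A bare-bones equivalent follows; it needs root, and in this deployment the same invocation is additionally wrapped in a podman container, as the lines below show:

    import subprocess

    netns = 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'dda88d46-9162-4e7c-bb47-793ac4133966.conf')

    # Equivalent of the rootwrap call: haproxy daemonizes itself per the
    # 'daemon' directive in the generated config.
    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-dda88d46-9162-4e7c-bb47-793ac4133966',
         'haproxy', '-f', cfg],
        check=True)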
Nov 29 08:10:29 compute-0 nova_compute[256729]: 2025-11-29 08:10:29.967 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:30 compute-0 podman[303124]: 2025-11-29 08:10:30.411523278 +0000 UTC m=+0.075341871 container create 26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 08:10:30 compute-0 podman[303124]: 2025-11-29 08:10:30.37641941 +0000 UTC m=+0.040238053 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:10:30 compute-0 systemd[1]: Started libpod-conmon-26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817.scope.
Nov 29 08:10:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b42f91bebff8fe8bd7ac00e4433e4a37ffca2d51d0288390bc13bff0690532f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:10:30 compute-0 podman[303124]: 2025-11-29 08:10:30.536258061 +0000 UTC m=+0.200076714 container init 26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:10:30 compute-0 podman[303124]: 2025-11-29 08:10:30.543371276 +0000 UTC m=+0.207189869 container start 26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:10:30 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[303140]: [NOTICE]   (303144) : New worker (303146) forked
Nov 29 08:10:30 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[303140]: [NOTICE]   (303144) : Loading success.
Nov 29 08:10:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Nov 29 08:10:30 compute-0 ceph-mon[75050]: pgmap v2179: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 990 KiB/s rd, 9.9 MiB/s wr, 99 op/s
Nov 29 08:10:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 08:10:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Nov 29 08:10:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 62 op/s
Nov 29 08:10:31 compute-0 ceph-mon[75050]: osdmap e438: 3 total, 3 up, 3 in
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.148 256736 DEBUG nova.compute.manager [req-b240c27a-9314-4492-a79f-3952affce837 req-d0a07270-ada4-433e-ba1f-297a6a5792bf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.148 256736 DEBUG oslo_concurrency.lockutils [req-b240c27a-9314-4492-a79f-3952affce837 req-d0a07270-ada4-433e-ba1f-297a6a5792bf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.148 256736 DEBUG oslo_concurrency.lockutils [req-b240c27a-9314-4492-a79f-3952affce837 req-d0a07270-ada4-433e-ba1f-297a6a5792bf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.148 256736 DEBUG oslo_concurrency.lockutils [req-b240c27a-9314-4492-a79f-3952affce837 req-d0a07270-ada4-433e-ba1f-297a6a5792bf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.149 256736 DEBUG nova.compute.manager [req-b240c27a-9314-4492-a79f-3952affce837 req-d0a07270-ada4-433e-ba1f-297a6a5792bf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] No waiting events found dispatching network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.149 256736 WARNING nova.compute.manager [req-b240c27a-9314-4492-a79f-3952affce837 req-d0a07270-ada4-433e-ba1f-297a6a5792bf ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received unexpected event network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 for instance with vm_state building and task_state spawning.
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.331 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.333 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403832.3306198, df037f63-8d47-48a2-ac4a-94fd2490dc6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.334 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] VM Started (Lifecycle Event)
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.338 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.341 256736 INFO nova.virt.libvirt.driver [-] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Instance spawned successfully.
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.342 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.357 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.365 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
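"DB power_state: 0, VM power_state: 1" compares nova's stored enum (0 = NOSTATE) with what libvirt reports for the freshly started guest (1 = RUNNING; nova's RUNNING and libvirt's VIR_DOMAIN_RUNNING are both 1). A small probe with the libvirt-python bindings, assuming the local qemu system URI:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('df037f63-8d47-48a2-ac4a-94fd2490dc6f')
        state, reason = dom.state()
        print(dom.name(), state)  # 1 == libvirt.VIR_DOMAIN_RUNNING
    finally:
        conn.close()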
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.370 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.371 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.372 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.373 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.373 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.374 256736 DEBUG nova.virt.libvirt.driver [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.400 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.401 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403832.3321202, df037f63-8d47-48a2-ac4a-94fd2490dc6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.402 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] VM Paused (Lifecycle Event)
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.433 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.439 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403832.33723, df037f63-8d47-48a2-ac4a-94fd2490dc6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.440 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] VM Resumed (Lifecycle Event)
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.448 256736 INFO nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Took 7.74 seconds to spawn the instance on the hypervisor.
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.449 256736 DEBUG nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.460 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.465 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.496 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.525 256736 INFO nova.compute.manager [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Took 10.01 seconds to build instance.
Nov 29 08:10:32 compute-0 nova_compute[256729]: 2025-11-29 08:10:32.545 256736 DEBUG oslo_concurrency.lockutils [None req-e57d866e-dc6a-457a-bb5f-53f87f5c0f5d 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:32 compute-0 ceph-mon[75050]: pgmap v2181: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 62 op/s
Nov 29 08:10:33 compute-0 nova_compute[256729]: 2025-11-29 08:10:33.039 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.2 MiB/s wr, 143 op/s
Nov 29 08:10:33 compute-0 nova_compute[256729]: 2025-11-29 08:10:33.742 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Nov 29 08:10:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Nov 29 08:10:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Nov 29 08:10:34 compute-0 ceph-mon[75050]: pgmap v2182: 305 pgs: 305 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.2 MiB/s wr, 143 op/s
Nov 29 08:10:34 compute-0 ceph-mon[75050]: osdmap e439: 3 total, 3 up, 3 in
Nov 29 08:10:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.9 MiB/s wr, 180 op/s
Nov 29 08:10:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:10:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:10:36 compute-0 nova_compute[256729]: 2025-11-29 08:10:36.512 256736 DEBUG nova.compute.manager [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-changed-1691debb-0a81-4c08-b125-ba66d384f0d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:36 compute-0 nova_compute[256729]: 2025-11-29 08:10:36.512 256736 DEBUG nova.compute.manager [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Refreshing instance network info cache due to event network-changed-1691debb-0a81-4c08-b125-ba66d384f0d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:10:36 compute-0 nova_compute[256729]: 2025-11-29 08:10:36.513 256736 DEBUG oslo_concurrency.lockutils [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:10:36 compute-0 nova_compute[256729]: 2025-11-29 08:10:36.513 256736 DEBUG oslo_concurrency.lockutils [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:10:36 compute-0 nova_compute[256729]: 2025-11-29 08:10:36.513 256736 DEBUG nova.network.neutron [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Refreshing network info cache for port 1691debb-0a81-4c08-b125-ba66d384f0d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:10:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Nov 29 08:10:36 compute-0 ceph-mon[75050]: pgmap v2184: 305 pgs: 305 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.9 MiB/s wr, 180 op/s
Nov 29 08:10:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Nov 29 08:10:36 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Nov 29 08:10:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 2.3 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.9 MiB/s wr, 290 op/s
Nov 29 08:10:37 compute-0 nova_compute[256729]: 2025-11-29 08:10:37.239 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403822.2382708, aee08d25-d8a2-48f8-ac6e-a5b99c377db1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:10:37 compute-0 nova_compute[256729]: 2025-11-29 08:10:37.240 256736 INFO nova.compute.manager [-] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] VM Stopped (Lifecycle Event)
Nov 29 08:10:37 compute-0 nova_compute[256729]: 2025-11-29 08:10:37.258 256736 DEBUG nova.compute.manager [None req-26caee7b-3857-4c46-b028-831ef1c95299 - - - - - -] [instance: aee08d25-d8a2-48f8-ac6e-a5b99c377db1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:10:37 compute-0 ceph-mon[75050]: osdmap e440: 3 total, 3 up, 3 in
Nov 29 08:10:38 compute-0 nova_compute[256729]: 2025-11-29 08:10:38.044 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-0 nova_compute[256729]: 2025-11-29 08:10:38.128 256736 DEBUG nova.network.neutron [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Updated VIF entry in instance network info cache for port 1691debb-0a81-4c08-b125-ba66d384f0d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:10:38 compute-0 nova_compute[256729]: 2025-11-29 08:10:38.129 256736 DEBUG nova.network.neutron [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Updating instance_info_cache with network_info: [{"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:38 compute-0 nova_compute[256729]: 2025-11-29 08:10:38.152 256736 DEBUG oslo_concurrency.lockutils [req-601e4536-7455-406b-9c72-eae6b2a1c0ed req-47976c8f-4542-47d4-880a-367c845e4d22 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-df037f63-8d47-48a2-ac4a-94fd2490dc6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:10:38 compute-0 nova_compute[256729]: 2025-11-29 08:10:38.745 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:38 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/617808774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:38 compute-0 ceph-mon[75050]: pgmap v2186: 305 pgs: 305 active+clean; 2.3 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 6.5 MiB/s rd, 2.9 MiB/s wr, 290 op/s
Nov 29 08:10:38 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/617808774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.4 MiB/s wr, 244 op/s
Nov 29 08:10:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e440 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Nov 29 08:10:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Nov 29 08:10:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Nov 29 08:10:41 compute-0 ceph-mon[75050]: pgmap v2187: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 5.3 MiB/s rd, 2.4 MiB/s wr, 244 op/s
Nov 29 08:10:41 compute-0 ceph-mon[75050]: osdmap e441: 3 total, 3 up, 3 in
Nov 29 08:10:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.3 KiB/s wr, 163 op/s
Nov 29 08:10:42 compute-0 ceph-mon[75050]: pgmap v2189: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.3 KiB/s wr, 163 op/s
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.095530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403842095624, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 673, "num_deletes": 253, "total_data_size": 707829, "memory_usage": 721520, "flush_reason": "Manual Compaction"}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403842104474, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 699912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38790, "largest_seqno": 39462, "table_properties": {"data_size": 696345, "index_size": 1411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8490, "raw_average_key_size": 19, "raw_value_size": 689026, "raw_average_value_size": 1606, "num_data_blocks": 62, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403805, "oldest_key_time": 1764403805, "file_creation_time": 1764403842, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 8994 microseconds, and 4949 cpu microseconds.
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.104534) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 699912 bytes OK
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.104555) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.106377) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.106401) EVENT_LOG_v1 {"time_micros": 1764403842106393, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.106421) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 704233, prev total WAL file size 704233, number of live WAL files 2.
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.107431) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(683KB)], [80(10MB)]
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403842107481, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 11214194, "oldest_snapshot_seqno": -1}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7065 keys, 9447051 bytes, temperature: kUnknown
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403842265349, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9447051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9397484, "index_size": 30831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 180096, "raw_average_key_size": 25, "raw_value_size": 9268236, "raw_average_value_size": 1311, "num_data_blocks": 1219, "num_entries": 7065, "num_filter_entries": 7065, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403842, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.265662) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9447051 bytes
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.331194) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.0 rd, 59.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.0 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(29.5) write-amplify(13.5) OK, records in: 7585, records dropped: 520 output_compression: NoCompression
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.331228) EVENT_LOG_v1 {"time_micros": 1764403842331214, "job": 46, "event": "compaction_finished", "compaction_time_micros": 157931, "compaction_time_cpu_micros": 50608, "output_level": 6, "num_output_files": 1, "total_output_size": 9447051, "num_input_records": 7585, "num_output_records": 7065, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403842331695, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403842336791, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.107315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.336921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.336925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.336927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.336929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:42 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:10:42.336931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:10:43 compute-0 nova_compute[256729]: 2025-11-29 08:10:43.048 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 2.5 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 MiB/s wr, 213 op/s
Nov 29 08:10:43 compute-0 nova_compute[256729]: 2025-11-29 08:10:43.748 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:44 compute-0 ceph-mon[75050]: pgmap v2190: 305 pgs: 305 active+clean; 2.5 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 MiB/s wr, 213 op/s
Nov 29 08:10:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 2.6 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 578 KiB/s rd, 40 MiB/s wr, 163 op/s
Nov 29 08:10:45 compute-0 podman[303163]: 2025-11-29 08:10:45.749319331 +0000 UTC m=+0.108141504 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 08:10:45 compute-0 podman[303164]: 2025-11-29 08:10:45.772327376 +0000 UTC m=+0.104318309 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 08:10:45 compute-0 podman[303162]: 2025-11-29 08:10:45.778270821 +0000 UTC m=+0.138437032 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 08:10:46 compute-0 ovn_controller[153383]: 2025-11-29T08:10:46Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:83:76 10.100.0.10
Nov 29 08:10:46 compute-0 ovn_controller[153383]: 2025-11-29T08:10:46Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:83:76 10.100.0.10
Nov 29 08:10:46 compute-0 ceph-mon[75050]: pgmap v2191: 305 pgs: 305 active+clean; 2.6 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 578 KiB/s rd, 40 MiB/s wr, 163 op/s
Nov 29 08:10:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 2.9 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 525 KiB/s rd, 68 MiB/s wr, 239 op/s
Nov 29 08:10:48 compute-0 nova_compute[256729]: 2025-11-29 08:10:48.048 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:48 compute-0 ceph-mon[75050]: pgmap v2192: 305 pgs: 305 active+clean; 2.9 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 525 KiB/s rd, 68 MiB/s wr, 239 op/s
Nov 29 08:10:48 compute-0 nova_compute[256729]: 2025-11-29 08:10:48.750 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 3.1 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 733 KiB/s rd, 87 MiB/s wr, 257 op/s
Nov 29 08:10:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:50 compute-0 ceph-mon[75050]: pgmap v2193: 305 pgs: 305 active+clean; 3.1 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 733 KiB/s rd, 87 MiB/s wr, 257 op/s
Nov 29 08:10:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 3.1 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 666 KiB/s rd, 79 MiB/s wr, 234 op/s
Nov 29 08:10:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Nov 29 08:10:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Nov 29 08:10:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.486 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.487 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.487 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.488 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.488 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.489 256736 INFO nova.compute.manager [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Terminating instance
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.490 256736 DEBUG nova.compute.manager [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:10:52 compute-0 ceph-mon[75050]: pgmap v2194: 305 pgs: 305 active+clean; 3.1 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 666 KiB/s rd, 79 MiB/s wr, 234 op/s
Nov 29 08:10:52 compute-0 ceph-mon[75050]: osdmap e442: 3 total, 3 up, 3 in
Nov 29 08:10:52 compute-0 kernel: tap1691debb-0a (unregistering): left promiscuous mode
Nov 29 08:10:52 compute-0 NetworkManager[48962]: <info>  [1764403852.5501] device (tap1691debb-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:10:52 compute-0 ovn_controller[153383]: 2025-11-29T08:10:52Z|00287|binding|INFO|Releasing lport 1691debb-0a81-4c08-b125-ba66d384f0d2 from this chassis (sb_readonly=0)
Nov 29 08:10:52 compute-0 ovn_controller[153383]: 2025-11-29T08:10:52Z|00288|binding|INFO|Setting lport 1691debb-0a81-4c08-b125-ba66d384f0d2 down in Southbound
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.559 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 ovn_controller[153383]: 2025-11-29T08:10:52Z|00289|binding|INFO|Removing iface tap1691debb-0a ovn-installed in OVS
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.563 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.570 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:83:76 10.100.0.10'], port_security=['fa:16:3e:b9:83:76 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'df037f63-8d47-48a2-ac4a-94fd2490dc6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dda88d46-9162-4e7c-bb47-793ac4133966', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e6edb27-9f1c-444b-901c-a9a15234db1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=767afc55-24b1-431b-aeef-ddbbabf80029, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=1691debb-0a81-4c08-b125-ba66d384f0d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.574 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 1691debb-0a81-4c08-b125-ba66d384f0d2 in datapath dda88d46-9162-4e7c-bb47-793ac4133966 unbound from our chassis
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.577 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dda88d46-9162-4e7c-bb47-793ac4133966, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.579 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[80ae535e-c8fe-4f1b-b59e-ee1f3072d129]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.580 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 namespace which is not needed anymore
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.604 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 29 08:10:52 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.743s CPU time.
Nov 29 08:10:52 compute-0 systemd-machined[217781]: Machine qemu-29-instance-0000001d terminated.
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.718 256736 INFO nova.virt.libvirt.driver [-] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Instance destroyed successfully.
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.721 256736 DEBUG nova.objects.instance [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'resources' on Instance uuid df037f63-8d47-48a2-ac4a-94fd2490dc6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:10:52 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[303140]: [NOTICE]   (303144) : haproxy version is 2.8.14-c23fe91
Nov 29 08:10:52 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[303140]: [NOTICE]   (303144) : path to executable is /usr/sbin/haproxy
Nov 29 08:10:52 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[303140]: [WARNING]  (303144) : Exiting Master process...
Nov 29 08:10:52 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[303140]: [ALERT]    (303144) : Current worker (303146) exited with code 143 (Terminated)
Nov 29 08:10:52 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[303140]: [WARNING]  (303144) : All workers exited. Exiting... (0)
Nov 29 08:10:52 compute-0 systemd[1]: libpod-26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817.scope: Deactivated successfully.
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.735 256736 DEBUG nova.virt.libvirt.vif [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-287812875',display_name='tempest-TestEncryptedCinderVolumes-server-287812875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-287812875',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxX+PgMAZBuezORyRZTTDnmEkagoQZ/wV6Wk3lwyGDgLxEz+dGqkv0uj7q6iE8ZUn85LQMW2zUhk36PiQ5C6rOrwp08h1M8Rqk3HOI0Jn+9lui32YElh0SXij5turDSPw==',key_name='tempest-TestEncryptedCinderVolumes-1704602409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-hplm6evg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:32Z,user_data=None,user_id='981b7946a749412f90d3d8148d99486a',uuid=df037f63-8d47-48a2-ac4a-94fd2490dc6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.735 256736 DEBUG nova.network.os_vif_util [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "1691debb-0a81-4c08-b125-ba66d384f0d2", "address": "fa:16:3e:b9:83:76", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1691debb-0a", "ovs_interfaceid": "1691debb-0a81-4c08-b125-ba66d384f0d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.736 256736 DEBUG nova.network.os_vif_util [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:83:76,bridge_name='br-int',has_traffic_filtering=True,id=1691debb-0a81-4c08-b125-ba66d384f0d2,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1691debb-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.737 256736 DEBUG os_vif [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:83:76,bridge_name='br-int',has_traffic_filtering=True,id=1691debb-0a81-4c08-b125-ba66d384f0d2,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1691debb-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.740 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 podman[303249]: 2025-11-29 08:10:52.740499231 +0000 UTC m=+0.047692317 container died 26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.741 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1691debb-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.742 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.744 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.748 256736 INFO os_vif [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:83:76,bridge_name='br-int',has_traffic_filtering=True,id=1691debb-0a81-4c08-b125-ba66d384f0d2,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1691debb-0a')
Nov 29 08:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b42f91bebff8fe8bd7ac00e4433e4a37ffca2d51d0288390bc13bff0690532f-merged.mount: Deactivated successfully.
Nov 29 08:10:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817-userdata-shm.mount: Deactivated successfully.
Nov 29 08:10:52 compute-0 podman[303249]: 2025-11-29 08:10:52.783107907 +0000 UTC m=+0.090300983 container cleanup 26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 08:10:52 compute-0 systemd[1]: libpod-conmon-26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817.scope: Deactivated successfully.
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.859 256736 DEBUG nova.compute.manager [req-cc5998c7-bacc-4c70-a880-fd070f40146a req-b07ae583-a2ee-42fb-81ff-8d7ebad11a18 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-vif-unplugged-1691debb-0a81-4c08-b125-ba66d384f0d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.860 256736 DEBUG oslo_concurrency.lockutils [req-cc5998c7-bacc-4c70-a880-fd070f40146a req-b07ae583-a2ee-42fb-81ff-8d7ebad11a18 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.860 256736 DEBUG oslo_concurrency.lockutils [req-cc5998c7-bacc-4c70-a880-fd070f40146a req-b07ae583-a2ee-42fb-81ff-8d7ebad11a18 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.860 256736 DEBUG oslo_concurrency.lockutils [req-cc5998c7-bacc-4c70-a880-fd070f40146a req-b07ae583-a2ee-42fb-81ff-8d7ebad11a18 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.860 256736 DEBUG nova.compute.manager [req-cc5998c7-bacc-4c70-a880-fd070f40146a req-b07ae583-a2ee-42fb-81ff-8d7ebad11a18 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] No waiting events found dispatching network-vif-unplugged-1691debb-0a81-4c08-b125-ba66d384f0d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.861 256736 DEBUG nova.compute.manager [req-cc5998c7-bacc-4c70-a880-fd070f40146a req-b07ae583-a2ee-42fb-81ff-8d7ebad11a18 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-vif-unplugged-1691debb-0a81-4c08-b125-ba66d384f0d2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:10:52 compute-0 podman[303306]: 2025-11-29 08:10:52.861083779 +0000 UTC m=+0.050660679 container remove 26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.868 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[477b428e-606b-4d4f-a8b0-12e3390fcb33]: (4, ('Sat Nov 29 08:10:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 (26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817)\n26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817\nSat Nov 29 08:10:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 (26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817)\n26b18a15403de53c2995223a8395c3ab3b6e63ac37a575945def568496e10817\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.870 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[4e82aed6-91bf-4809-b6ea-b93f16e8630a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.874 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdda88d46-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.876 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 kernel: tapdda88d46-90: left promiscuous mode
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.895 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.898 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[f26bdc3e-d593-4429-9f6a-ee0ea6a26c27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.914 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[cdcdd137-8ae0-439d-9551-05e6db8d41e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.915 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e3725412-1983-4e47-8297-524c3edd99e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.932 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[614007ca-cf71-488d-b419-7574e91785a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614473, 'reachable_time': 16179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303325, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
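The reply above is a pyroute2-style netlink link dump ('event': 'RTM_NEWLINK') of the loopback device inside the ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 namespace, taken just before the namespace is torn down. A minimal sketch of an equivalent query with pyroute2 (the library neutron's ip_lib uses here), valid only while the namespace still exists:

    # Dump link state inside a named network namespace with pyroute2.
    # The namespace name is taken from the log record above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966') as ns:
        for link in ns.get_links():
            # Each message carries the same IFLA_* attribute lists seen above.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link.get_attr('IFLA_OPERSTATE'))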
Nov 29 08:10:52 compute-0 systemd[1]: run-netns-ovnmeta\x2ddda88d46\x2d9162\x2d4e7c\x2dbb47\x2d793ac4133966.mount: Deactivated successfully.
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.938 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:10:52 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:52.938 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8e24c1-3b69-49a9-b031-0fcb98991ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
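The remove_netns call and its (4, None) reply show the oslo.privsep round trip: the unprivileged agent proxies the function call to a privileged daemon process and receives a (status, return_value) tuple back. A minimal sketch of the pattern, with a hypothetical context name (the real one is defined under neutron.privileged):

    # Sketch of the oslo.privsep pattern behind the "privsep: reply[...]" lines.
    from oslo_privsep import capabilities, priv_context

    # Hypothetical privsep context, for illustration only.
    default = priv_context.PrivContext(
        'demo',
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN, capabilities.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def remove_netns(name):
        # Runs as root inside the privsep daemon; the caller only sees
        # a (status, return_value) reply such as the (4, None) above.
        ...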
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.947 256736 INFO nova.virt.libvirt.driver [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Deleting instance files /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f_del
Nov 29 08:10:52 compute-0 nova_compute[256729]: 2025-11-29 08:10:52.948 256736 INFO nova.virt.libvirt.driver [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Deletion of /var/lib/nova/instances/df037f63-8d47-48a2-ac4a-94fd2490dc6f_del complete
Nov 29 08:10:53 compute-0 nova_compute[256729]: 2025-11-29 08:10:53.050 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.7 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 793 KiB/s rd, 91 MiB/s wr, 310 op/s
Nov 29 08:10:53 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:53.313 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:10:53 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:53.314 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:10:53 compute-0 nova_compute[256729]: 2025-11-29 08:10:53.315 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:53 compute-0 nova_compute[256729]: 2025-11-29 08:10:53.320 256736 INFO nova.compute.manager [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Took 0.83 seconds to destroy the instance on the hypervisor.
Nov 29 08:10:53 compute-0 nova_compute[256729]: 2025-11-29 08:10:53.321 256736 DEBUG oslo.service.loopingcall [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:10:53 compute-0 nova_compute[256729]: 2025-11-29 08:10:53.322 256736 DEBUG nova.compute.manager [-] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:10:53 compute-0 nova_compute[256729]: 2025-11-29 08:10:53.322 256736 DEBUG nova.network.neutron [-] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:10:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1168466118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1168466118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1168466118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:53 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1168466118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.385 256736 DEBUG nova.network.neutron [-] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.413 256736 INFO nova.compute.manager [-] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Took 1.09 seconds to deallocate network for instance.
Nov 29 08:10:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Nov 29 08:10:54 compute-0 ceph-mon[75050]: pgmap v2196: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.7 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 793 KiB/s rd, 91 MiB/s wr, 310 op/s
Nov 29 08:10:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Nov 29 08:10:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.583 256736 INFO nova.compute.manager [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Took 0.17 seconds to detach 1 volumes for instance.
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.635 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.636 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.700 256736 DEBUG oslo_concurrency.processutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.938 256736 DEBUG nova.compute.manager [req-ea3a2658-e865-4e6c-b909-112f8efe01f1 req-b993d12d-79a6-4fed-9fc8-2992cafe28c5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.938 256736 DEBUG oslo_concurrency.lockutils [req-ea3a2658-e865-4e6c-b909-112f8efe01f1 req-b993d12d-79a6-4fed-9fc8-2992cafe28c5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.939 256736 DEBUG oslo_concurrency.lockutils [req-ea3a2658-e865-4e6c-b909-112f8efe01f1 req-b993d12d-79a6-4fed-9fc8-2992cafe28c5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.939 256736 DEBUG oslo_concurrency.lockutils [req-ea3a2658-e865-4e6c-b909-112f8efe01f1 req-b993d12d-79a6-4fed-9fc8-2992cafe28c5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.939 256736 DEBUG nova.compute.manager [req-ea3a2658-e865-4e6c-b909-112f8efe01f1 req-b993d12d-79a6-4fed-9fc8-2992cafe28c5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] No waiting events found dispatching network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.940 256736 WARNING nova.compute.manager [req-ea3a2658-e865-4e6c-b909-112f8efe01f1 req-b993d12d-79a6-4fed-9fc8-2992cafe28c5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received unexpected event network-vif-plugged-1691debb-0a81-4c08-b125-ba66d384f0d2 for instance with vm_state deleted and task_state None.
Nov 29 08:10:54 compute-0 nova_compute[256729]: 2025-11-29 08:10:54.940 256736 DEBUG nova.compute.manager [req-ea3a2658-e865-4e6c-b909-112f8efe01f1 req-b993d12d-79a6-4fed-9fc8-2992cafe28c5 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Received event network-vif-deleted-1691debb-0a81-4c08-b125-ba66d384f0d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:10:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:10:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079931217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:55 compute-0 nova_compute[256729]: 2025-11-29 08:10:55.182 256736 DEBUG oslo_concurrency.processutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
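Nova gathers Ceph pool usage for its disk inventory by shelling out through oslo.concurrency rather than binding librados here; the exact command line is in the log. A minimal sketch of the same call, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf permits it:

    # Run the same `ceph df` the resource tracker logs above.
    # processutils.execute returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats']['total_avail_bytes'])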
Nov 29 08:10:55 compute-0 nova_compute[256729]: 2025-11-29 08:10:55.189 256736 DEBUG nova.compute.provider_tree [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:10:55 compute-0 nova_compute[256729]: 2025-11-29 08:10:55.213 256736 DEBUG nova.scheduler.client.report [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
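That inventory fixes the node's schedulable capacity: Placement computes (total - reserved) * allocation_ratio per resource class, so this host offers 32 VCPUs, 7168 MB of RAM and roughly 52.2 GB of disk. The arithmetic, checked directly:

    # Effective capacity Placement derives from the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 1))   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2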
Nov 29 08:10:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:10:55 compute-0 nova_compute[256729]: 2025-11-29 08:10:55.239 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:10:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 425 KiB/s rd, 52 MiB/s wr, 248 op/s
Nov 29 08:10:55 compute-0 nova_compute[256729]: 2025-11-29 08:10:55.259 256736 INFO nova.scheduler.client.report [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Deleted allocations for instance df037f63-8d47-48a2-ac4a-94fd2490dc6f
Nov 29 08:10:55 compute-0 nova_compute[256729]: 2025-11-29 08:10:55.319 256736 DEBUG oslo_concurrency.lockutils [None req-5ae20924-0ac2-4894-8202-d5a0dafbf6c9 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "df037f63-8d47-48a2-ac4a-94fd2490dc6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
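The Acquiring / acquired / released triples threaded through the nova lines above are oslo.concurrency's synchronized decorator reporting its lock lifecycle. A minimal sketch of the pattern, with a hypothetical function standing in for the resource-tracker and terminate paths logged here:

    # The lock lifecycle behind the "Acquiring lock ... acquired ... released"
    # log lines. Without external=True this is an in-process lock only.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # Hypothetical stand-in: the body runs with the lock held, and the
        # decorator logs the waited/held durations seen above on entry and exit.
        ...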
Nov 29 08:10:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Nov 29 08:10:55 compute-0 ceph-mon[75050]: osdmap e443: 3 total, 3 up, 3 in
Nov 29 08:10:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1079931217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:10:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Nov 29 08:10:55 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Nov 29 08:10:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:10:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1840167922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:10:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1840167922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:56 compute-0 ceph-mon[75050]: pgmap v2198: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 2.4 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 425 KiB/s rd, 52 MiB/s wr, 248 op/s
Nov 29 08:10:56 compute-0 ceph-mon[75050]: osdmap e444: 3 total, 3 up, 3 in
Nov 29 08:10:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1840167922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:10:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1840167922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:10:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 2.3 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 284 KiB/s rd, 37 MiB/s wr, 374 op/s
Nov 29 08:10:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Nov 29 08:10:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Nov 29 08:10:57 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Nov 29 08:10:57 compute-0 nova_compute[256729]: 2025-11-29 08:10:57.805 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:58 compute-0 nova_compute[256729]: 2025-11-29 08:10:58.052 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:10:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:10:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399101067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Nov 29 08:10:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Nov 29 08:10:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Nov 29 08:10:58 compute-0 ceph-mon[75050]: pgmap v2200: 305 pgs: 305 active+clean; 2.3 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 284 KiB/s rd, 37 MiB/s wr, 374 op/s
Nov 29 08:10:58 compute-0 ceph-mon[75050]: osdmap e445: 3 total, 3 up, 3 in
Nov 29 08:10:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/399101067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:10:59 compute-0 nova_compute[256729]: 2025-11-29 08:10:59.144 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:10:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 112 KiB/s rd, 6.8 KiB/s wr, 140 op/s
Nov 29 08:10:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Nov 29 08:10:59 compute-0 ceph-mon[75050]: osdmap e446: 3 total, 3 up, 3 in
Nov 29 08:10:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Nov 29 08:10:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:59.790 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:59.791 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:10:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:10:59.791 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Nov 29 08:11:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Nov 29 08:11:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Nov 29 08:11:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177104205' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177104205' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:00 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:00.317 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
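This transaction is the agent acknowledging nb_cfg 26 back to OVN by stamping its own Chassis_Private row, closing the loop opened by the SB_Global update matched at 08:10:53 and the "Delaying updating chassis table for 7 seconds" line. A minimal sketch with ovsdbapp, where sb_idl is a hypothetical name for an already-connected southbound API object:

    # Re-issue the DbSetCommand from the log through an ovsdbapp connection.
    # sb_idl is assumed to be an established ovsdbapp southbound API object.
    sb_idl.db_set(
        'Chassis_Private',
        'df234f2c-4343-4c91-861d-13d184c56aa0',   # record UUID from the log
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),
    ).execute(check_error=True)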
Nov 29 08:11:00 compute-0 ceph-mon[75050]: pgmap v2203: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 112 KiB/s rd, 6.8 KiB/s wr, 140 op/s
Nov 29 08:11:00 compute-0 ceph-mon[75050]: osdmap e447: 3 total, 3 up, 3 in
Nov 29 08:11:00 compute-0 ceph-mon[75050]: osdmap e448: 3 total, 3 up, 3 in
Nov 29 08:11:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1177104205' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1177104205' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 24 KiB/s rd, 2.5 KiB/s wr, 35 op/s
Nov 29 08:11:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Nov 29 08:11:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Nov 29 08:11:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Nov 29 08:11:02 compute-0 ceph-mon[75050]: pgmap v2206: 305 pgs: 305 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 24 KiB/s rd, 2.5 KiB/s wr, 35 op/s
Nov 29 08:11:02 compute-0 ceph-mon[75050]: osdmap e449: 3 total, 3 up, 3 in
Nov 29 08:11:02 compute-0 nova_compute[256729]: 2025-11-29 08:11:02.809 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:03 compute-0 nova_compute[256729]: 2025-11-29 08:11:03.054 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4255504118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:03 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4255504118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 1.7 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 122 KiB/s rd, 9.2 KiB/s wr, 198 op/s
Nov 29 08:11:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4255504118' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4255504118' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:04 compute-0 ceph-mon[75050]: pgmap v2208: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 1.7 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 122 KiB/s rd, 9.2 KiB/s wr, 198 op/s
Nov 29 08:11:04 compute-0 nova_compute[256729]: 2025-11-29 08:11:04.886 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:04 compute-0 nova_compute[256729]: 2025-11-29 08:11:04.887 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:04 compute-0 nova_compute[256729]: 2025-11-29 08:11:04.933 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 08:11:05 compute-0 nova_compute[256729]: 2025-11-29 08:11:05.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:05 compute-0 nova_compute[256729]: 2025-11-29 08:11:05.187 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:05 compute-0 nova_compute[256729]: 2025-11-29 08:11:05.187 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:05 compute-0 nova_compute[256729]: 2025-11-29 08:11:05.198 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 08:11:05 compute-0 nova_compute[256729]: 2025-11-29 08:11:05.198 256736 INFO nova.compute.claims [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Claim successful on node compute-0.ctlplane.example.com
Nov 29 08:11:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 939 MiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 168 op/s
Nov 29 08:11:05 compute-0 nova_compute[256729]: 2025-11-29 08:11:05.527 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:11:05
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['.mgr', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 08:11:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:11:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630752243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.004 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.011 256736 DEBUG nova.compute.provider_tree [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.030 256736 DEBUG nova.scheduler.client.report [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.060 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.061 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.121 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.123 256736 DEBUG nova.network.neutron [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.147 256736 INFO nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.169 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.227 256736 INFO nova.virt.block_device [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Booting with volume 1447a403-936a-439b-837a-05ee34b38c45 at /dev/vda
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.340 256736 DEBUG nova.policy [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '981b7946a749412f90d3d8148d99486a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.363 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.407 256736 DEBUG os_brick.utils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.409 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.429 266745 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.429 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[c34f39f2-8809-4be8-a0b5-d0deb0989f90]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.431 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.443 266745 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.444 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b97c7f-758f-41ce-9459-9bab1f9aeba3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:f8ddf59f2518', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.446 266745 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.456 266745 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.456 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1f9fbe-f885-43cb-914b-14180f82621a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.459 266745 DEBUG oslo.privsep.daemon [-] privsep: reply[9218de16-1607-43e3-913a-f1cbcfc12666]: (4, 'a4431209-b14d-4d8f-894a-1aed0bd2dae7') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.460 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.491 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.497 256736 DEBUG os_brick.initiator.connectors.lightos [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.497 256736 DEBUG os_brick.initiator.connectors.lightos [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.498 256736 DEBUG os_brick.initiator.connectors.lightos [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.499 256736 DEBUG os_brick.utils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:f8ddf59f2518', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': 'a4431209-b14d-4d8f-894a-1aed0bd2dae7', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
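The ==>/<== pair brackets os_brick's connector-property collection: everything Cinder needs (iSCSI initiator IQN, NVMe host NQN, multipath flags) to choose a transport for the volume attachment updated in the next line. A minimal sketch of the same call, using the arguments the trace shows being passed in:

    # Gather the connector properties logged in the ==>/<== trace above.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    print(props['initiator'], props['nqn'])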
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.500 256736 DEBUG nova.virt.block_device [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Updating existing volume attachment record: 14ce5ca8-4b8a-47c6-a3e8-9c953e152cf4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 08:11:06 compute-0 nova_compute[256729]: 2025-11-29 08:11:06.556 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:06 compute-0 ceph-mon[75050]: pgmap v2209: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 939 MiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.5 KiB/s wr, 168 op/s
Nov 29 08:11:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1630752243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:11:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:07 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3809677105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 271 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 4.8 KiB/s wr, 158 op/s
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.588 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.592 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.593 256736 INFO nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Creating image(s)
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.594 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.595 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Ensure instance console log exists: /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.596 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.596 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.597 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.626 256736 DEBUG nova.network.neutron [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Successfully created port: 503caeb9-24dd-41d1-bcb9-da6866a4b3cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 08:11:07 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3809677105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.717 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403852.7163775, df037f63-8d47-48a2-ac4a-94fd2490dc6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.718 256736 INFO nova.compute.manager [-] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] VM Stopped (Lifecycle Event)
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.741 256736 DEBUG nova.compute.manager [None req-614209d5-62ab-45c5-9d9b-d8438fb8c79b - - - - - -] [instance: df037f63-8d47-48a2-ac4a-94fd2490dc6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:07 compute-0 nova_compute[256729]: 2025-11-29 08:11:07.812 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:08 compute-0 nova_compute[256729]: 2025-11-29 08:11:08.057 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:08 compute-0 nova_compute[256729]: 2025-11-29 08:11:08.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:08 compute-0 nova_compute[256729]: 2025-11-29 08:11:08.147 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:08 compute-0 ceph-mon[75050]: pgmap v2210: 305 pgs: 305 active+clean; 271 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 4.8 KiB/s wr, 158 op/s
Nov 29 08:11:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722773953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722773953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 271 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 4.1 KiB/s wr, 134 op/s
Nov 29 08:11:09 compute-0 nova_compute[256729]: 2025-11-29 08:11:09.525 256736 DEBUG nova.network.neutron [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Successfully updated port: 503caeb9-24dd-41d1-bcb9-da6866a4b3cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 08:11:09 compute-0 nova_compute[256729]: 2025-11-29 08:11:09.543 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:09 compute-0 nova_compute[256729]: 2025-11-29 08:11:09.544 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquired lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:09 compute-0 nova_compute[256729]: 2025-11-29 08:11:09.544 256736 DEBUG nova.network.neutron [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 08:11:09 compute-0 nova_compute[256729]: 2025-11-29 08:11:09.627 256736 DEBUG nova.compute.manager [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-changed-503caeb9-24dd-41d1-bcb9-da6866a4b3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:09 compute-0 nova_compute[256729]: 2025-11-29 08:11:09.628 256736 DEBUG nova.compute.manager [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Refreshing instance network info cache due to event network-changed-503caeb9-24dd-41d1-bcb9-da6866a4b3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:11:09 compute-0 nova_compute[256729]: 2025-11-29 08:11:09.628 256736 DEBUG oslo_concurrency.lockutils [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3722773953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3722773953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.220 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.220 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.221 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.221 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.221 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:11:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Nov 29 08:11:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Nov 29 08:11:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Nov 29 08:11:10 compute-0 nova_compute[256729]: 2025-11-29 08:11:10.487 256736 DEBUG nova.network.neutron [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 08:11:10 compute-0 ceph-mon[75050]: pgmap v2211: 305 pgs: 305 active+clean; 271 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 4.1 KiB/s wr, 134 op/s
Nov 29 08:11:10 compute-0 ceph-mon[75050]: osdmap e450: 3 total, 3 up, 3 in
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.170 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.171 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.171 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.172 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.173 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 271 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.8 KiB/s wr, 125 op/s
Nov 29 08:11:11 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:11 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204346532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.619 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.750 256736 DEBUG nova.network.neutron [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Updating instance_info_cache with network_info: [{"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:11 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/204346532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.831 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.832 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4323MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.832 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.832 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.879 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Releasing lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.879 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Instance network_info: |[{"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.880 256736 DEBUG oslo_concurrency.lockutils [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.880 256736 DEBUG nova.network.neutron [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Refreshing network info cache for port 503caeb9-24dd-41d1-bcb9-da6866a4b3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.885 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Start _get_guest_xml network_info=[{"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'delete_on_termination': False, 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1447a403-936a-439b-837a-05ee34b38c45', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1447a403-936a-439b-837a-05ee34b38c45', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '11d37006-0804-487e-93f1-217ea49e9a51', 'attached_at': '', 'detached_at': '', 'volume_id': '1447a403-936a-439b-837a-05ee34b38c45', 'serial': '1447a403-936a-439b-837a-05ee34b38c45'}, 'device_type': 'disk', 'disk_bus': 'virtio', 'mount_device': '/dev/vda', 'guest_format': None, 'attachment_id': '14ce5ca8-4b8a-47c6-a3e8-9c953e152cf4', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.889 256736 WARNING nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.897 256736 DEBUG nova.virt.libvirt.host [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.897 256736 DEBUG nova.virt.libvirt.host [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.905 256736 DEBUG nova.virt.libvirt.host [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.905 256736 DEBUG nova.virt.libvirt.host [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.906 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.906 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:44:59Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='167fbd13-6c14-4e01-870e-509b9a2d9831',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.907 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.907 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.907 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.908 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.908 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.908 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.909 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.909 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.909 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.910 256736 DEBUG nova.virt.hardware [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.936 256736 DEBUG nova.storage.rbd_utils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 11d37006-0804-487e-93f1-217ea49e9a51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.939 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.997 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Instance 11d37006-0804-487e-93f1-217ea49e9a51 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.998 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:11:11 compute-0 nova_compute[256729]: 2025-11-29 08:11:11.999 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.078 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:11:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3436591909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.390 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:12 compute-0 sudo[303463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:12 compute-0 sudo[303463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:12 compute-0 sudo[303463]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:12 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:12 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113966863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.522 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.530 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:11:12 compute-0 sudo[303488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:12 compute-0 sudo[303488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:12 compute-0 sudo[303488]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.555 256736 DEBUG os_brick.encryptors [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Using volume encryption metadata '{'encryption_key_id': 'd27a38a6-1e21-467f-8c95-db88b488c164', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1447a403-936a-439b-837a-05ee34b38c45', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1447a403-936a-439b-837a-05ee34b38c45', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '11d37006-0804-487e-93f1-217ea49e9a51', 'attached_at': '', 'detached_at': '', 'volume_id': '1447a403-936a-439b-837a-05ee34b38c45', 'serial': '1447a403-936a-439b-837a-05ee34b38c45'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.557 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.579 256736 DEBUG barbicanclient.v1.secrets [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/d27a38a6-1e21-467f-8c95-db88b488c164 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.580 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.601 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:11:12 compute-0 sudo[303515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:12 compute-0 sudo[303515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:12 compute-0 sudo[303515]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.648 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.649 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.670 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.671 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:12 compute-0 sudo[303540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:11:12 compute-0 sudo[303540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.693 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.694 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.723 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.723 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.744 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.745 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.771 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.772 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 ceph-mon[75050]: pgmap v2213: 305 pgs: 305 active+clean; 271 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.8 KiB/s wr, 125 op/s
Nov 29 08:11:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3436591909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:11:12 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3113966863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.806 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.807 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.814 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.842 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.843 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.869 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.871 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.910 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.911 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.940 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.941 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.967 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:12 compute-0 nova_compute[256729]: 2025-11-29 08:11:12.967 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.059 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.071 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.071 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.101 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.102 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.120 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.121 256736 INFO barbicanclient.base [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Calculated Secrets uuid ref: secrets/d27a38a6-1e21-467f-8c95-db88b488c164
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.143 256736 DEBUG barbicanclient.client [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.144 256736 DEBUG nova.virt.libvirt.host [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <usage type="volume">
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <volume>1447a403-936a-439b-837a-05ee34b38c45</volume>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </usage>
Nov 29 08:11:13 compute-0 nova_compute[256729]: </secret>
Nov 29 08:11:13 compute-0 nova_compute[256729]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.203 256736 DEBUG nova.virt.libvirt.vif [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1949949001',display_name='tempest-TestEncryptedCinderVolumes-server-1949949001',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1949949001',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxX+PgMAZBuezORyRZTTDnmEkagoQZ/wV6Wk3lwyGDgLxEz+dGqkv0uj7q6iE8ZUn85LQMW2zUhk36PiQ5C6rOrwp08h1M8Rqk3HOI0Jn+9lui32YElh0SXij5turDSPw==',key_name='tempest-TestEncryptedCinderVolumes-1704602409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-jq90o5e9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:06Z,user_data=None,user_id='981b7946a749412f90d3d8148d99486a',uuid=11d37006-0804-487e-93f1-217ea49e9a51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.204 256736 DEBUG nova.network.os_vif_util [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.205 256736 DEBUG nova.network.os_vif_util [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:8f:05,bridge_name='br-int',has_traffic_filtering=True,id=503caeb9-24dd-41d1-bcb9-da6866a4b3cd,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap503caeb9-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.207 256736 DEBUG nova.objects.instance [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11d37006-0804-487e-93f1-217ea49e9a51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 409 B/s wr, 42 op/s
Nov 29 08:11:13 compute-0 sudo[303540]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.305 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] End _get_guest_xml xml=<domain type="kvm">
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <uuid>11d37006-0804-487e-93f1-217ea49e9a51</uuid>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <name>instance-0000001e</name>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <memory>131072</memory>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <vcpu>1</vcpu>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <metadata>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1949949001</nova:name>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <nova:creationTime>2025-11-29 08:11:11</nova:creationTime>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <nova:flavor name="m1.nano">
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:memory>128</nova:memory>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:disk>1</nova:disk>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:swap>0</nova:swap>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:ephemeral>0</nova:ephemeral>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:vcpus>1</nova:vcpus>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </nova:flavor>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <nova:owner>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:user uuid="981b7946a749412f90d3d8148d99486a">tempest-TestEncryptedCinderVolumes-541864957-project-member</nova:user>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:project uuid="062fa36b3fb745529eb64d4b5bb52af6">tempest-TestEncryptedCinderVolumes-541864957</nova:project>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </nova:owner>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <nova:ports>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <nova:port uuid="503caeb9-24dd-41d1-bcb9-da6866a4b3cd">
Nov 29 08:11:13 compute-0 nova_compute[256729]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:         </nova:port>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </nova:ports>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </nova:instance>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </metadata>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <sysinfo type="smbios">
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <system>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <entry name="manufacturer">RDO</entry>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <entry name="product">OpenStack Compute</entry>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <entry name="serial">11d37006-0804-487e-93f1-217ea49e9a51</entry>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <entry name="uuid">11d37006-0804-487e-93f1-217ea49e9a51</entry>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <entry name="family">Virtual Machine</entry>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </system>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </sysinfo>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <os>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <boot dev="hd"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <smbios mode="sysinfo"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </os>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <features>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <acpi/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <apic/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <vmcoreinfo/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </features>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <clock offset="utc">
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <timer name="pit" tickpolicy="delay"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <timer name="hpet" present="no"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </clock>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <cpu mode="host-model" match="exact">
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <topology sockets="1" cores="1" threads="1"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </cpu>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   <devices>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <disk type="network" device="cdrom">
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <driver type="raw" cache="none"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <source protocol="rbd" name="vms/11d37006-0804-487e-93f1-217ea49e9a51_disk.config">
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </source>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <target dev="sda" bus="sata"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <disk type="network" device="disk">
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <source protocol="rbd" name="volumes/volume-1447a403-936a-439b-837a-05ee34b38c45">
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <host name="192.168.122.100" port="6789"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </source>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <auth username="openstack">
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <secret type="ceph" uuid="14ff1f30-5059-58f1-9a23-69871bb275a1"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </auth>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <target dev="vda" bus="virtio"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <serial>1447a403-936a-439b-837a-05ee34b38c45</serial>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <encryption format="luks">
Nov 29 08:11:13 compute-0 nova_compute[256729]:         <secret type="passphrase" uuid="2b81b405-481d-4a07-b92d-8227976e9b70"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       </encryption>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </disk>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <interface type="ethernet">
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <mac address="fa:16:3e:d2:8f:05"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <driver name="vhost" rx_queue_size="512"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <mtu size="1442"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <target dev="tap503caeb9-24"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </interface>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <serial type="pty">
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <log file="/var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/console.log" append="off"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </serial>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <video>
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <model type="virtio"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </video>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <input type="tablet" bus="usb"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <rng model="virtio">
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <backend model="random">/dev/urandom</backend>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </rng>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="pci" model="pcie-root-port"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <controller type="usb" index="0"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     <memballoon model="virtio">
Nov 29 08:11:13 compute-0 nova_compute[256729]:       <stats period="10"/>
Nov 29 08:11:13 compute-0 nova_compute[256729]:     </memballoon>
Nov 29 08:11:13 compute-0 nova_compute[256729]:   </devices>
Nov 29 08:11:13 compute-0 nova_compute[256729]: </domain>
Nov 29 08:11:13 compute-0 nova_compute[256729]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.307 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Preparing to wait for external event network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.307 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.307 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.307 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.308 256736 DEBUG nova.virt.libvirt.vif [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1949949001',display_name='tempest-TestEncryptedCinderVolumes-server-1949949001',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1949949001',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxX+PgMAZBuezORyRZTTDnmEkagoQZ/wV6Wk3lwyGDgLxEz+dGqkv0uj7q6iE8ZUn85LQMW2zUhk36PiQ5C6rOrwp08h1M8Rqk3HOI0Jn+9lui32YElh0SXij5turDSPw==',key_name='tempest-TestEncryptedCinderVolumes-1704602409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-jq90o5e9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:06Z,user_data=None,user_id='981b7946a749412f90d3d8148d99486a',uuid=11d37006-0804-487e-93f1-217ea49e9a51,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.308 256736 DEBUG nova.network.os_vif_util [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.309 256736 DEBUG nova.network.os_vif_util [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:8f:05,bridge_name='br-int',has_traffic_filtering=True,id=503caeb9-24dd-41d1-bcb9-da6866a4b3cd,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap503caeb9-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.309 256736 DEBUG os_vif [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:8f:05,bridge_name='br-int',has_traffic_filtering=True,id=503caeb9-24dd-41d1-bcb9-da6866a4b3cd,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap503caeb9-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.310 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.310 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.311 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.313 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.313 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap503caeb9-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.314 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap503caeb9-24, col_values=(('external_ids', {'iface-id': '503caeb9-24dd-41d1-bcb9-da6866a4b3cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:8f:05', 'vm-uuid': '11d37006-0804-487e-93f1-217ea49e9a51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.315 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-0 NetworkManager[48962]: <info>  [1764403873.3171] manager: (tap503caeb9-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.318 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.324 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.326 256736 INFO os_vif [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:8f:05,bridge_name='br-int',has_traffic_filtering=True,id=503caeb9-24dd-41d1-bcb9-da6866a4b3cd,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap503caeb9-24')
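The plug that just succeeded is three idempotent OVSDB operations, visible in the ovsdbapp transaction lines above: ensure br-int exists, add the tap port, and stamp the Interface row's external_ids with the iface-id and attached-mac that ovn-controller matches against its southbound port bindings. A rough standalone equivalent with ovsdbapp, assuming the default local OVSDB socket (a sketch, not os-vif's actual plugin code):

    # Sketch of the logged AddBridge/AddPort/DbSet transaction via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {'iface-id': '503caeb9-24dd-41d1-bcb9-da6866a4b3cd',
                    'iface-status': 'active',
                    'attached-mac': 'fa:16:3e:d2:8f:05',
                    'vm-uuid': '11d37006-0804-487e-93f1-217ea49e9a51'}

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(ovs.add_port('br-int', 'tap503caeb9-24', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap503caeb9-24',
                           ('external_ids', external_ids)))

The "Transaction caused no change" line above is the may_exist=True add_br resolving to a no-op, since br-int is already there.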
Nov 29 08:11:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:11:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:11:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:11:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:11:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5616d89c-ced8-48b6-8c99-0d93f378b1c2 does not exist
Nov 29 08:11:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 0dbb4d6c-1088-4b08-9a9d-347d7bd0afb1 does not exist
Nov 29 08:11:13 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c13f2038-2440-4f75-83b8-32702e123646 does not exist
Nov 29 08:11:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:11:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:11:13 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:11:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
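Interleaved with the spawn, cephadm's mgr keeps dispatching JSON mon commands (config generate-minimal-conf, auth get, osd tree), which the mon audits above. The same dispatch can be reproduced from python-rados, assuming a readable /etc/ceph/ceph.conf and keyring (sketch only):

    # Sketch: send one of the audited mon commands through python-rados.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        if ret == 0:
            print(outbuf.decode())  # minimal conf: fsid plus mon host list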
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.392 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.393 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.394 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] No VIF found with MAC fa:16:3e:d2:8f:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.395 256736 INFO nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Using config drive
Nov 29 08:11:13 compute-0 sudo[303599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:13 compute-0 sudo[303599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:13 compute-0 sudo[303599]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.434 256736 DEBUG nova.storage.rbd_utils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 11d37006-0804-487e-93f1-217ea49e9a51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:11:13 compute-0 sudo[303642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:13 compute-0 sudo[303642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:13 compute-0 sudo[303642]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:13 compute-0 sudo[303667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:13 compute-0 sudo[303667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:13 compute-0 sudo[303667]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:13 compute-0 sudo[303692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:11:13 compute-0 sudo[303692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.688 256736 DEBUG nova.network.neutron [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Updated VIF entry in instance network info cache for port 503caeb9-24dd-41d1-bcb9-da6866a4b3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.688 256736 DEBUG nova.network.neutron [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Updating instance_info_cache with network_info: [{"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.707 256736 DEBUG oslo_concurrency.lockutils [req-dbd5ca1a-c3eb-4df0-9fc9-f59da8bb309c req-0938b503-97b5-4df7-909a-0023febfeb61 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 08:11:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:11:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:11:13 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.947 256736 INFO nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Creating config drive at /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/disk.config
Nov 29 08:11:13 compute-0 nova_compute[256729]: 2025-11-29 08:11:13.956 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptmzvyw52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.091 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptmzvyw52" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:14 compute-0 podman[303760]: 2025-11-29 08:11:14.115007308 +0000 UTC m=+0.078943689 container create 46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kalam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.134 256736 DEBUG nova.storage.rbd_utils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] rbd image 11d37006-0804-487e-93f1-217ea49e9a51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.139 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/disk.config 11d37006-0804-487e-93f1-217ea49e9a51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:14 compute-0 systemd[1]: Started libpod-conmon-46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078.scope.
Nov 29 08:11:14 compute-0 podman[303760]: 2025-11-29 08:11:14.0817398 +0000 UTC m=+0.045676241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:14 compute-0 podman[303760]: 2025-11-29 08:11:14.22776022 +0000 UTC m=+0.191696611 container init 46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kalam, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 08:11:14 compute-0 podman[303760]: 2025-11-29 08:11:14.237496289 +0000 UTC m=+0.201432640 container start 46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kalam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:11:14 compute-0 podman[303760]: 2025-11-29 08:11:14.241381206 +0000 UTC m=+0.205317567 container attach 46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kalam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:11:14 compute-0 elastic_kalam[303796]: 167 167
Nov 29 08:11:14 compute-0 systemd[1]: libpod-46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078.scope: Deactivated successfully.
Nov 29 08:11:14 compute-0 podman[303760]: 2025-11-29 08:11:14.246640081 +0000 UTC m=+0.210576432 container died 46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7b4ce0add56378fbc0becd33d7d37fb5215fd207917fce6ee34b4c5d7fcfe04-merged.mount: Deactivated successfully.
Nov 29 08:11:14 compute-0 podman[303760]: 2025-11-29 08:11:14.323952824 +0000 UTC m=+0.287889165 container remove 46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 08:11:14 compute-0 systemd[1]: libpod-conmon-46c281e243489a098d061b6035c069bc2765303ead193e143b8a618834ae2078.scope: Deactivated successfully.
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.338 256736 DEBUG oslo_concurrency.processutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/disk.config 11d37006-0804-487e-93f1-217ea49e9a51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.339 256736 INFO nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Deleting local config drive /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51/disk.config because it was imported into RBD.
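The config-drive round trip in the last few nova lines is: build the ISO locally with mkisofs, rbd import it into the vms pool (matching the <source name="vms/..._disk.config"> cdrom in the domain XML), then delete the local copy. The subprocess calls below mirror the logged commands; the staging path is the tempdir nova rendered (here /tmp/tmptmzvyw52):

    # Sketch of the logged config-drive flow: mkisofs, rbd import, local delete.
    import os
    import subprocess

    uuid = '11d37006-0804-487e-93f1-217ea49e9a51'
    iso = '/var/lib/nova/instances/%s/disk.config' % uuid
    staging = '/tmp/tmptmzvyw52'  # nova's rendered metadata tree

    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                    staging], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '%s_disk.config' % uuid, '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.remove(iso)  # the guest reads it over rbd from here on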
Nov 29 08:11:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Nov 29 08:11:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Nov 29 08:11:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Nov 29 08:11:14 compute-0 kernel: tap503caeb9-24: entered promiscuous mode
Nov 29 08:11:14 compute-0 NetworkManager[48962]: <info>  [1764403874.4006] manager: (tap503caeb9-24): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 08:11:14 compute-0 ovn_controller[153383]: 2025-11-29T08:11:14Z|00290|binding|INFO|Claiming lport 503caeb9-24dd-41d1-bcb9-da6866a4b3cd for this chassis.
Nov 29 08:11:14 compute-0 ovn_controller[153383]: 2025-11-29T08:11:14Z|00291|binding|INFO|503caeb9-24dd-41d1-bcb9-da6866a4b3cd: Claiming fa:16:3e:d2:8f:05 10.100.0.14
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.401 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:14 compute-0 systemd-machined[217781]: New machine qemu-30-instance-0000001e.
Nov 29 08:11:14 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
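systemd-machined registering qemu-30-instance-0000001e is the visible side of libvirt starting the guest from the XML logged at 08:11:13. Stripped of nova's driver plumbing (nova persists the definition and goes through its Guest wrapper), the minimal libvirt-python equivalent is a transient createXML:

    # Sketch: boot a guest from a domain XML string with libvirt-python.
    import libvirt

    with open('instance-0000001e.xml') as f:  # hypothetical copy of the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.createXML(xml, 0)  # machined names the scope qemu-<id>-<name>
        print(dom.name(), 'running:', dom.isActive() == 1)
    finally:
        conn.close()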
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.482 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:14 compute-0 ovn_controller[153383]: 2025-11-29T08:11:14Z|00292|binding|INFO|Setting lport 503caeb9-24dd-41d1-bcb9-da6866a4b3cd ovn-installed in OVS
Nov 29 08:11:14 compute-0 nova_compute[256729]: 2025-11-29 08:11:14.488 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:14 compute-0 systemd-udevd[303865]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:11:14 compute-0 NetworkManager[48962]: <info>  [1764403874.5047] device (tap503caeb9-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 08:11:14 compute-0 NetworkManager[48962]: <info>  [1764403874.5055] device (tap503caeb9-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 08:11:14 compute-0 podman[303853]: 2025-11-29 08:11:14.522763801 +0000 UTC m=+0.063492963 container create f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:11:14 compute-0 systemd[1]: Started libpod-conmon-f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10.scope.
Nov 29 08:11:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd3681fecef46b7e74a92fd6dc946a34837578070cd82e86126062f79819ad2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd3681fecef46b7e74a92fd6dc946a34837578070cd82e86126062f79819ad2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:14 compute-0 podman[303853]: 2025-11-29 08:11:14.503751937 +0000 UTC m=+0.044481109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd3681fecef46b7e74a92fd6dc946a34837578070cd82e86126062f79819ad2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd3681fecef46b7e74a92fd6dc946a34837578070cd82e86126062f79819ad2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd3681fecef46b7e74a92fd6dc946a34837578070cd82e86126062f79819ad2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:14 compute-0 podman[303853]: 2025-11-29 08:11:14.611685625 +0000 UTC m=+0.152414787 container init f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:11:14 compute-0 podman[303853]: 2025-11-29 08:11:14.621772544 +0000 UTC m=+0.162501726 container start f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_meitner, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 08:11:14 compute-0 podman[303853]: 2025-11-29 08:11:14.625607309 +0000 UTC m=+0.166336491 container attach f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 08:11:14 compute-0 ovn_controller[153383]: 2025-11-29T08:11:14Z|00293|binding|INFO|Setting lport 503caeb9-24dd-41d1-bcb9-da6866a4b3cd up in Southbound
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.678 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:8f:05 10.100.0.14'], port_security=['fa:16:3e:d2:8f:05 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '11d37006-0804-487e-93f1-217ea49e9a51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dda88d46-9162-4e7c-bb47-793ac4133966', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e6edb27-9f1c-444b-901c-a9a15234db1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=767afc55-24b1-431b-aeef-ddbbabf80029, chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=503caeb9-24dd-41d1-bcb9-da6866a4b3cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.680 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 503caeb9-24dd-41d1-bcb9-da6866a4b3cd in datapath dda88d46-9162-4e7c-bb47-793ac4133966 bound to our chassis
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.684 163655 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.699 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[151929e3-0e35-4810-a453-f31c072b6371]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.701 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdda88d46-91 in ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.705 266092 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdda88d46-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.705 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[df5f6e7d-89a7-4f02-a8d4-f46d36138c1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.707 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[21ded52c-702e-464d-9b0e-44f8039597b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.720 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[dee3e52c-6939-4798-b46a-3836f80efa4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.738 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[47e24124-1486-4d4c-aa0b-42b1ef2684c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.781 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac67139-5e01-4800-9a03-441170b2a341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 NetworkManager[48962]: <info>  [1764403874.7904] manager: (tapdda88d46-90): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 08:11:14 compute-0 systemd-udevd[303867]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.788 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c82a87-db98-4108-92a5-7939a4a550e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ceph-mon[75050]: pgmap v2214: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 409 B/s wr, 42 op/s
Nov 29 08:11:14 compute-0 ceph-mon[75050]: osdmap e451: 3 total, 3 up, 3 in
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.833 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[791df690-6de1-4bf2-a1ee-1c9789eb703e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.836 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[662df402-014f-42ee-9f5a-5d771c538cc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 NetworkManager[48962]: <info>  [1764403874.8583] device (tapdda88d46-90): carrier: link connected
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.863 266358 DEBUG oslo.privsep.daemon [-] privsep: reply[942fb8c7-dfc6-4cf2-8ac2-16c1fda417c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.882 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa348ab-4e98-4162-a186-cc2c4d936305]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdda88d46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6b:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618992, 'reachable_time': 35702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303944, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.901 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[bd17051c-95f6-40b2-b091-52c22675d30e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:6bec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618992, 'tstamp': 618992}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303945, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.920 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[8e636621-25c8-46cc-8134-323d4f3f973b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdda88d46-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6b:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618992, 'reachable_time': 35702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303946, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
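
The RTM_NEWLINK/RTM_NEWADDR payloads above are netlink messages that the oslo.privsep daemon relays back to the agent; the 'target' field shows they were gathered inside the ovnmeta- namespace. A minimal sketch of reproducing such a dump with pyroute2 (the library whose message format this is), assuming the namespace name from the log and sufficient privileges — illustrative only, not the agent's actual code path:

    # Sketch: dump links/addresses inside the ovnmeta- namespace, as the
    # privsep replies above do. Requires privileges to enter the netns.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966')
    for link in ns.get_links():                  # RTM_NEWLINK messages
        name = link.get_attr('IFLA_IFNAME')      # e.g. 'tapdda88d46-91'
        mac = link.get_attr('IFLA_ADDRESS')      # e.g. 'fa:16:3e:fc:6b:ec'
        print(name, mac, link['state'])
    for addr in ns.get_addr():                   # RTM_NEWADDR messages
        print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
    ns.close()
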
Nov 29 08:11:14 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:14.950 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[c22acf1d-8cba-4b8d-9123-3a6ad7f2af4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.015 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[b9beaa7c-5e78-4c26-b938-978a65fb8db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.017 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdda88d46-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.018 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.019 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdda88d46-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:15 compute-0 NetworkManager[48962]: <info>  [1764403875.0223] manager: (tapdda88d46-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.021 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:15 compute-0 kernel: tapdda88d46-90: entered promiscuous mode
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.024 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.026 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdda88d46-90, col_values=(('external_ids', {'iface-id': 'bf50d5e3-cc9a-491e-8a5a-4b199a4df39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
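
The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface record) move the tap port onto the integration bridge and stamp it with the Neutron port UUID so ovn-controller can (re)bind it — external_ids:iface-id is the key ovn-controller matches against the OVN logical switch port, which is why the "Releasing lport" message below names the same UUID. A rough command-line equivalent, assuming the names from the log (a sketch; the agent itself goes through the ovsdbapp IDL, not ovs-vsctl):

    # Sketch: roughly what the three logged transactions do.
    import subprocess

    PORT = 'tapdda88d46-90'
    IFACE_ID = 'bf50d5e3-cc9a-491e-8a5a-4b199a4df39f'

    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT], check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT], check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', PORT,
                    f'external_ids:iface-id={IFACE_ID}'], check=True)
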
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.027 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:15 compute-0 ovn_controller[153383]: 2025-11-29T08:11:15Z|00294|binding|INFO|Releasing lport bf50d5e3-cc9a-491e-8a5a-4b199a4df39f from this chassis (sb_readonly=0)
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.056 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.060 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.061 163655 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.063 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[af1278d8-7d85-427f-b4a9-259808957ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.064 163655 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: global
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     log         /dev/log local0 debug
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     log-tag     haproxy-metadata-proxy-dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     user        root
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     group       root
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     maxconn     1024
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     pidfile     /var/lib/neutron/external/pids/dda88d46-9162-4e7c-bb47-793ac4133966.pid.haproxy
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     daemon
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: defaults
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     log global
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     mode http
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     option httplog
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     option dontlognull
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     option http-server-close
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     option forwardfor
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     retries                 3
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     timeout http-request    30s
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     timeout connect         30s
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     timeout client          32s
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     timeout server          32s
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     timeout http-keep-alive 30s
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: listen listener
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     bind 169.254.169.254:80
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     server metadata /var/lib/neutron/metadata_proxy
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:     http-request add-header X-OVN-Network-ID dda88d46-9162-4e7c-bb47-793ac4133966
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 08:11:15 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:15.068 163655 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'env', 'PROCESS_TAG=haproxy-dda88d46-9162-4e7c-bb47-793ac4133966', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dda88d46-9162-4e7c-bb47-793ac4133966.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
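
The config dumped above binds the link-local metadata address inside the namespace, forwards to the agent's unix socket (the 'server metadata' target), tags each request with X-OVN-Network-ID, and — via 'option forwardfor' — adds X-Forwarded-For, which together let the agent resolve the requesting instance. A small sketch of what a proxied request looks like once it reaches that socket, assuming the paths and header names from the rendered config; the UnixHTTPConnection helper and the instance address are hypothetical, and the exact request path the agent serves may differ:

    # Sketch: an instance metadata request as seen by the agent's unix socket.
    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket (illustrative helper, not neutron code)."""
        def __init__(self, path):
            super().__init__('localhost')
            self.socket_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/latest/meta-data/', headers={
        'X-OVN-Network-ID': 'dda88d46-9162-4e7c-bb47-793ac4133966',
        'X-Forwarded-For': '10.0.0.5',   # hypothetical; haproxy adds this header
    })
    print(conn.getresponse().status)
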
Nov 29 08:11:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e451 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 895 B/s rd, 383 B/s wr, 2 op/s
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.325 256736 DEBUG nova.compute.manager [req-39e04332-b92c-4f8e-8bdc-7c64de29227b req-fc1de87c-8898-482a-8f51-1090c2ea3cff ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.325 256736 DEBUG oslo_concurrency.lockutils [req-39e04332-b92c-4f8e-8bdc-7c64de29227b req-fc1de87c-8898-482a-8f51-1090c2ea3cff ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.325 256736 DEBUG oslo_concurrency.lockutils [req-39e04332-b92c-4f8e-8bdc-7c64de29227b req-fc1de87c-8898-482a-8f51-1090c2ea3cff ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.325 256736 DEBUG oslo_concurrency.lockutils [req-39e04332-b92c-4f8e-8bdc-7c64de29227b req-fc1de87c-8898-482a-8f51-1090c2ea3cff ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:15 compute-0 nova_compute[256729]: 2025-11-29 08:11:15.326 256736 DEBUG nova.compute.manager [req-39e04332-b92c-4f8e-8bdc-7c64de29227b req-fc1de87c-8898-482a-8f51-1090c2ea3cff ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Processing event network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 08:11:15 compute-0 podman[303984]: 2025-11-29 08:11:15.452400776 +0000 UTC m=+0.067172834 container create a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:11:15 compute-0 podman[303984]: 2025-11-29 08:11:15.417153164 +0000 UTC m=+0.031925252 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894331066451781 of space, bias 1.0, pg target 0.8682993199355343 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:11:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
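
The pg_autoscaler figures above are internally consistent: each 'pg target' appears to be usage_fraction x bias x (target PGs per OSD x number of OSDs), which with this cluster's 3 OSDs ("3 total, 3 up, 3 in" below) and the default mon_target_pg_per_osd of 100 reproduces the logged values exactly. A worked check for the 'volumes' pool:

    # Sketch: verify the autoscaler arithmetic against the logged numbers.
    usage, bias, osds, target_per_osd = 0.002894331066451781, 1.0, 3, 100
    print(usage * bias * osds * target_per_osd)
    # -> 0.8682993199355343, matching "pg target 0.8682993199355343
    #    quantized to 32" in the log above.
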
Nov 29 08:11:15 compute-0 systemd[1]: Started libpod-conmon-a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc.scope.
Nov 29 08:11:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f7cd6a93255d6df0093836457f5c3f0761ad2e8c2e3f8d5d904ff203a38537/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:15 compute-0 podman[303984]: 2025-11-29 08:11:15.573746646 +0000 UTC m=+0.188518694 container init a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:11:15 compute-0 podman[303984]: 2025-11-29 08:11:15.579446863 +0000 UTC m=+0.194218921 container start a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 08:11:15 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[304008]: [NOTICE]   (304016) : New worker (304020) forked
Nov 29 08:11:15 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[304008]: [NOTICE]   (304016) : Loading success.
Nov 29 08:11:15 compute-0 pensive_meitner[303879]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:11:15 compute-0 pensive_meitner[303879]: --> relative data size: 1.0
Nov 29 08:11:15 compute-0 pensive_meitner[303879]: --> All data devices are unavailable
Nov 29 08:11:15 compute-0 systemd[1]: libpod-f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10.scope: Deactivated successfully.
Nov 29 08:11:15 compute-0 conmon[303879]: conmon f80be0687462cfb8db7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10.scope/container/memory.events
Nov 29 08:11:15 compute-0 podman[303853]: 2025-11-29 08:11:15.708138744 +0000 UTC m=+1.248867916 container died f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bd3681fecef46b7e74a92fd6dc946a34837578070cd82e86126062f79819ad2-merged.mount: Deactivated successfully.
Nov 29 08:11:15 compute-0 podman[303853]: 2025-11-29 08:11:15.761935569 +0000 UTC m=+1.302664721 container remove f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_meitner, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:15 compute-0 systemd[1]: libpod-conmon-f80be0687462cfb8db7e6d46f5bcdfb961d86cd2f9f1af195bb36d27d82d3c10.scope: Deactivated successfully.
Nov 29 08:11:15 compute-0 sudo[303692]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:15 compute-0 sudo[304068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:15 compute-0 podman[304046]: 2025-11-29 08:11:15.872468519 +0000 UTC m=+0.072476261 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd)
Nov 29 08:11:15 compute-0 sudo[304068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:15 compute-0 sudo[304068]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:15 compute-0 podman[304049]: 2025-11-29 08:11:15.896505823 +0000 UTC m=+0.089297896 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:15 compute-0 podman[304048]: 2025-11-29 08:11:15.913695167 +0000 UTC m=+0.102350946 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:11:15 compute-0 sudo[304133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:15 compute-0 sudo[304133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:15 compute-0 sudo[304133]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:16 compute-0 sudo[304158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:16 compute-0 sudo[304158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:16 compute-0 sudo[304158]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:16 compute-0 sudo[304183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:11:16 compute-0 sudo[304183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:16 compute-0 podman[304246]: 2025-11-29 08:11:16.567538031 +0000 UTC m=+0.068136541 container create ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:11:16 compute-0 podman[304246]: 2025-11-29 08:11:16.534924662 +0000 UTC m=+0.035523252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:16 compute-0 systemd[1]: Started libpod-conmon-ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787.scope.
Nov 29 08:11:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:16 compute-0 nova_compute[256729]: 2025-11-29 08:11:16.676 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:11:16 compute-0 podman[304246]: 2025-11-29 08:11:16.742155541 +0000 UTC m=+0.242754081 container init ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:11:16 compute-0 podman[304246]: 2025-11-29 08:11:16.758032938 +0000 UTC m=+0.258631448 container start ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 08:11:16 compute-0 podman[304246]: 2025-11-29 08:11:16.762620965 +0000 UTC m=+0.263219505 container attach ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:11:16 compute-0 vibrant_torvalds[304262]: 167 167
Nov 29 08:11:16 compute-0 systemd[1]: libpod-ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787.scope: Deactivated successfully.
Nov 29 08:11:16 compute-0 podman[304246]: 2025-11-29 08:11:16.766018689 +0000 UTC m=+0.266617199 container died ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 08:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ae773f149c613bb49853d347a60c20ab724db3b63d1b96f462e16afedf62093-merged.mount: Deactivated successfully.
Nov 29 08:11:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Nov 29 08:11:16 compute-0 ceph-mon[75050]: pgmap v2216: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 895 B/s rd, 383 B/s wr, 2 op/s
Nov 29 08:11:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Nov 29 08:11:16 compute-0 podman[304246]: 2025-11-29 08:11:16.910286411 +0000 UTC m=+0.410884941 container remove ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 08:11:16 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Nov 29 08:11:16 compute-0 systemd[1]: libpod-conmon-ebeae8ff7661af2e7cc6d831751ac5e39cb0635719e4eda86af6515004ad5787.scope: Deactivated successfully.
Nov 29 08:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:11:17 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 8573 writes, 39K keys, 8573 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
                                           Cumulative WAL: 8573 writes, 8573 syncs, 1.00 writes per sync, written: 0.05 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1875 writes, 8872 keys, 1875 commit groups, 1.0 writes per commit group, ingest: 11.00 MB, 0.02 MB/s
                                           Interval WAL: 1875 writes, 1875 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.3      5.03              0.20        23    0.218       0      0       0.0       0.0
                                             L6      1/0    9.01 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0     39.7     33.0      5.65              0.79        22    0.257    125K    12K       0.0       0.0
                                            Sum      1/0    9.01 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0     21.0     21.9     10.68              0.99        45    0.237    125K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2     97.7     97.0      0.82              0.31        14    0.059     50K   4097       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     39.7     33.0      5.65              0.79        22    0.257    125K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.3      5.02              0.20        22    0.228       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.046, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.06 MB/s write, 0.22 GB read, 0.06 MB/s read, 10.7 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdb5ecb1f0#2 capacity: 304.00 MB usage: 25.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.00019 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1733,24.35 MB,8.00944%) FilterBlock(46,364.05 KB,0.116946%) IndexBlock(46,668.92 KB,0.214883%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
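
A back-of-envelope reading of the dump above, using only the figures as logged: for 0.05 GB ingested, the store wrote roughly 0.05 GB of WAL plus 0.046 GB of flushes plus 0.23 GB of compaction output, i.e. about 6-7x write amplification overall — a rough editor's calculation, not a rocksdb-reported metric (the table's own W-Amp column counts compaction I/O differently):

    # Sketch: overall device writes per byte ingested, from the dump above.
    ingest_gb  = 0.05   # "Cumulative writes ... ingest: 0.05 GB"
    wal_gb     = 0.05   # "Cumulative WAL ... written: 0.05 GB"
    flush_gb   = 0.046  # "Flush(GB): cumulative 0.046"
    compact_gb = 0.23   # "Cumulative compaction: 0.23 GB write"
    print((wal_gb + flush_gb + compact_gb) / ingest_gb)   # ~6.5
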
Nov 29 08:11:17 compute-0 podman[304289]: 2025-11-29 08:11:17.146011626 +0000 UTC m=+0.050543056 container create 3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.175 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403877.1747792, 11d37006-0804-487e-93f1-217ea49e9a51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.176 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] VM Started (Lifecycle Event)
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.178 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 08:11:17 compute-0 systemd[1]: Started libpod-conmon-3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e.scope.
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.194 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.195 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.199 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.201 256736 INFO nova.virt.libvirt.driver [-] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Instance spawned successfully.
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.202 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 08:11:17 compute-0 podman[304289]: 2025-11-29 08:11:17.129049878 +0000 UTC m=+0.033581338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78b71eec76dc18626df8a0ec9ef5f3437a01265f0c758ec1dff36e52e20e9330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78b71eec76dc18626df8a0ec9ef5f3437a01265f0c758ec1dff36e52e20e9330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78b71eec76dc18626df8a0ec9ef5f3437a01265f0c758ec1dff36e52e20e9330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78b71eec76dc18626df8a0ec9ef5f3437a01265f0c758ec1dff36e52e20e9330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:17 compute-0 podman[304289]: 2025-11-29 08:11:17.246240992 +0000 UTC m=+0.150772432 container init 3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:11:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 24 KiB/s wr, 55 op/s
Nov 29 08:11:17 compute-0 podman[304289]: 2025-11-29 08:11:17.258532661 +0000 UTC m=+0.163064111 container start 3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 08:11:17 compute-0 podman[304289]: 2025-11-29 08:11:17.262954114 +0000 UTC m=+0.167485564 container attach 3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.360 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.360 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.361 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.361 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.361 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.362 256736 DEBUG nova.virt.libvirt.driver [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.365 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.365 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403877.1749067, 11d37006-0804-487e-93f1-217ea49e9a51 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.365 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] VM Paused (Lifecycle Event)
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.409 256736 DEBUG nova.compute.manager [req-bbb98b92-b43c-4efb-9eb6-bebabcfef408 req-e20e9c1a-fac0-47d7-9f8b-caa24350fd21 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.409 256736 DEBUG oslo_concurrency.lockutils [req-bbb98b92-b43c-4efb-9eb6-bebabcfef408 req-e20e9c1a-fac0-47d7-9f8b-caa24350fd21 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.409 256736 DEBUG oslo_concurrency.lockutils [req-bbb98b92-b43c-4efb-9eb6-bebabcfef408 req-e20e9c1a-fac0-47d7-9f8b-caa24350fd21 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.410 256736 DEBUG oslo_concurrency.lockutils [req-bbb98b92-b43c-4efb-9eb6-bebabcfef408 req-e20e9c1a-fac0-47d7-9f8b-caa24350fd21 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.410 256736 DEBUG nova.compute.manager [req-bbb98b92-b43c-4efb-9eb6-bebabcfef408 req-e20e9c1a-fac0-47d7-9f8b-caa24350fd21 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] No waiting events found dispatching network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.410 256736 WARNING nova.compute.manager [req-bbb98b92-b43c-4efb-9eb6-bebabcfef408 req-e20e9c1a-fac0-47d7-9f8b-caa24350fd21 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received unexpected event network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd for instance with vm_state building and task_state spawning.
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.411 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.414 256736 DEBUG nova.virt.driver [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] Emitting event <LifecycleEvent: 1764403877.1818066, 11d37006-0804-487e-93f1-217ea49e9a51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.414 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] VM Resumed (Lifecycle Event)
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.438 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.441 256736 DEBUG nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.445 256736 INFO nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Took 9.86 seconds to spawn the instance on the hypervisor.
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.445 256736 DEBUG nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.471 256736 INFO nova.compute.manager [None req-c0698741-c496-4c65-b0e7-7fb22176bb90 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.513 256736 INFO nova.compute.manager [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Took 12.36 seconds to build instance.
Nov 29 08:11:17 compute-0 nova_compute[256729]: 2025-11-29 08:11:17.535 256736 DEBUG oslo_concurrency.lockutils [None req-cc13e146-5230-4291-a689-81dadd297552 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
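[editor's note] The three durations Nova reports above (9.86 s to spawn on the hypervisor, 12.36 s to build, 12.648 s holding the build lock) can be recovered from a capture like this one with a few regexes. A minimal sketch, not part of the log; the patterns match the exact phrasings in the lines above, and the file name in the usage comment is hypothetical:

import re

SPAWN = re.compile(r'Took ([\d.]+) seconds to spawn')
BUILD = re.compile(r'Took ([\d.]+) seconds to build')
LOCK_HELD = re.compile(r'held ([\d.]+)s')

def build_timings(lines, instance_id):
    """Collect spawn/build/lock-hold durations for one instance from a
    plain-text journal capture (hypothetical helper)."""
    timings = {}
    for line in lines:
        if instance_id not in line:
            continue
        m = SPAWN.search(line)
        if m:
            timings['spawn_s'] = float(m.group(1))
        m = BUILD.search(line)
        if m:
            timings['build_s'] = float(m.group(1))
        # Match only the build_and_run_instance lock, not the short-lived
        # "<uuid>-events" locks also released above with "held 0.000s".
        if '_locked_do_build_and_run_instance' in line:
            m = LOCK_HELD.search(line)
            if m:
                timings['lock_held_s'] = float(m.group(1))
    return timings

# e.g. build_timings(open('compute-0.log'), '11d37006-0804-487e-93f1-217ea49e9a51')
# -> {'spawn_s': 9.86, 'build_s': 12.36, 'lock_held_s': 12.648}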
Nov 29 08:11:17 compute-0 ceph-mon[75050]: osdmap e452: 3 total, 3 up, 3 in
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]: {
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:     "0": [
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:         {
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "devices": [
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "/dev/loop3"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             ],
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_name": "ceph_lv0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_size": "21470642176",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "name": "ceph_lv0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "tags": {
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cluster_name": "ceph",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.crush_device_class": "",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.encrypted": "0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osd_id": "0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.type": "block",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.vdo": "0"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             },
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "type": "block",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "vg_name": "ceph_vg0"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:         }
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:     ],
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:     "1": [
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:         {
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "devices": [
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "/dev/loop4"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             ],
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_name": "ceph_lv1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_size": "21470642176",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "name": "ceph_lv1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "tags": {
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cluster_name": "ceph",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.crush_device_class": "",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.encrypted": "0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osd_id": "1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.type": "block",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.vdo": "0"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             },
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "type": "block",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "vg_name": "ceph_vg1"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:         }
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:     ],
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:     "2": [
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:         {
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "devices": [
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "/dev/loop5"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             ],
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_name": "ceph_lv2",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_size": "21470642176",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "name": "ceph_lv2",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "tags": {
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.cluster_name": "ceph",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.crush_device_class": "",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.encrypted": "0",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osd_id": "2",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.type": "block",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:                 "ceph.vdo": "0"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             },
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "type": "block",
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:             "vg_name": "ceph_vg2"
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:         }
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]:     ]
Nov 29 08:11:18 compute-0 friendly_dhawan[304306]: }
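[editor's note] The JSON emitted by the friendly_dhawan container above is `ceph-volume lvm list --format json` output: a map of OSD id to its logical volumes, with the LVM tags carrying the cluster and OSD identity. A sketch (hypothetical helper, using only keys visible in that payload) that reduces it to an osd_id -> block device map:

import json

def osd_block_devices(lvm_list_json):
    """Map OSD id -> block LV details from a `ceph-volume lvm list
    --format json` payload like the one above."""
    devices = {}
    for osd_id, lvs in json.loads(lvm_list_json).items():
        for lv in lvs:
            if lv.get('type') == 'block':  # tags above say ceph.type=block
                devices[osd_id] = {
                    'lv_path': lv['lv_path'],    # e.g. /dev/ceph_vg0/ceph_lv0
                    'devices': lv['devices'],    # e.g. ['/dev/loop3']
                    'osd_fsid': lv['tags']['ceph.osd_fsid'],
                }
    return devices

# Against the payload above this yields, e.g.:
# {'0': {'lv_path': '/dev/ceph_vg0/ceph_lv0', 'devices': ['/dev/loop3'],
#        'osd_fsid': '8cd0a453-4c8d-429b-b547-2404357db43c'}, ...}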
Nov 29 08:11:18 compute-0 nova_compute[256729]: 2025-11-29 08:11:18.062 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:18 compute-0 systemd[1]: libpod-3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e.scope: Deactivated successfully.
Nov 29 08:11:18 compute-0 podman[304315]: 2025-11-29 08:11:18.134187327 +0000 UTC m=+0.027793368 container died 3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 08:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-78b71eec76dc18626df8a0ec9ef5f3437a01265f0c758ec1dff36e52e20e9330-merged.mount: Deactivated successfully.
Nov 29 08:11:18 compute-0 podman[304315]: 2025-11-29 08:11:18.181084541 +0000 UTC m=+0.074690552 container remove 3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:11:18 compute-0 systemd[1]: libpod-conmon-3321d9050d8b6a12c93578bced0c43c1fbc03fa75ea7b9fc1eb32630b7ebc35e.scope: Deactivated successfully.
Nov 29 08:11:18 compute-0 sudo[304183]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:18 compute-0 sudo[304330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:18 compute-0 sudo[304330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:18 compute-0 sudo[304330]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:18 compute-0 nova_compute[256729]: 2025-11-29 08:11:18.316 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:18 compute-0 sudo[304355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:11:18 compute-0 sudo[304355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:18 compute-0 sudo[304355]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:18 compute-0 sudo[304380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:18 compute-0 sudo[304380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:18 compute-0 sudo[304380]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:18 compute-0 sudo[304405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:11:18 compute-0 sudo[304405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:18 compute-0 podman[304470]: 2025-11-29 08:11:18.796168036 +0000 UTC m=+0.033249739 container create 7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 08:11:18 compute-0 systemd[1]: Started libpod-conmon-7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6.scope.
Nov 29 08:11:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:18 compute-0 podman[304470]: 2025-11-29 08:11:18.878258311 +0000 UTC m=+0.115340104 container init 7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_colden, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:11:18 compute-0 podman[304470]: 2025-11-29 08:11:18.781504521 +0000 UTC m=+0.018586254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:18 compute-0 podman[304470]: 2025-11-29 08:11:18.888320509 +0000 UTC m=+0.125402262 container start 7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:11:18 compute-0 podman[304470]: 2025-11-29 08:11:18.892133394 +0000 UTC m=+0.129215187 container attach 7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:18 compute-0 priceless_colden[304486]: 167 167
Nov 29 08:11:18 compute-0 systemd[1]: libpod-7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6.scope: Deactivated successfully.
Nov 29 08:11:18 compute-0 conmon[304486]: conmon 7e6e14dc5894d245f570 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6.scope/container/memory.events
Nov 29 08:11:18 compute-0 podman[304470]: 2025-11-29 08:11:18.897155893 +0000 UTC m=+0.134237666 container died 7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:11:18 compute-0 ceph-mon[75050]: pgmap v2218: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 24 KiB/s wr, 55 op/s
Nov 29 08:11:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-60b9bd2f64f816bf5439c5df15353f5fae5f04360a2188e9d93a7cb24b5fbae3-merged.mount: Deactivated successfully.
Nov 29 08:11:18 compute-0 podman[304470]: 2025-11-29 08:11:18.953481747 +0000 UTC m=+0.190563500 container remove 7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_colden, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:11:18 compute-0 systemd[1]: libpod-conmon-7e6e14dc5894d245f5702ed50faf800b4034fd1b59d2af3a9d6991d5cbcf93e6.scope: Deactivated successfully.
Nov 29 08:11:19 compute-0 podman[304509]: 2025-11-29 08:11:19.208599278 +0000 UTC m=+0.087656590 container create 1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:11:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 21 KiB/s wr, 93 op/s
Nov 29 08:11:19 compute-0 podman[304509]: 2025-11-29 08:11:19.169508589 +0000 UTC m=+0.048565891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:11:19 compute-0 systemd[1]: Started libpod-conmon-1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4.scope.
Nov 29 08:11:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dfc90c2edcd3186a1183dc3acab193d8060d5532185afe9f400025a2573465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dfc90c2edcd3186a1183dc3acab193d8060d5532185afe9f400025a2573465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dfc90c2edcd3186a1183dc3acab193d8060d5532185afe9f400025a2573465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dfc90c2edcd3186a1183dc3acab193d8060d5532185afe9f400025a2573465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:11:19 compute-0 podman[304509]: 2025-11-29 08:11:19.32861334 +0000 UTC m=+0.207670702 container init 1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:11:19 compute-0 podman[304509]: 2025-11-29 08:11:19.337789523 +0000 UTC m=+0.216846835 container start 1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:11:19 compute-0 podman[304509]: 2025-11-29 08:11:19.341844845 +0000 UTC m=+0.220902167 container attach 1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:11:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Nov 29 08:11:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Nov 29 08:11:19 compute-0 ceph-mon[75050]: pgmap v2219: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 21 KiB/s wr, 93 op/s
Nov 29 08:11:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Nov 29 08:11:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e453 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]: {
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "osd_id": 2,
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "type": "bluestore"
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:     },
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "osd_id": 1,
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "type": "bluestore"
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:     },
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "osd_id": 0,
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:         "type": "bluestore"
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]:     }
Nov 29 08:11:20 compute-0 compassionate_hamilton[304526]: }
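[editor's note] This second payload is the output of the `ceph-volume ... raw list --format json` invocation logged in the sudo line above: keyed by osd_uuid rather than osd_id, with each entry naming the device-mapper path of the same LV (`/dev/mapper/ceph_vg0-ceph_lv0`) and its bluestore type. A sketch cross-checking it against the LVM listing; `osd_block_devices()` is the hypothetical helper from the earlier note:

import json

def crosscheck(raw_list_json, lvm_map):
    """Verify the raw listing (keyed by osd_uuid, as above) agrees with
    the lvm listing on osd_id <-> osd_fsid. lvm_map is the output of the
    hypothetical osd_block_devices() sketch above."""
    mismatches = []
    for osd_uuid, entry in json.loads(raw_list_json).items():
        osd_id = str(entry['osd_id'])  # raw list reports osd_id as an int
        expected = lvm_map.get(osd_id, {}).get('osd_fsid')
        if expected != osd_uuid:
            mismatches.append((osd_id, osd_uuid, expected))
    return mismatches  # empty == consistent, as the two payloads above are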
Nov 29 08:11:20 compute-0 systemd[1]: libpod-1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4.scope: Deactivated successfully.
Nov 29 08:11:20 compute-0 systemd[1]: libpod-1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4.scope: Consumed 1.083s CPU time.
Nov 29 08:11:20 compute-0 podman[304559]: 2025-11-29 08:11:20.457677769 +0000 UTC m=+0.024224350 container died 1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 08:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-98dfc90c2edcd3186a1183dc3acab193d8060d5532185afe9f400025a2573465-merged.mount: Deactivated successfully.
Nov 29 08:11:20 compute-0 podman[304559]: 2025-11-29 08:11:20.509222511 +0000 UTC m=+0.075769082 container remove 1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 08:11:20 compute-0 systemd[1]: libpod-conmon-1c07307de293789314f6b267593192d00d14075cd4726f8ee2857a48fe8accc4.scope: Deactivated successfully.
Nov 29 08:11:20 compute-0 sudo[304405]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:11:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:11:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:11:20 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:11:20 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev de0a4e4f-b411-4aa1-ad44-728c99b1eac7 does not exist
Nov 29 08:11:20 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 138b660b-2a42-4107-9812-6515c58477ac does not exist
Nov 29 08:11:20 compute-0 sudo[304574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:11:20 compute-0 sudo[304574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:20 compute-0 sudo[304574]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:20 compute-0 sudo[304599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:11:20 compute-0 sudo[304599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:11:20 compute-0 sudo[304599]: pam_unix(sudo:session): session closed for user root
Nov 29 08:11:20 compute-0 ceph-mon[75050]: osdmap e453: 3 total, 3 up, 3 in
Nov 29 08:11:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:11:20 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:11:21 compute-0 nova_compute[256729]: 2025-11-29 08:11:21.178 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:21 compute-0 NetworkManager[48962]: <info>  [1764403881.1862] manager: (patch-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 08:11:21 compute-0 NetworkManager[48962]: <info>  [1764403881.1880] manager: (patch-br-int-to-provnet-53893d16-43ff-4c9d-aa40-6eb91dbe033a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 08:11:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 24 KiB/s wr, 105 op/s
Nov 29 08:11:21 compute-0 nova_compute[256729]: 2025-11-29 08:11:21.276 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:21 compute-0 ovn_controller[153383]: 2025-11-29T08:11:21Z|00295|binding|INFO|Releasing lport bf50d5e3-cc9a-491e-8a5a-4b199a4df39f from this chassis (sb_readonly=0)
Nov 29 08:11:21 compute-0 nova_compute[256729]: 2025-11-29 08:11:21.287 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:21 compute-0 ceph-mon[75050]: pgmap v2221: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 24 KiB/s wr, 105 op/s
Nov 29 08:11:22 compute-0 nova_compute[256729]: 2025-11-29 08:11:22.047 256736 DEBUG nova.compute.manager [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-changed-503caeb9-24dd-41d1-bcb9-da6866a4b3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:22 compute-0 nova_compute[256729]: 2025-11-29 08:11:22.048 256736 DEBUG nova.compute.manager [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Refreshing instance network info cache due to event network-changed-503caeb9-24dd-41d1-bcb9-da6866a4b3cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 08:11:22 compute-0 nova_compute[256729]: 2025-11-29 08:11:22.049 256736 DEBUG oslo_concurrency.lockutils [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 08:11:22 compute-0 nova_compute[256729]: 2025-11-29 08:11:22.049 256736 DEBUG oslo_concurrency.lockutils [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquired lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 08:11:22 compute-0 nova_compute[256729]: 2025-11-29 08:11:22.049 256736 DEBUG nova.network.neutron [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Refreshing network info cache for port 503caeb9-24dd-41d1-bcb9-da6866a4b3cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 08:11:23 compute-0 nova_compute[256729]: 2025-11-29 08:11:23.063 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 24 KiB/s wr, 234 op/s
Nov 29 08:11:23 compute-0 nova_compute[256729]: 2025-11-29 08:11:23.318 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:24 compute-0 nova_compute[256729]: 2025-11-29 08:11:24.060 256736 DEBUG nova.network.neutron [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Updated VIF entry in instance network info cache for port 503caeb9-24dd-41d1-bcb9-da6866a4b3cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 08:11:24 compute-0 nova_compute[256729]: 2025-11-29 08:11:24.061 256736 DEBUG nova.network.neutron [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Updating instance_info_cache with network_info: [{"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:24 compute-0 nova_compute[256729]: 2025-11-29 08:11:24.125 256736 DEBUG oslo_concurrency.lockutils [req-64224b56-d70e-4b84-a63a-e24054a1b408 req-6ce90a5c-57d6-4176-9062-60e769c7727e ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Releasing lock "refresh_cache-11d37006-0804-487e-93f1-217ea49e9a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
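[editor's note] The network_info structure Nova logs above is a list of VIF dicts with the addressing nested under network -> subnets -> ips -> floating_ips. A sketch (hypothetical helper, keys taken from the logged structure) that pulls the addresses for one port out of it:

def port_addresses(network_info, port_id):
    """Extract MAC, fixed IPs, and floating IPs for one port from a Nova
    network_info list like the one logged above."""
    for vif in network_info:
        if vif['id'] != port_id:
            continue
        fixed, floating = [], []
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                fixed.append(ip['address'])
                floating += [f['address'] for f in ip.get('floating_ips', [])]
        return {'mac': vif['address'], 'fixed': fixed, 'floating': floating}
    return None

# For port 503caeb9-24dd-41d1-bcb9-da6866a4b3cd in the entry above:
# {'mac': 'fa:16:3e:d2:8f:05', 'fixed': ['10.100.0.14'],
#  'floating': ['192.168.122.246']}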
Nov 29 08:11:24 compute-0 ceph-mon[75050]: pgmap v2222: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 24 KiB/s wr, 234 op/s
Nov 29 08:11:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3807922886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:24 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3807922886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e453 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 KiB/s wr, 194 op/s
Nov 29 08:11:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3807922886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:25 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3807922886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Nov 29 08:11:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Nov 29 08:11:26 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Nov 29 08:11:26 compute-0 ceph-mon[75050]: pgmap v2223: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.3 KiB/s wr, 194 op/s
Nov 29 08:11:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165865637' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:26 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:26 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3165865637' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 245 op/s
Nov 29 08:11:27 compute-0 ceph-mon[75050]: osdmap e454: 3 total, 3 up, 3 in
Nov 29 08:11:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3165865637' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:27 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3165865637' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:28 compute-0 nova_compute[256729]: 2025-11-29 08:11:28.090 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:28 compute-0 nova_compute[256729]: 2025-11-29 08:11:28.320 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Nov 29 08:11:28 compute-0 ceph-mon[75050]: pgmap v2225: 305 pgs: 305 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 245 op/s
Nov 29 08:11:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Nov 29 08:11:28 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Nov 29 08:11:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/698975829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/698975829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.6 KiB/s wr, 269 op/s
Nov 29 08:11:29 compute-0 ceph-mon[75050]: osdmap e455: 3 total, 3 up, 3 in
Nov 29 08:11:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/698975829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:29 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/698975829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Nov 29 08:11:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Nov 29 08:11:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Nov 29 08:11:30 compute-0 ceph-mon[75050]: pgmap v2227: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.6 KiB/s wr, 269 op/s
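[editor's note] The recurring pgmap lines follow a fixed shape: version, total PG count, a comma-separated list of per-state counts (here 2 active+clean+snaptrim, 303 active+clean), then data/used/avail figures. A sketch of a parser for that shape (hypothetical helper; pattern written against the lines in this capture):

import re

PGMAP = re.compile(
    r'pgmap v(?P<version>\d+): (?P<total>\d+) pgs: (?P<states>[^;]+); '
    r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
    r'(?P<avail>\S+ \S+) / (?P<size>\S+ \S+) avail')

def parse_pgmap(line):
    """Split a pgmap status line like the ones above into its fields,
    including per-state PG counts."""
    m = PGMAP.search(line)
    if not m:
        return None
    out = m.groupdict()
    out['states'] = {
        state: int(n)
        for n, state in (part.split() for part in out['states'].split(', '))
    }
    return out

# parse_pgmap('... pgmap v2227: 305 pgs: 2 active+clean+snaptrim, '
#             '303 active+clean; 271 MiB data, 656 MiB used, '
#             '59 GiB / 60 GiB avail; ...')['states']
# -> {'active+clean+snaptrim': 2, 'active+clean': 303}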
Nov 29 08:11:30 compute-0 ceph-mon[75050]: osdmap e456: 3 total, 3 up, 3 in
Nov 29 08:11:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1078180471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1078180471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 2.3 KiB/s wr, 149 op/s
Nov 29 08:11:31 compute-0 ovn_controller[153383]: 2025-11-29T08:11:31Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.14
Nov 29 08:11:31 compute-0 ovn_controller[153383]: 2025-11-29T08:11:31Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d2:8f:05 10.100.0.14
Nov 29 08:11:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1078180471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1078180471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:32 compute-0 ceph-mon[75050]: pgmap v2229: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 271 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 2.3 KiB/s wr, 149 op/s
Nov 29 08:11:33 compute-0 nova_compute[256729]: 2025-11-29 08:11:33.093 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 279 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 MiB/s wr, 262 op/s
Nov 29 08:11:33 compute-0 nova_compute[256729]: 2025-11-29 08:11:33.322 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Nov 29 08:11:33 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Nov 29 08:11:33 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Nov 29 08:11:34 compute-0 ceph-mon[75050]: pgmap v2230: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 279 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 MiB/s wr, 262 op/s
Nov 29 08:11:34 compute-0 ceph-mon[75050]: osdmap e457: 3 total, 3 up, 3 in
Nov 29 08:11:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e457 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 283 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Nov 29 08:11:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:11:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:11:36 compute-0 ovn_controller[153383]: 2025-11-29T08:11:36Z|00076|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.14
Nov 29 08:11:36 compute-0 ovn_controller[153383]: 2025-11-29T08:11:36Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d2:8f:05 10.100.0.14
Nov 29 08:11:36 compute-0 ovn_controller[153383]: 2025-11-29T08:11:36Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:8f:05 10.100.0.14
Nov 29 08:11:36 compute-0 ovn_controller[153383]: 2025-11-29T08:11:36Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:8f:05 10.100.0.14
Nov 29 08:11:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Nov 29 08:11:36 compute-0 ceph-mon[75050]: pgmap v2232: 305 pgs: 305 active+clean; 283 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Nov 29 08:11:36 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Nov 29 08:11:36 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Nov 29 08:11:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 283 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.7 MiB/s wr, 207 op/s
Nov 29 08:11:37 compute-0 ceph-mon[75050]: osdmap e458: 3 total, 3 up, 3 in
Nov 29 08:11:38 compute-0 nova_compute[256729]: 2025-11-29 08:11:38.096 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:38 compute-0 nova_compute[256729]: 2025-11-29 08:11:38.324 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:38 compute-0 ceph-mon[75050]: pgmap v2234: 305 pgs: 305 active+clean; 283 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.7 MiB/s wr, 207 op/s
Nov 29 08:11:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 187 op/s
Nov 29 08:11:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Nov 29 08:11:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Nov 29 08:11:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Nov 29 08:11:40 compute-0 ceph-mon[75050]: pgmap v2235: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 187 op/s
Nov 29 08:11:40 compute-0 ceph-mon[75050]: osdmap e459: 3 total, 3 up, 3 in
Nov 29 08:11:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 81 op/s
Nov 29 08:11:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Nov 29 08:11:42 compute-0 ceph-mon[75050]: pgmap v2237: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 81 op/s
Nov 29 08:11:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Nov 29 08:11:42 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Nov 29 08:11:43 compute-0 nova_compute[256729]: 2025-11-29 08:11:43.099 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/290126989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/290126989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 649 KiB/s wr, 145 op/s
Nov 29 08:11:43 compute-0 nova_compute[256729]: 2025-11-29 08:11:43.326 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:43 compute-0 ceph-mon[75050]: osdmap e460: 3 total, 3 up, 3 in
Nov 29 08:11:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/290126989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:43 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/290126989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Nov 29 08:11:44 compute-0 ceph-mon[75050]: pgmap v2239: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 649 KiB/s wr, 145 op/s
Nov 29 08:11:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Nov 29 08:11:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Nov 29 08:11:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/426596952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/426596952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 9.5 KiB/s wr, 107 op/s
Nov 29 08:11:45 compute-0 ceph-mon[75050]: osdmap e461: 3 total, 3 up, 3 in
Nov 29 08:11:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/426596952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/426596952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:46 compute-0 podman[304627]: 2025-11-29 08:11:46.741516797 +0000 UTC m=+0.083629958 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:11:46 compute-0 podman[304626]: 2025-11-29 08:11:46.74451162 +0000 UTC m=+0.087736272 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:11:46 compute-0 ceph-mon[75050]: pgmap v2241: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 9.5 KiB/s wr, 107 op/s
Nov 29 08:11:46 compute-0 podman[304625]: 2025-11-29 08:11:46.766025814 +0000 UTC m=+0.121355990 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:11:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630135185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630135185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 157 KiB/s rd, 11 KiB/s wr, 201 op/s
Nov 29 08:11:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3630135185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3630135185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:48 compute-0 nova_compute[256729]: 2025-11-29 08:11:48.101 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:48 compute-0 nova_compute[256729]: 2025-11-29 08:11:48.327 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:48 compute-0 ceph-mon[75050]: pgmap v2242: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 157 KiB/s rd, 11 KiB/s wr, 201 op/s
Nov 29 08:11:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 13 KiB/s wr, 182 op/s
Nov 29 08:11:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Nov 29 08:11:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Nov 29 08:11:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Nov 29 08:11:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Nov 29 08:11:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Nov 29 08:11:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Nov 29 08:11:51 compute-0 ceph-mon[75050]: pgmap v2243: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 13 KiB/s wr, 182 op/s
Nov 29 08:11:51 compute-0 ceph-mon[75050]: osdmap e462: 3 total, 3 up, 3 in
Nov 29 08:11:51 compute-0 ceph-mon[75050]: osdmap e463: 3 total, 3 up, 3 in
Nov 29 08:11:51 compute-0 ovn_controller[153383]: 2025-11-29T08:11:51Z|00296|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 08:11:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 7.2 KiB/s wr, 124 op/s
Nov 29 08:11:52 compute-0 ceph-mon[75050]: pgmap v2246: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 7.2 KiB/s wr, 124 op/s
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.103 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 7.5 KiB/s wr, 128 op/s
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.328 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.825 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.825 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.826 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.826 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.826 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.827 256736 INFO nova.compute.manager [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Terminating instance
Nov 29 08:11:53 compute-0 nova_compute[256729]: 2025-11-29 08:11:53.829 256736 DEBUG nova.compute.manager [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 08:11:54 compute-0 kernel: tap503caeb9-24 (unregistering): left promiscuous mode
Nov 29 08:11:54 compute-0 NetworkManager[48962]: <info>  [1764403914.2551] device (tap503caeb9-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 08:11:54 compute-0 ovn_controller[153383]: 2025-11-29T08:11:54Z|00297|binding|INFO|Releasing lport 503caeb9-24dd-41d1-bcb9-da6866a4b3cd from this chassis (sb_readonly=0)
Nov 29 08:11:54 compute-0 ovn_controller[153383]: 2025-11-29T08:11:54Z|00298|binding|INFO|Setting lport 503caeb9-24dd-41d1-bcb9-da6866a4b3cd down in Southbound
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.267 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 ovn_controller[153383]: 2025-11-29T08:11:54Z|00299|binding|INFO|Removing iface tap503caeb9-24 ovn-installed in OVS
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.271 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.279 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:8f:05 10.100.0.14'], port_security=['fa:16:3e:d2:8f:05 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '11d37006-0804-487e-93f1-217ea49e9a51', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dda88d46-9162-4e7c-bb47-793ac4133966', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062fa36b3fb745529eb64d4b5bb52af6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e6edb27-9f1c-444b-901c-a9a15234db1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=767afc55-24b1-431b-aeef-ddbbabf80029, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>], logical_port=503caeb9-24dd-41d1-bcb9-da6866a4b3cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff72e6d7be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.282 163655 INFO neutron.agent.ovn.metadata.agent [-] Port 503caeb9-24dd-41d1-bcb9-da6866a4b3cd in datapath dda88d46-9162-4e7c-bb47-793ac4133966 unbound from our chassis
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.284 163655 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dda88d46-9162-4e7c-bb47-793ac4133966, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.286 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[ce983a02-8a06-47b3-8d85-698df1ec03f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.287 163655 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 namespace which is not needed anymore
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.310 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 29 08:11:54 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 17.390s CPU time.
Nov 29 08:11:54 compute-0 systemd-machined[217781]: Machine qemu-30-instance-0000001e terminated.
Nov 29 08:11:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Nov 29 08:11:54 compute-0 ceph-mon[75050]: pgmap v2247: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 7.5 KiB/s wr, 128 op/s
Nov 29 08:11:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Nov 29 08:11:54 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Nov 29 08:11:54 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[304008]: [NOTICE]   (304016) : haproxy version is 2.8.14-c23fe91
Nov 29 08:11:54 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[304008]: [NOTICE]   (304016) : path to executable is /usr/sbin/haproxy
Nov 29 08:11:54 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[304008]: [WARNING]  (304016) : Exiting Master process...
Nov 29 08:11:54 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[304008]: [ALERT]    (304016) : Current worker (304020) exited with code 143 (Terminated)
Nov 29 08:11:54 compute-0 neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966[304008]: [WARNING]  (304016) : All workers exited. Exiting... (0)
Nov 29 08:11:54 compute-0 systemd[1]: libpod-a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc.scope: Deactivated successfully.
Nov 29 08:11:54 compute-0 podman[304712]: 2025-11-29 08:11:54.47246381 +0000 UTC m=+0.078941699 container died a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.476 256736 INFO nova.virt.libvirt.driver [-] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Instance destroyed successfully.
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.477 256736 DEBUG nova.objects.instance [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lazy-loading 'resources' on Instance uuid 11d37006-0804-487e-93f1-217ea49e9a51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.500 256736 DEBUG nova.virt.libvirt.vif [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1949949001',display_name='tempest-TestEncryptedCinderVolumes-server-1949949001',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1949949001',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxX+PgMAZBuezORyRZTTDnmEkagoQZ/wV6Wk3lwyGDgLxEz+dGqkv0uj7q6iE8ZUn85LQMW2zUhk36PiQ5C6rOrwp08h1M8Rqk3HOI0Jn+9lui32YElh0SXij5turDSPw==',key_name='tempest-TestEncryptedCinderVolumes-1704602409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:11:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='062fa36b3fb745529eb64d4b5bb52af6',ramdisk_id='',reservation_id='r-jq90o5e9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-541864957',owner_user_name='tempest-TestEncryptedCinderVolumes-541864957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:17Z,user_data=None,user_id='981b7946a749412f90d3d8148d99486a',uuid=11d37006-0804-487e-93f1-217ea49e9a51,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.501 256736 DEBUG nova.network.os_vif_util [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converting VIF {"id": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "address": "fa:16:3e:d2:8f:05", "network": {"id": "dda88d46-9162-4e7c-bb47-793ac4133966", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2039953618-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "062fa36b3fb745529eb64d4b5bb52af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap503caeb9-24", "ovs_interfaceid": "503caeb9-24dd-41d1-bcb9-da6866a4b3cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.502 256736 DEBUG nova.network.os_vif_util [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:8f:05,bridge_name='br-int',has_traffic_filtering=True,id=503caeb9-24dd-41d1-bcb9-da6866a4b3cd,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap503caeb9-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.502 256736 DEBUG os_vif [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:8f:05,bridge_name='br-int',has_traffic_filtering=True,id=503caeb9-24dd-41d1-bcb9-da6866a4b3cd,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap503caeb9-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.506 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.506 256736 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap503caeb9-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc-userdata-shm.mount: Deactivated successfully.
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.507 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.510 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 08:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9f7cd6a93255d6df0093836457f5c3f0761ad2e8c2e3f8d5d904ff203a38537-merged.mount: Deactivated successfully.
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.514 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.517 256736 INFO os_vif [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:8f:05,bridge_name='br-int',has_traffic_filtering=True,id=503caeb9-24dd-41d1-bcb9-da6866a4b3cd,network=Network(dda88d46-9162-4e7c-bb47-793ac4133966),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap503caeb9-24')
Nov 29 08:11:54 compute-0 podman[304712]: 2025-11-29 08:11:54.523305243 +0000 UTC m=+0.129783142 container cleanup a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 08:11:54 compute-0 systemd[1]: libpod-conmon-a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc.scope: Deactivated successfully.
Nov 29 08:11:54 compute-0 podman[304761]: 2025-11-29 08:11:54.609175673 +0000 UTC m=+0.050085273 container remove a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.615 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4d140d-821f-44c0-b027-50afea293895]: (4, ('Sat Nov 29 08:11:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 (a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc)\na39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc\nSat Nov 29 08:11:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 (a39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc)\na39f5667c5d102a1fad0426d3bcdeed3217b33f886d8fcab884c1d181e53f8fc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.618 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[5acfa34c-4122-480a-afb0-e0b1d7591c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.619 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdda88d46-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.622 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 kernel: tapdda88d46-90: left promiscuous mode
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.627 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[dc80915d-2d73-4366-80e0-534eeaa09eaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.646 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.647 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[d82b9ccd-00bc-458e-947b-c3cb3c7b6ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.648 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[23eb87d8-dcdd-44d8-8534-ed0dc136494a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.667 266092 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5babb7-58cd-4b0f-b311-1336791aa8eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618983, 'reachable_time': 35503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304787, 'error': None, 'target': 'ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.671 164178 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dda88d46-9162-4e7c-bb47-793ac4133966 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 08:11:54 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:54.672 164178 DEBUG oslo.privsep.daemon [-] privsep: reply[6955a705-14ce-4aa2-aed8-e3034837628f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 08:11:54 compute-0 systemd[1]: run-netns-ovnmeta\x2ddda88d46\x2d9162\x2d4e7c\x2dbb47\x2d793ac4133966.mount: Deactivated successfully.
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.752 256736 INFO nova.virt.libvirt.driver [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Deleting instance files /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51_del
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.752 256736 INFO nova.virt.libvirt.driver [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Deletion of /var/lib/nova/instances/11d37006-0804-487e-93f1-217ea49e9a51_del complete
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.806 256736 INFO nova.compute.manager [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.807 256736 DEBUG oslo.service.loopingcall [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.807 256736 DEBUG nova.compute.manager [-] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 08:11:54 compute-0 nova_compute[256729]: 2025-11-29 08:11:54.807 256736 DEBUG nova.network.neutron [-] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 08:11:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2576604636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2576604636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e464 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:11:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 29 08:11:55 compute-0 ceph-mon[75050]: osdmap e464: 3 total, 3 up, 3 in
Nov 29 08:11:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2576604636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2576604636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:55 compute-0 nova_compute[256729]: 2025-11-29 08:11:55.918 256736 DEBUG nova.compute.manager [req-ebbe3477-4562-4ee6-a2c7-25cf6f0a04e1 req-c085b085-4a1c-4a5f-afb8-3818d48d7770 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-vif-unplugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:55 compute-0 nova_compute[256729]: 2025-11-29 08:11:55.918 256736 DEBUG oslo_concurrency.lockutils [req-ebbe3477-4562-4ee6-a2c7-25cf6f0a04e1 req-c085b085-4a1c-4a5f-afb8-3818d48d7770 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:55 compute-0 nova_compute[256729]: 2025-11-29 08:11:55.919 256736 DEBUG oslo_concurrency.lockutils [req-ebbe3477-4562-4ee6-a2c7-25cf6f0a04e1 req-c085b085-4a1c-4a5f-afb8-3818d48d7770 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:55 compute-0 nova_compute[256729]: 2025-11-29 08:11:55.919 256736 DEBUG oslo_concurrency.lockutils [req-ebbe3477-4562-4ee6-a2c7-25cf6f0a04e1 req-c085b085-4a1c-4a5f-afb8-3818d48d7770 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:55 compute-0 nova_compute[256729]: 2025-11-29 08:11:55.920 256736 DEBUG nova.compute.manager [req-ebbe3477-4562-4ee6-a2c7-25cf6f0a04e1 req-c085b085-4a1c-4a5f-afb8-3818d48d7770 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] No waiting events found dispatching network-vif-unplugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:55 compute-0 nova_compute[256729]: 2025-11-29 08:11:55.920 256736 DEBUG nova.compute.manager [req-ebbe3477-4562-4ee6-a2c7-25cf6f0a04e1 req-c085b085-4a1c-4a5f-afb8-3818d48d7770 ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-vif-unplugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 08:11:56 compute-0 ceph-mon[75050]: pgmap v2249: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 3.2 KiB/s wr, 56 op/s
Nov 29 08:11:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:56.453 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:11:56 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:56.456 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:11:56 compute-0 nova_compute[256729]: 2025-11-29 08:11:56.454 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:56 compute-0 nova_compute[256729]: 2025-11-29 08:11:56.701 256736 DEBUG nova.network.neutron [-] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 08:11:56 compute-0 nova_compute[256729]: 2025-11-29 08:11:56.723 256736 INFO nova.compute.manager [-] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Took 1.92 seconds to deallocate network for instance.
Nov 29 08:11:56 compute-0 nova_compute[256729]: 2025-11-29 08:11:56.891 256736 INFO nova.compute.manager [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Took 0.17 seconds to detach 1 volumes for instance.
Nov 29 08:11:56 compute-0 nova_compute[256729]: 2025-11-29 08:11:56.941 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:56 compute-0 nova_compute[256729]: 2025-11-29 08:11:56.942 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:57 compute-0 nova_compute[256729]: 2025-11-29 08:11:57.009 256736 DEBUG oslo_concurrency.processutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:11:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 4.5 KiB/s wr, 124 op/s
Nov 29 08:11:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:11:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934859089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:57 compute-0 nova_compute[256729]: 2025-11-29 08:11:57.478 256736 DEBUG oslo_concurrency.processutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:11:57 compute-0 nova_compute[256729]: 2025-11-29 08:11:57.489 256736 DEBUG nova.compute.provider_tree [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:11:57 compute-0 nova_compute[256729]: 2025-11-29 08:11:57.764 256736 DEBUG nova.scheduler.client.report [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:11:57 compute-0 nova_compute[256729]: 2025-11-29 08:11:57.801 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:57 compute-0 nova_compute[256729]: 2025-11-29 08:11:57.860 256736 INFO nova.scheduler.client.report [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Deleted allocations for instance 11d37006-0804-487e-93f1-217ea49e9a51
Nov 29 08:11:57 compute-0 nova_compute[256729]: 2025-11-29 08:11:57.922 256736 DEBUG oslo_concurrency.lockutils [None req-9ba61aa3-4969-4253-a90b-f602f0e47fdb 981b7946a749412f90d3d8148d99486a 062fa36b3fb745529eb64d4b5bb52af6 - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.137 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Nov 29 08:11:58 compute-0 ceph-mon[75050]: pgmap v2250: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 4.5 KiB/s wr, 124 op/s
Nov 29 08:11:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3934859089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:11:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.451 256736 DEBUG nova.compute.manager [req-fa6d4d84-a376-46be-aa50-0cef5397cbff req-67d6d34a-43cd-4f7b-88d2-0c4f572c353b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.451 256736 DEBUG oslo_concurrency.lockutils [req-fa6d4d84-a376-46be-aa50-0cef5397cbff req-67d6d34a-43cd-4f7b-88d2-0c4f572c353b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Acquiring lock "11d37006-0804-487e-93f1-217ea49e9a51-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.452 256736 DEBUG oslo_concurrency.lockutils [req-fa6d4d84-a376-46be-aa50-0cef5397cbff req-67d6d34a-43cd-4f7b-88d2-0c4f572c353b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.452 256736 DEBUG oslo_concurrency.lockutils [req-fa6d4d84-a376-46be-aa50-0cef5397cbff req-67d6d34a-43cd-4f7b-88d2-0c4f572c353b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] Lock "11d37006-0804-487e-93f1-217ea49e9a51-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.452 256736 DEBUG nova.compute.manager [req-fa6d4d84-a376-46be-aa50-0cef5397cbff req-67d6d34a-43cd-4f7b-88d2-0c4f572c353b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] No waiting events found dispatching network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.453 256736 WARNING nova.compute.manager [req-fa6d4d84-a376-46be-aa50-0cef5397cbff req-67d6d34a-43cd-4f7b-88d2-0c4f572c353b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received unexpected event network-vif-plugged-503caeb9-24dd-41d1-bcb9-da6866a4b3cd for instance with vm_state deleted and task_state None.
Nov 29 08:11:58 compute-0 nova_compute[256729]: 2025-11-29 08:11:58.453 256736 DEBUG nova.compute.manager [req-fa6d4d84-a376-46be-aa50-0cef5397cbff req-67d6d34a-43cd-4f7b-88d2-0c4f572c353b ca77b3e56af348af8de57bfb7a317099 6855accf5b834c9f9590367437c455bf - - default default] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Received event network-vif-deleted-503caeb9-24dd-41d1-bcb9-da6866a4b3cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 08:11:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Nov 29 08:11:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 5.4 KiB/s wr, 125 op/s
Nov 29 08:11:59 compute-0 ceph-mon[75050]: osdmap e465: 3 total, 3 up, 3 in
Nov 29 08:11:59 compute-0 nova_compute[256729]: 2025-11-29 08:11:59.508 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:11:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:11:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131542137' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:11:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:11:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3131542137' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:11:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:59.791 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:11:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:59.792 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:11:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:11:59.792 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/96089490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/96089490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Nov 29 08:12:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Nov 29 08:12:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Nov 29 08:12:00 compute-0 ceph-mon[75050]: pgmap v2252: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 5.4 KiB/s wr, 125 op/s
Nov 29 08:12:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3131542137' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3131542137' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/96089490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/96089490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:00 compute-0 ceph-mon[75050]: osdmap e466: 3 total, 3 up, 3 in
Nov 29 08:12:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106164204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106164204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 3.5 KiB/s wr, 96 op/s
Nov 29 08:12:01 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:12:01.458 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:12:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/106164204' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/106164204' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2765186519' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2765186519' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613540555' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613540555' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: pgmap v2254: 305 pgs: 305 active+clean; 287 MiB data, 676 MiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 3.5 KiB/s wr, 96 op/s
Nov 29 08:12:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2765186519' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2765186519' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1613540555' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1613540555' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:03 compute-0 nova_compute[256729]: 2025-11-29 08:12:03.140 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 279 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 6.2 KiB/s wr, 150 op/s
Nov 29 08:12:04 compute-0 ceph-mon[75050]: pgmap v2255: 305 pgs: 305 active+clean; 279 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 6.2 KiB/s wr, 150 op/s
Nov 29 08:12:04 compute-0 nova_compute[256729]: 2025-11-29 08:12:04.511 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 4.7 KiB/s wr, 115 op/s
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:12:05
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['backups', '.rgw.root', 'vms', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'images']
Nov 29 08:12:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:12:06 compute-0 nova_compute[256729]: 2025-11-29 08:12:06.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:06 compute-0 ceph-mgr[75345]: client.0 ms_handle_reset on v2:192.168.122.100:6800/878361048
Nov 29 08:12:06 compute-0 ceph-mon[75050]: pgmap v2256: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 4.7 KiB/s wr, 115 op/s
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:12:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 4.8 KiB/s wr, 117 op/s
Nov 29 08:12:08 compute-0 nova_compute[256729]: 2025-11-29 08:12:08.004 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:08 compute-0 nova_compute[256729]: 2025-11-29 08:12:08.172 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:08 compute-0 nova_compute[256729]: 2025-11-29 08:12:08.177 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:08 compute-0 ceph-mon[75050]: pgmap v2257: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 4.8 KiB/s wr, 117 op/s
Nov 29 08:12:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2578201922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2578201922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2578201922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2578201922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 3.3 KiB/s wr, 96 op/s
Nov 29 08:12:09 compute-0 nova_compute[256729]: 2025-11-29 08:12:09.475 256736 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403914.4745874, 11d37006-0804-487e-93f1-217ea49e9a51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 08:12:09 compute-0 nova_compute[256729]: 2025-11-29 08:12:09.476 256736 INFO nova.compute.manager [-] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] VM Stopped (Lifecycle Event)
Nov 29 08:12:09 compute-0 nova_compute[256729]: 2025-11-29 08:12:09.502 256736 DEBUG nova.compute.manager [None req-1ad23b30-0db5-4208-b5aa-b400f3902bc3 - - - - - -] [instance: 11d37006-0804-487e-93f1-217ea49e9a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 08:12:09 compute-0 nova_compute[256729]: 2025-11-29 08:12:09.513 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:10 compute-0 nova_compute[256729]: 2025-11-29 08:12:10.144 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:10 compute-0 nova_compute[256729]: 2025-11-29 08:12:10.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:10 compute-0 nova_compute[256729]: 2025-11-29 08:12:10.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:10 compute-0 ceph-mon[75050]: pgmap v2258: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 3.3 KiB/s wr, 96 op/s
Nov 29 08:12:11 compute-0 nova_compute[256729]: 2025-11-29 08:12:11.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.0 KiB/s wr, 87 op/s
Nov 29 08:12:12 compute-0 nova_compute[256729]: 2025-11-29 08:12:12.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:12 compute-0 nova_compute[256729]: 2025-11-29 08:12:12.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:12:12 compute-0 nova_compute[256729]: 2025-11-29 08:12:12.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:12:12 compute-0 nova_compute[256729]: 2025-11-29 08:12:12.183 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:12:12 compute-0 nova_compute[256729]: 2025-11-29 08:12:12.184 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:12 compute-0 nova_compute[256729]: 2025-11-29 08:12:12.184 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:12:12 compute-0 ceph-mon[75050]: pgmap v2259: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.0 KiB/s wr, 87 op/s
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.174 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.180 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.181 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.181 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.182 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.183 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1154864542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 2.9 KiB/s wr, 80 op/s
Nov 29 08:12:13 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1154864542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:13 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232766941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.670 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.942 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.944 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4308MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.945 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:13 compute-0 nova_compute[256729]: 2025-11-29 08:12:13.945 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.018 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.018 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.041 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:12:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Nov 29 08:12:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Nov 29 08:12:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Nov 29 08:12:14 compute-0 ceph-mon[75050]: pgmap v2260: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 2.9 KiB/s wr, 80 op/s
Nov 29 08:12:14 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2232766941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.516 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:12:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2915265228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.573 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.580 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.602 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.621 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:12:14 compute-0 nova_compute[256729]: 2025-11-29 08:12:14.622 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:12:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 818 B/s wr, 13 op/s
Nov 29 08:12:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Nov 29 08:12:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Nov 29 08:12:15 compute-0 ceph-mon[75050]: osdmap e467: 3 total, 3 up, 3 in
Nov 29 08:12:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2915265228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:12:15 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894585429283063 of space, bias 1.0, pg target 0.8683756287849189 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:12:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:12:16 compute-0 ceph-mon[75050]: pgmap v2262: 305 pgs: 305 active+clean; 271 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 818 B/s wr, 13 op/s
Nov 29 08:12:16 compute-0 ceph-mon[75050]: osdmap e468: 3 total, 3 up, 3 in
Nov 29 08:12:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 29 08:12:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Nov 29 08:12:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Nov 29 08:12:17 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Nov 29 08:12:17 compute-0 podman[304858]: 2025-11-29 08:12:17.748309435 +0000 UTC m=+0.099774765 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:12:17 compute-0 podman[304859]: 2025-11-29 08:12:17.749733074 +0000 UTC m=+0.090531679 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:12:17 compute-0 podman[304857]: 2025-11-29 08:12:17.829781253 +0000 UTC m=+0.182458486 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:12:18 compute-0 nova_compute[256729]: 2025-11-29 08:12:18.177 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:18 compute-0 ceph-mon[75050]: pgmap v2264: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Nov 29 08:12:18 compute-0 ceph-mon[75050]: osdmap e469: 3 total, 3 up, 3 in
Nov 29 08:12:18 compute-0 nova_compute[256729]: 2025-11-29 08:12:18.622 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:12:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.8 KiB/s wr, 42 op/s
Nov 29 08:12:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Nov 29 08:12:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Nov 29 08:12:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Nov 29 08:12:19 compute-0 nova_compute[256729]: 2025-11-29 08:12:19.520 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178500846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:19 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178500846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:20 compute-0 ceph-mon[75050]: pgmap v2266: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.8 KiB/s wr, 42 op/s
Nov 29 08:12:20 compute-0 ceph-mon[75050]: osdmap e470: 3 total, 3 up, 3 in
Nov 29 08:12:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4178500846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:20 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4178500846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:20 compute-0 sudo[304920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:20 compute-0 sudo[304920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:20 compute-0 sudo[304920]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:20 compute-0 sudo[304945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:20 compute-0 sudo[304945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:20 compute-0 sudo[304945]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:21 compute-0 sudo[304970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:21 compute-0 sudo[304970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:21 compute-0 sudo[304970]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:21 compute-0 sudo[304995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:12:21 compute-0 sudo[304995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Nov 29 08:12:21 compute-0 sudo[304995]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:12:21 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev c015810a-cb1f-486a-ad32-5782e982e311 does not exist
Nov 29 08:12:21 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev dee9e970-9262-4ea5-8f53-73efe99a06d2 does not exist
Nov 29 08:12:21 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ad68bf92-8b38-4fb5-8d73-fd675def4f31 does not exist
Nov 29 08:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:12:21 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:12:21 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:21 compute-0 sudo[305052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:21 compute-0 sudo[305052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:21 compute-0 sudo[305052]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:21 compute-0 sudo[305077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:22 compute-0 sudo[305077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:22 compute-0 sudo[305077]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:22 compute-0 sudo[305102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:22 compute-0 sudo[305102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:22 compute-0 sudo[305102]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:22 compute-0 sudo[305127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:12:22 compute-0 sudo[305127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:22 compute-0 ceph-mon[75050]: pgmap v2268: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Nov 29 08:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:12:22 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:12:22 compute-0 podman[305194]: 2025-11-29 08:12:22.523056416 +0000 UTC m=+0.050968377 container create c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:12:22 compute-0 systemd[1]: Started libpod-conmon-c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15.scope.
Nov 29 08:12:22 compute-0 podman[305194]: 2025-11-29 08:12:22.496306408 +0000 UTC m=+0.024218429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:22 compute-0 podman[305194]: 2025-11-29 08:12:22.634530273 +0000 UTC m=+0.162442214 container init c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 08:12:22 compute-0 podman[305194]: 2025-11-29 08:12:22.641886826 +0000 UTC m=+0.169798767 container start c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:22 compute-0 podman[305194]: 2025-11-29 08:12:22.645181717 +0000 UTC m=+0.173093698 container attach c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:12:22 compute-0 suspicious_neumann[305210]: 167 167
Nov 29 08:12:22 compute-0 systemd[1]: libpod-c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15.scope: Deactivated successfully.
Nov 29 08:12:22 compute-0 conmon[305210]: conmon c0f5b15706dc894a2e7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15.scope/container/memory.events
Nov 29 08:12:22 compute-0 podman[305215]: 2025-11-29 08:12:22.717202204 +0000 UTC m=+0.042994918 container died c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-40040038e7b190247d7296dd13a49261c1d5f1bf8b2ec704d4d39c1ba34e14a3-merged.mount: Deactivated successfully.
Nov 29 08:12:22 compute-0 podman[305215]: 2025-11-29 08:12:22.770119644 +0000 UTC m=+0.095912368 container remove c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:12:22 compute-0 systemd[1]: libpod-conmon-c0f5b15706dc894a2e7ef2ca647f6574e0ac78ed6f03f361ca70c42a0afd1f15.scope: Deactivated successfully.
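The short-lived container above (suspicious_neumann, whose only output is "167 167") appears to be cephadm probing the ceph UID and GID inside the image before launching the real ceph-volume run; 167:167 is the ceph user and group in the upstream image. A minimal Python sketch of an equivalent probe, assuming podman is on PATH and that stat-ing /var/lib/ceph inside the image is an acceptable stand-in for cephadm's internal logic:

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    # Hypothetical re-creation of the uid/gid probe: run a throwaway container
    # and stat a path owned by the ceph user inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    uid, gid = out.split()  # expected "167 167", matching the container output above
    print(uid, gid)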
Nov 29 08:12:23 compute-0 podman[305239]: 2025-11-29 08:12:23.001249483 +0000 UTC m=+0.059300577 container create 4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:12:23 compute-0 systemd[1]: Started libpod-conmon-4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c.scope.
Nov 29 08:12:23 compute-0 podman[305239]: 2025-11-29 08:12:22.981279572 +0000 UTC m=+0.039330706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bbd15cd355a672488c73eaff51453b235bb56a9eae3f955326f371aa04f8c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bbd15cd355a672488c73eaff51453b235bb56a9eae3f955326f371aa04f8c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bbd15cd355a672488c73eaff51453b235bb56a9eae3f955326f371aa04f8c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bbd15cd355a672488c73eaff51453b235bb56a9eae3f955326f371aa04f8c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bbd15cd355a672488c73eaff51453b235bb56a9eae3f955326f371aa04f8c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:23 compute-0 podman[305239]: 2025-11-29 08:12:23.117205633 +0000 UTC m=+0.175256767 container init 4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_napier, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:12:23 compute-0 podman[305239]: 2025-11-29 08:12:23.1337619 +0000 UTC m=+0.191812984 container start 4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:12:23 compute-0 podman[305239]: 2025-11-29 08:12:23.1373898 +0000 UTC m=+0.195440894 container attach 4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_napier, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 08:12:23 compute-0 nova_compute[256729]: 2025-11-29 08:12:23.202 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.3 KiB/s wr, 74 op/s
Nov 29 08:12:24 compute-0 pedantic_napier[305255]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:12:24 compute-0 pedantic_napier[305255]: --> relative data size: 1.0
Nov 29 08:12:24 compute-0 pedantic_napier[305255]: --> All data devices are unavailable
Nov 29 08:12:24 compute-0 systemd[1]: libpod-4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c.scope: Deactivated successfully.
Nov 29 08:12:24 compute-0 podman[305239]: 2025-11-29 08:12:24.224268055 +0000 UTC m=+1.282319159 container died 4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_napier, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:12:24 compute-0 systemd[1]: libpod-4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c.scope: Consumed 1.035s CPU time.
Nov 29 08:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0bbd15cd355a672488c73eaff51453b235bb56a9eae3f955326f371aa04f8c3-merged.mount: Deactivated successfully.
Nov 29 08:12:24 compute-0 podman[305239]: 2025-11-29 08:12:24.305040434 +0000 UTC m=+1.363091518 container remove 4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_napier, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:24 compute-0 systemd[1]: libpod-conmon-4f4276bb7f24f23688def05204659fcd4ed72ec78695bc9e6ca2025031595b1c.scope: Deactivated successfully.
Nov 29 08:12:24 compute-0 sudo[305127]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:24 compute-0 sudo[305296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:24 compute-0 sudo[305296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:24 compute-0 sudo[305296]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:24 compute-0 sudo[305321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:24 compute-0 sudo[305321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:24 compute-0 sudo[305321]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:24 compute-0 ceph-mon[75050]: pgmap v2269: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.3 KiB/s wr, 74 op/s
Nov 29 08:12:24 compute-0 nova_compute[256729]: 2025-11-29 08:12:24.521 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:24 compute-0 sudo[305346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:24 compute-0 sudo[305346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:24 compute-0 sudo[305346]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:24 compute-0 sudo[305371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:12:24 compute-0 sudo[305371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:24 compute-0 podman[305436]: 2025-11-29 08:12:24.948391009 +0000 UTC m=+0.037868616 container create 8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 08:12:24 compute-0 systemd[1]: Started libpod-conmon-8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9.scope.
Nov 29 08:12:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:25 compute-0 podman[305436]: 2025-11-29 08:12:25.02667406 +0000 UTC m=+0.116151677 container init 8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:12:25 compute-0 podman[305436]: 2025-11-29 08:12:24.932077059 +0000 UTC m=+0.021554676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:25 compute-0 podman[305436]: 2025-11-29 08:12:25.03211222 +0000 UTC m=+0.121589817 container start 8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:12:25 compute-0 podman[305436]: 2025-11-29 08:12:25.035041371 +0000 UTC m=+0.124518988 container attach 8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:12:25 compute-0 bold_mendeleev[305452]: 167 167
Nov 29 08:12:25 compute-0 systemd[1]: libpod-8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9.scope: Deactivated successfully.
Nov 29 08:12:25 compute-0 podman[305436]: 2025-11-29 08:12:25.038899777 +0000 UTC m=+0.128377384 container died 8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0882a8f5e0686d9d49aa6391f02001e11e2dcb102ea92ac3966c3a19dd2beb-merged.mount: Deactivated successfully.
Nov 29 08:12:25 compute-0 podman[305436]: 2025-11-29 08:12:25.071501757 +0000 UTC m=+0.160979364 container remove 8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:12:25 compute-0 systemd[1]: libpod-conmon-8595033bd4ad0b0dd0de11a2ef23c27524ecbc0789263e68f4c387c6d22ffba9.scope: Deactivated successfully.
Nov 29 08:12:25 compute-0 podman[305477]: 2025-11-29 08:12:25.22236185 +0000 UTC m=+0.041795094 container create 58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 08:12:25 compute-0 systemd[1]: Started libpod-conmon-58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776.scope.
Nov 29 08:12:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Nov 29 08:12:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Nov 29 08:12:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.5 KiB/s wr, 50 op/s
Nov 29 08:12:25 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Nov 29 08:12:25 compute-0 podman[305477]: 2025-11-29 08:12:25.202054899 +0000 UTC m=+0.021488153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a3c25a9f6f7cd83d345b777bcde2cdce3cb48940da3cd0ed1b6d01e1636089/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a3c25a9f6f7cd83d345b777bcde2cdce3cb48940da3cd0ed1b6d01e1636089/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a3c25a9f6f7cd83d345b777bcde2cdce3cb48940da3cd0ed1b6d01e1636089/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a3c25a9f6f7cd83d345b777bcde2cdce3cb48940da3cd0ed1b6d01e1636089/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:25 compute-0 podman[305477]: 2025-11-29 08:12:25.321052444 +0000 UTC m=+0.140485668 container init 58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:12:25 compute-0 podman[305477]: 2025-11-29 08:12:25.334602958 +0000 UTC m=+0.154036212 container start 58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:25 compute-0 podman[305477]: 2025-11-29 08:12:25.339683558 +0000 UTC m=+0.159116802 container attach 58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]: {
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:     "0": [
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:         {
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "devices": [
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "/dev/loop3"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             ],
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_name": "ceph_lv0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_size": "21470642176",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "name": "ceph_lv0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "tags": {
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cluster_name": "ceph",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.crush_device_class": "",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.encrypted": "0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osd_id": "0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.type": "block",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.vdo": "0"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             },
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "type": "block",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "vg_name": "ceph_vg0"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:         }
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:     ],
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:     "1": [
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:         {
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "devices": [
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "/dev/loop4"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             ],
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_name": "ceph_lv1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_size": "21470642176",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "name": "ceph_lv1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "tags": {
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cluster_name": "ceph",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.crush_device_class": "",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.encrypted": "0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osd_id": "1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.type": "block",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.vdo": "0"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             },
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "type": "block",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "vg_name": "ceph_vg1"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:         }
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:     ],
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:     "2": [
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:         {
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "devices": [
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "/dev/loop5"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             ],
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_name": "ceph_lv2",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_size": "21470642176",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "name": "ceph_lv2",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "tags": {
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.cluster_name": "ceph",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.crush_device_class": "",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.encrypted": "0",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osd_id": "2",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.type": "block",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:                 "ceph.vdo": "0"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             },
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "type": "block",
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:             "vg_name": "ceph_vg2"
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:         }
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]:     ]
Nov 29 08:12:26 compute-0 ecstatic_dijkstra[305495]: }
Nov 29 08:12:26 compute-0 systemd[1]: libpod-58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776.scope: Deactivated successfully.
Nov 29 08:12:26 compute-0 conmon[305495]: conmon 58658b0bcf60349f7d61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776.scope/container/memory.events
Nov 29 08:12:26 compute-0 podman[305477]: 2025-11-29 08:12:26.19600564 +0000 UTC m=+1.015438894 container died 58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-68a3c25a9f6f7cd83d345b777bcde2cdce3cb48940da3cd0ed1b6d01e1636089-merged.mount: Deactivated successfully.
Nov 29 08:12:26 compute-0 podman[305477]: 2025-11-29 08:12:26.26592972 +0000 UTC m=+1.085362974 container remove 58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dijkstra, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:12:26 compute-0 systemd[1]: libpod-conmon-58658b0bcf60349f7d61664044c6780eec1debc5349b78b602c875b8caa2d776.scope: Deactivated successfully.
Nov 29 08:12:26 compute-0 sudo[305371]: pam_unix(sudo:session): session closed for user root
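The JSON block printed by ecstatic_dijkstra above is the output of the `ceph-volume lvm list --format json` call dispatched at 08:12:24: it maps OSD ids 0-2 to the logical volumes ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, each already tagged with the cluster fsid and an osd_fsid. That also explains why the earlier `lvm batch` run (pedantic_napier) reported "All data devices are unavailable": all three candidate LVs already carry OSD data, so there was nothing left to prepare. A small sketch of reducing that JSON to an osd_id -> device map, assuming the output above has been captured to lvm_list.json:

    import json

    # Parse the `ceph-volume lvm list --format json` output captured above;
    # keys are OSD ids, values are lists of LV records.
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            if lv.get("type") == "block":
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")

    # Expected output for the log above:
    #   osd.0: /dev/ceph_vg0/ceph_lv0 (osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c)
    #   osd.1: /dev/ceph_vg1/ceph_lv1 (osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d)
    #   osd.2: /dev/ceph_vg2/ceph_lv2 (osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48)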
Nov 29 08:12:26 compute-0 ceph-mon[75050]: pgmap v2271: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.5 KiB/s wr, 50 op/s
Nov 29 08:12:26 compute-0 ceph-mon[75050]: osdmap e471: 3 total, 3 up, 3 in
Nov 29 08:12:26 compute-0 sudo[305516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:26 compute-0 sudo[305516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:26 compute-0 sudo[305516]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:26 compute-0 sudo[305541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:12:26 compute-0 sudo[305541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:26 compute-0 sudo[305541]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:26 compute-0 sudo[305566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:26 compute-0 sudo[305566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:26 compute-0 sudo[305566]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:26 compute-0 sudo[305591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:12:26 compute-0 sudo[305591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:27 compute-0 podman[305655]: 2025-11-29 08:12:27.017718068 +0000 UTC m=+0.066109366 container create e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_snyder, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:12:27 compute-0 systemd[1]: Started libpod-conmon-e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df.scope.
Nov 29 08:12:27 compute-0 podman[305655]: 2025-11-29 08:12:26.988001247 +0000 UTC m=+0.036392595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:27 compute-0 podman[305655]: 2025-11-29 08:12:27.102836546 +0000 UTC m=+0.151227804 container init e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_snyder, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 08:12:27 compute-0 podman[305655]: 2025-11-29 08:12:27.111556887 +0000 UTC m=+0.159948145 container start e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 08:12:27 compute-0 podman[305655]: 2025-11-29 08:12:27.114411826 +0000 UTC m=+0.162803124 container attach e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_snyder, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:27 compute-0 vigorous_snyder[305671]: 167 167
Nov 29 08:12:27 compute-0 systemd[1]: libpod-e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df.scope: Deactivated successfully.
Nov 29 08:12:27 compute-0 podman[305655]: 2025-11-29 08:12:27.119877957 +0000 UTC m=+0.168269255 container died e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_snyder, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 08:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ce18a5fce114e4a4c7a4d911612f1b9a4fffa3b32e0e512084e41c3226df94d-merged.mount: Deactivated successfully.
Nov 29 08:12:27 compute-0 podman[305655]: 2025-11-29 08:12:27.156869487 +0000 UTC m=+0.205260755 container remove e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_snyder, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:27 compute-0 systemd[1]: libpod-conmon-e376433c7a3f791a59d1281c2b45e33eff9a6799bfa9e4e835477c82b5c794df.scope: Deactivated successfully.
Nov 29 08:12:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Nov 29 08:12:27 compute-0 podman[305698]: 2025-11-29 08:12:27.366128912 +0000 UTC m=+0.051376948 container create e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 08:12:27 compute-0 systemd[1]: Started libpod-conmon-e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7.scope.
Nov 29 08:12:27 compute-0 podman[305698]: 2025-11-29 08:12:27.344412913 +0000 UTC m=+0.029660959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:12:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0234d6b96504262aa6c91d857558caf73d0d96d8f3e20616fe6a1c5f940e03f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0234d6b96504262aa6c91d857558caf73d0d96d8f3e20616fe6a1c5f940e03f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0234d6b96504262aa6c91d857558caf73d0d96d8f3e20616fe6a1c5f940e03f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0234d6b96504262aa6c91d857558caf73d0d96d8f3e20616fe6a1c5f940e03f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:12:27 compute-0 podman[305698]: 2025-11-29 08:12:27.468686533 +0000 UTC m=+0.153934579 container init e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:27 compute-0 podman[305698]: 2025-11-29 08:12:27.481460305 +0000 UTC m=+0.166708331 container start e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:12:27 compute-0 podman[305698]: 2025-11-29 08:12:27.485111426 +0000 UTC m=+0.170359442 container attach e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:12:27 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 08:12:27 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4272333394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:28 compute-0 nova_compute[256729]: 2025-11-29 08:12:28.205 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Nov 29 08:12:28 compute-0 ceph-mon[75050]: pgmap v2272: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Nov 29 08:12:28 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4272333394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 08:12:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Nov 29 08:12:28 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Nov 29 08:12:28 compute-0 serene_jemison[305714]: {
Nov 29 08:12:28 compute-0 serene_jemison[305714]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "osd_id": 2,
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "type": "bluestore"
Nov 29 08:12:28 compute-0 serene_jemison[305714]:     },
Nov 29 08:12:28 compute-0 serene_jemison[305714]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "osd_id": 1,
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "type": "bluestore"
Nov 29 08:12:28 compute-0 serene_jemison[305714]:     },
Nov 29 08:12:28 compute-0 serene_jemison[305714]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "osd_id": 0,
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:12:28 compute-0 serene_jemison[305714]:         "type": "bluestore"
Nov 29 08:12:28 compute-0 serene_jemison[305714]:     }
Nov 29 08:12:28 compute-0 serene_jemison[305714]: }
Nov 29 08:12:28 compute-0 systemd[1]: libpod-e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7.scope: Deactivated successfully.
Nov 29 08:12:28 compute-0 podman[305698]: 2025-11-29 08:12:28.550909059 +0000 UTC m=+1.236157075 container died e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 08:12:28 compute-0 systemd[1]: libpod-e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7.scope: Consumed 1.075s CPU time.
Nov 29 08:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0234d6b96504262aa6c91d857558caf73d0d96d8f3e20616fe6a1c5f940e03f2-merged.mount: Deactivated successfully.
Nov 29 08:12:28 compute-0 podman[305698]: 2025-11-29 08:12:28.618208547 +0000 UTC m=+1.303456593 container remove e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:12:28 compute-0 systemd[1]: libpod-conmon-e33e18faae7432913cbfe9d9bc8743fc9b9f399eec5a84ec12d6fac4a5cc8bc7.scope: Deactivated successfully.
Nov 29 08:12:28 compute-0 sudo[305591]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:12:28 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:12:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:12:28 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:12:28 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 6fbb6bb1-2015-42dd-ba23-0a5c1b0816ee does not exist
Nov 29 08:12:28 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b6a93dd4-b3bd-4b7d-a0da-606c723db794 does not exist
Nov 29 08:12:28 compute-0 sudo[305760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:12:28 compute-0 sudo[305760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:28 compute-0 sudo[305760]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:28 compute-0 sudo[305785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:12:28 compute-0 sudo[305785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:12:28 compute-0 sudo[305785]: pam_unix(sudo:session): session closed for user root
Nov 29 08:12:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Nov 29 08:12:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Nov 29 08:12:29 compute-0 ceph-mon[75050]: osdmap e472: 3 total, 3 up, 3 in
Nov 29 08:12:29 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:12:29 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:12:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Nov 29 08:12:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Nov 29 08:12:29 compute-0 nova_compute[256729]: 2025-11-29 08:12:29.523 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Nov 29 08:12:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Nov 29 08:12:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Nov 29 08:12:30 compute-0 ceph-mon[75050]: pgmap v2274: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Nov 29 08:12:30 compute-0 ceph-mon[75050]: osdmap e473: 3 total, 3 up, 3 in
Nov 29 08:12:30 compute-0 ceph-mon[75050]: osdmap e474: 3 total, 3 up, 3 in
Nov 29 08:12:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 08:12:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Nov 29 08:12:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Nov 29 08:12:31 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Nov 29 08:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Nov 29 08:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Nov 29 08:12:32 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Nov 29 08:12:32 compute-0 ceph-mon[75050]: pgmap v2277: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Nov 29 08:12:32 compute-0 ceph-mon[75050]: osdmap e475: 3 total, 3 up, 3 in
Nov 29 08:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/455910545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:32 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:32 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/455910545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:33 compute-0 nova_compute[256729]: 2025-11-29 08:12:33.209 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 76 op/s
Nov 29 08:12:33 compute-0 ceph-mon[75050]: osdmap e476: 3 total, 3 up, 3 in
Nov 29 08:12:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/455910545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:33 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/455910545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:34 compute-0 ceph-mon[75050]: pgmap v2280: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 76 op/s
Nov 29 08:12:34 compute-0 nova_compute[256729]: 2025-11-29 08:12:34.526 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 3.4 KiB/s wr, 85 op/s
Nov 29 08:12:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:12:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:12:36 compute-0 ceph-mon[75050]: pgmap v2281: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 3.4 KiB/s wr, 85 op/s
Nov 29 08:12:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 KiB/s wr, 78 op/s
Nov 29 08:12:38 compute-0 nova_compute[256729]: 2025-11-29 08:12:38.211 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:38 compute-0 ceph-mon[75050]: pgmap v2282: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 KiB/s wr, 78 op/s
Nov 29 08:12:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Nov 29 08:12:39 compute-0 nova_compute[256729]: 2025-11-29 08:12:39.528 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Nov 29 08:12:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Nov 29 08:12:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Nov 29 08:12:40 compute-0 ceph-mon[75050]: pgmap v2283: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 2.9 KiB/s wr, 68 op/s
Nov 29 08:12:40 compute-0 ceph-mon[75050]: osdmap e477: 3 total, 3 up, 3 in
Nov 29 08:12:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 803 B/s wr, 27 op/s
Nov 29 08:12:42 compute-0 ceph-mon[75050]: pgmap v2285: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 803 B/s wr, 27 op/s
Nov 29 08:12:43 compute-0 nova_compute[256729]: 2025-11-29 08:12:43.212 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 716 B/s wr, 24 op/s
Nov 29 08:12:43 compute-0 ovn_controller[153383]: 2025-11-29T08:12:43Z|00300|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 29 08:12:44 compute-0 nova_compute[256729]: 2025-11-29 08:12:44.529 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:44 compute-0 ceph-mon[75050]: pgmap v2286: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 716 B/s wr, 24 op/s
Nov 29 08:12:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 08:12:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e477 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Nov 29 08:12:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Nov 29 08:12:45 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Nov 29 08:12:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Nov 29 08:12:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Nov 29 08:12:46 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Nov 29 08:12:46 compute-0 ceph-mon[75050]: pgmap v2287: 305 pgs: 305 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 511 B/s wr, 4 op/s
Nov 29 08:12:46 compute-0 ceph-mon[75050]: osdmap e478: 3 total, 3 up, 3 in
Nov 29 08:12:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 25 op/s
Nov 29 08:12:47 compute-0 ceph-mon[75050]: osdmap e479: 3 total, 3 up, 3 in
Nov 29 08:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2841664597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2841664597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:48 compute-0 nova_compute[256729]: 2025-11-29 08:12:48.214 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:48 compute-0 podman[305811]: 2025-11-29 08:12:48.712738442 +0000 UTC m=+0.072096831 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 08:12:48 compute-0 podman[305812]: 2025-11-29 08:12:48.727932501 +0000 UTC m=+0.088376110 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 08:12:48 compute-0 podman[305810]: 2025-11-29 08:12:48.769850638 +0000 UTC m=+0.137554917 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:12:48 compute-0 ceph-mon[75050]: pgmap v2290: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 25 op/s
Nov 29 08:12:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2841664597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2841664597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Nov 29 08:12:49 compute-0 nova_compute[256729]: 2025-11-29 08:12:49.531 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e479 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Nov 29 08:12:50 compute-0 ceph-mon[75050]: pgmap v2291: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Nov 29 08:12:51 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Nov 29 08:12:51 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Nov 29 08:12:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.0 KiB/s wr, 37 op/s
Nov 29 08:12:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Nov 29 08:12:52 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Nov 29 08:12:52 compute-0 ceph-mon[75050]: osdmap e480: 3 total, 3 up, 3 in
Nov 29 08:12:52 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Nov 29 08:12:53 compute-0 ceph-mon[75050]: pgmap v2293: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 271 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.0 KiB/s wr, 37 op/s
Nov 29 08:12:53 compute-0 ceph-mon[75050]: osdmap e481: 3 total, 3 up, 3 in
Nov 29 08:12:53 compute-0 nova_compute[256729]: 2025-11-29 08:12:53.216 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 3.8 KiB/s wr, 79 op/s
Nov 29 08:12:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3743805428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3743805428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3743805428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3743805428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:54 compute-0 nova_compute[256729]: 2025-11-29 08:12:54.537 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:55 compute-0 ceph-mon[75050]: pgmap v2295: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 3.8 KiB/s wr, 79 op/s
Nov 29 08:12:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 KiB/s wr, 48 op/s
Nov 29 08:12:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:12:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Nov 29 08:12:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Nov 29 08:12:55 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Nov 29 08:12:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Nov 29 08:12:56 compute-0 ceph-mon[75050]: pgmap v2296: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 KiB/s wr, 48 op/s
Nov 29 08:12:56 compute-0 ceph-mon[75050]: osdmap e482: 3 total, 3 up, 3 in
Nov 29 08:12:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Nov 29 08:12:56 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Nov 29 08:12:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 96 op/s
Nov 29 08:12:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Nov 29 08:12:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Nov 29 08:12:57 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Nov 29 08:12:57 compute-0 ceph-mon[75050]: osdmap e483: 3 total, 3 up, 3 in
Nov 29 08:12:58 compute-0 nova_compute[256729]: 2025-11-29 08:12:58.219 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:58 compute-0 ceph-mon[75050]: pgmap v2299: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 96 op/s
Nov 29 08:12:58 compute-0 ceph-mon[75050]: osdmap e484: 3 total, 3 up, 3 in
Nov 29 08:12:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:12:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250809525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:12:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250809525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.5 KiB/s wr, 50 op/s
Nov 29 08:12:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2250809525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:12:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2250809525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:12:59 compute-0 nova_compute[256729]: 2025-11-29 08:12:59.549 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:12:59.792 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:12:59.793 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:12:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:12:59.793 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Nov 29 08:13:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Nov 29 08:13:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Nov 29 08:13:00 compute-0 ceph-mon[75050]: pgmap v2301: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.5 KiB/s wr, 50 op/s
Nov 29 08:13:00 compute-0 ceph-mon[75050]: osdmap e485: 3 total, 3 up, 3 in
Nov 29 08:13:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 43 op/s
Nov 29 08:13:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Nov 29 08:13:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Nov 29 08:13:02 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Nov 29 08:13:02 compute-0 ceph-mon[75050]: pgmap v2303: 305 pgs: 305 active+clean; 271 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 43 op/s
Nov 29 08:13:03 compute-0 nova_compute[256729]: 2025-11-29 08:13:03.144 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:03 compute-0 nova_compute[256729]: 2025-11-29 08:13:03.236 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 KiB/s wr, 51 op/s
Nov 29 08:13:03 compute-0 ceph-mon[75050]: osdmap e486: 3 total, 3 up, 3 in
Nov 29 08:13:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Nov 29 08:13:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Nov 29 08:13:04 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Nov 29 08:13:04 compute-0 ceph-mon[75050]: pgmap v2305: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 KiB/s wr, 51 op/s
Nov 29 08:13:04 compute-0 nova_compute[256729]: 2025-11-29 08:13:04.587 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:13:05 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 113K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.75 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 33.49 MB, 0.06 MB/s
                                           Interval WAL: 10K writes, 4444 syncs, 2.41 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/24008183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:05 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/24008183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 KiB/s wr, 54 op/s
Nov 29 08:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e487 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Nov 29 08:13:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Nov 29 08:13:05 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.323548) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403985323668, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 2121, "num_deletes": 272, "total_data_size": 2924568, "memory_usage": 2984136, "flush_reason": "Manual Compaction"}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403985346668, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 2873937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39463, "largest_seqno": 41583, "table_properties": {"data_size": 2864050, "index_size": 6317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 21080, "raw_average_key_size": 21, "raw_value_size": 2844012, "raw_average_value_size": 2852, "num_data_blocks": 275, "num_entries": 997, "num_filter_entries": 997, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403843, "oldest_key_time": 1764403843, "file_creation_time": 1764403985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 23193 microseconds, and 11907 cpu microseconds.
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.346756) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 2873937 bytes OK
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.346786) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.348698) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.348715) EVENT_LOG_v1 {"time_micros": 1764403985348710, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.348735) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2915341, prev total WAL file size 2915341, number of live WAL files 2.
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.349752) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323539' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(2806KB)], [83(9225KB)]
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403985349826, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 12320988, "oldest_snapshot_seqno": -1}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7517 keys, 12167164 bytes, temperature: kUnknown
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403985451830, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 12167164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12110708, "index_size": 36568, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 191065, "raw_average_key_size": 25, "raw_value_size": 11969571, "raw_average_value_size": 1592, "num_data_blocks": 1457, "num_entries": 7517, "num_filter_entries": 7517, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764403985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.452239) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 12167164 bytes
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.454606) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.6 rd, 119.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.5) write-amplify(4.2) OK, records in: 8062, records dropped: 545 output_compression: NoCompression
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.454635) EVENT_LOG_v1 {"time_micros": 1764403985454622, "job": 48, "event": "compaction_finished", "compaction_time_micros": 102169, "compaction_time_cpu_micros": 52912, "output_level": 6, "num_output_files": 1, "total_output_size": 12167164, "num_input_records": 8062, "num_output_records": 7517, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403985456075, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403985459260, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.349617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.459365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.459371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.459374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.459377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:05 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:05.459380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:05 compute-0 ceph-mon[75050]: osdmap e487: 3 total, 3 up, 3 in
Nov 29 08:13:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/24008183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/24008183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:05 compute-0 ceph-mon[75050]: osdmap e488: 3 total, 3 up, 3 in
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:13:05
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr']
Nov 29 08:13:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:13:06 compute-0 ceph-mon[75050]: pgmap v2307: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 KiB/s wr, 54 op/s
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:13:07 compute-0 nova_compute[256729]: 2025-11-29 08:13:07.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:13:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.7 KiB/s wr, 97 op/s
Nov 29 08:13:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Nov 29 08:13:07 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Nov 29 08:13:07 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Nov 29 08:13:08 compute-0 nova_compute[256729]: 2025-11-29 08:13:08.296 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:08 compute-0 ceph-mon[75050]: pgmap v2309: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.7 KiB/s wr, 97 op/s
Nov 29 08:13:08 compute-0 ceph-mon[75050]: osdmap e489: 3 total, 3 up, 3 in
Nov 29 08:13:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3381967590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3381967590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 08:13:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Nov 29 08:13:09 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Nov 29 08:13:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3381967590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3381967590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
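The recurring df and "osd pool get-quota" dispatches from client.openstack are periodic capacity polls against the volumes pool. A sketch of issuing the same mon commands through the python-rados binding, assuming python3-rados is installed and the client.openstack keyring is readable:

    # Sketch: the librados call path that produces audit entries like those above.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    if ret == 0:
        print(json.loads(outbuf)["stats"]["total_avail_bytes"])
    cluster.shutdown()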
Nov 29 08:13:09 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Nov 29 08:13:09 compute-0 nova_compute[256729]: 2025-11-29 08:13:09.624 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Nov 29 08:13:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Nov 29 08:13:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Nov 29 08:13:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2242229299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:10 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2242229299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:13:10 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.3 total, 600.0 interval
                                           Cumulative writes: 28K writes, 114K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.80 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8583 writes, 38K keys, 8583 commit groups, 1.0 writes per commit group, ingest: 28.44 MB, 0.05 MB/s
                                           Interval WAL: 8583 writes, 3572 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
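Every 600 s each OSD's embedded RocksDB dumps these cumulative and interval write/WAL statistics (note the 3600.3 s uptime and 600.0 s interval above). A small parsing aid for journal captures shaped like that block; the field layout is taken directly from it:

    # Sketch: pull the ingest MB/s figures out of a captured "DB Stats" dump.
    import re

    RATE = re.compile(r"(Cumulative|Interval) (writes|WAL):.*?([\d.]+) MB/s")

    def ingest_rates(text):
        # e.g. {("Cumulative", "writes"): 0.02, ("Interval", "WAL"): 0.05, ...}
        return {(kind, what): float(rate)
                for kind, what, rate in RATE.findall(text)}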
Nov 29 08:13:10 compute-0 ceph-mon[75050]: pgmap v2311: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 60 op/s
Nov 29 08:13:10 compute-0 ceph-mon[75050]: osdmap e490: 3 total, 3 up, 3 in
Nov 29 08:13:10 compute-0 ceph-mon[75050]: osdmap e491: 3 total, 3 up, 3 in
Nov 29 08:13:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2242229299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:10 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2242229299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:11 compute-0 nova_compute[256729]: 2025-11-29 08:13:11.143 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:11 compute-0 nova_compute[256729]: 2025-11-29 08:13:11.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:11 compute-0 nova_compute[256729]: 2025-11-29 08:13:11.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 2.8 KiB/s wr, 54 op/s
Nov 29 08:13:12 compute-0 nova_compute[256729]: 2025-11-29 08:13:12.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:12 compute-0 nova_compute[256729]: 2025-11-29 08:13:12.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:13:12 compute-0 nova_compute[256729]: 2025-11-29 08:13:12.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:13:12 compute-0 nova_compute[256729]: 2025-11-29 08:13:12.164 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:13:12 compute-0 nova_compute[256729]: 2025-11-29 08:13:12.164 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:12 compute-0 nova_compute[256729]: 2025-11-29 08:13:12.165 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:12 compute-0 nova_compute[256729]: 2025-11-29 08:13:12.165 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
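_reclaim_queued_deletes is a no-op here because reclaim_instance_interval sits at its default of 0; soft-deleted instances are only purged when the interval is positive. A sketch of how such an oslo.config option is consulted (the option name is nova's; the registration below is illustrative, not nova's own code):

    # Sketch, not nova source: model the CONF check seen in the log line above.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    def maybe_reclaim():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # a positive interval would purge soft-deleted instances older than it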
Nov 29 08:13:12 compute-0 ceph-mon[75050]: pgmap v2314: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 2.8 KiB/s wr, 54 op/s
Nov 29 08:13:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Nov 29 08:13:13 compute-0 nova_compute[256729]: 2025-11-29 08:13:13.325 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Nov 29 08:13:13 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Nov 29 08:13:13 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.176 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.176 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.177 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.177 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.177 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Nov 29 08:13:14 compute-0 ceph-mon[75050]: pgmap v2315: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Nov 29 08:13:14 compute-0 ceph-mon[75050]: osdmap e492: 3 total, 3 up, 3 in
Nov 29 08:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Nov 29 08:13:14 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.627 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:14 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:14 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142958293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.699 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
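The resource tracker shells out to ceph df (0.522 s here) rather than holding a rados connection open. The same call path, sketched with the oslo.concurrency helper the log itself names:

    # Sketch of the logged subprocess call; assumes the openstack keyring works.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    pools = {p["name"]: p["stats"] for p in json.loads(out)["pools"]}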
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.939 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.941 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4332MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.942 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:14 compute-0 nova_compute[256729]: 2025-11-29 08:13:14.942 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.006 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.007 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.023 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing inventories for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.045 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating ProviderTree inventory for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.045 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Updating inventory in ProviderTree for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
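Placement capacity follows (total - reserved) * allocation_ratio per resource class, so the inventory above advertises 32 VCPU, 7168 MB of RAM, and 52.2 GB of disk. A worked check:

    # Sketch: effective capacity implied by the ProviderTree inventory above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2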
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.073 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing aggregate associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.148 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Refreshing trait associations for resource provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f, traits: COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NODE,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE4A,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.165 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 3.4 KiB/s wr, 76 op/s
Nov 29 08:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629437911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629437911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:15 compute-0 ceph-mon[75050]: osdmap e493: 3 total, 3 up, 3 in
Nov 29 08:13:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2142958293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3629437911' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:15 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3629437911' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894585429283063 of space, bias 1.0, pg target 0.8683756287849189 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:13:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
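Each "pg target" above is usage_ratio x bias x (target PGs per OSD x OSD count); with the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs that factor is 300, which reproduces the logged figures. The real module then quantizes to a power of two and only acts on a large enough change, which is why near-zero targets stay at 32. A numeric check:

    # Sketch of the autoscaler arithmetic visible above (the power-of-two
    # quantization and change threshold the real pg_autoscaler applies are
    # omitted here).
    TARGET_PG_PER_OSD = 100   # mon_target_pg_per_osd default, assumed here
    NUM_OSDS = 3

    def pg_target(usage_ratio, bias=1.0):
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(pg_target(0.002894585429283063))        # ~0.868   ('volumes')
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061 ('cephfs.cephfs.meta')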
Nov 29 08:13:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:13:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126428554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.637 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.644 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.665 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.667 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:13:15 compute-0 nova_compute[256729]: 2025-11-29 08:13:15.667 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:13:16 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.5 total, 600.0 interval
                                           Cumulative writes: 23K writes, 95K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 23K writes, 8328 syncs, 2.86 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7321 writes, 32K keys, 7321 commit groups, 1.0 writes per commit group, ingest: 25.86 MB, 0.04 MB/s
                                           Interval WAL: 7321 writes, 3004 syncs, 2.44 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:13:16 compute-0 ceph-mon[75050]: pgmap v2318: 305 pgs: 305 active+clean; 271 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 3.4 KiB/s wr, 76 op/s
Nov 29 08:13:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1126428554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:13:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.9 KiB/s wr, 73 op/s
Nov 29 08:13:18 compute-0 nova_compute[256729]: 2025-11-29 08:13:18.325 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Nov 29 08:13:18 compute-0 ceph-mon[75050]: pgmap v2319: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.9 KiB/s wr, 73 op/s
Nov 29 08:13:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Nov 29 08:13:18 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Nov 29 08:13:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.3 KiB/s wr, 68 op/s
Nov 29 08:13:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Nov 29 08:13:19 compute-0 ceph-mon[75050]: osdmap e494: 3 total, 3 up, 3 in
Nov 29 08:13:19 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Nov 29 08:13:19 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Nov 29 08:13:19 compute-0 nova_compute[256729]: 2025-11-29 08:13:19.631 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:19 compute-0 podman[305919]: 2025-11-29 08:13:19.697701618 +0000 UTC m=+0.059812682 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 08:13:19 compute-0 podman[305920]: 2025-11-29 08:13:19.715926301 +0000 UTC m=+0.078650632 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:13:19 compute-0 podman[305918]: 2025-11-29 08:13:19.747749189 +0000 UTC m=+0.110862800 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
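The three health_status=healthy events are podman's periodic healthchecks for multipathd, ovn_metadata_agent, and ovn_controller, each running the mounted /openstack/healthcheck test from its config_data. The same status can be read on demand; note the Go-template field path may be .State.Healthcheck.Status on older podman releases:

    # Sketch: poll the current container health that the events above report.
    import subprocess

    def health(name):
        return subprocess.check_output(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name], text=True).strip()

    for c in ("multipathd", "ovn_metadata_agent", "ovn_controller"):
        print(c, health(c))   # expected: healthy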
Nov 29 08:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Nov 29 08:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Nov 29 08:13:20 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Nov 29 08:13:20 compute-0 ceph-mon[75050]: pgmap v2321: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.3 KiB/s wr, 68 op/s
Nov 29 08:13:20 compute-0 ceph-mon[75050]: osdmap e495: 3 total, 3 up, 3 in
Nov 29 08:13:20 compute-0 ceph-mon[75050]: osdmap e496: 3 total, 3 up, 3 in
Nov 29 08:13:20 compute-0 nova_compute[256729]: 2025-11-29 08:13:20.667 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1926099450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:20 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1926099450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.5 KiB/s wr, 44 op/s
Nov 29 08:13:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1926099450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:21 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1926099450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:21 compute-0 ceph-mgr[75345]: [devicehealth INFO root] Check health
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.602803) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404001602881, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 510, "num_deletes": 254, "total_data_size": 405807, "memory_usage": 416616, "flush_reason": "Manual Compaction"}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404001608751, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 400545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41584, "largest_seqno": 42093, "table_properties": {"data_size": 397621, "index_size": 896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7217, "raw_average_key_size": 19, "raw_value_size": 391744, "raw_average_value_size": 1070, "num_data_blocks": 39, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403985, "oldest_key_time": 1764403985, "file_creation_time": 1764404001, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 5977 microseconds, and 2634 cpu microseconds.
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.608793) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 400545 bytes OK
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.608812) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.610299) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.610315) EVENT_LOG_v1 {"time_micros": 1764404001610310, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.610334) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 402775, prev total WAL file size 402775, number of live WAL files 2.
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.610882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(391KB)], [86(11MB)]
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404001610982, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12567709, "oldest_snapshot_seqno": -1}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7365 keys, 10900393 bytes, temperature: kUnknown
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404001699031, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10900393, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10846248, "index_size": 34641, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 188720, "raw_average_key_size": 25, "raw_value_size": 10709031, "raw_average_value_size": 1454, "num_data_blocks": 1366, "num_entries": 7365, "num_filter_entries": 7365, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764404001, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.699372) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10900393 bytes
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.701549) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.6 rd, 123.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 11.6 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(58.6) write-amplify(27.2) OK, records in: 7883, records dropped: 518 output_compression: NoCompression
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.701580) EVENT_LOG_v1 {"time_micros": 1764404001701566, "job": 50, "event": "compaction_finished", "compaction_time_micros": 88137, "compaction_time_cpu_micros": 47574, "output_level": 6, "num_output_files": 1, "total_output_size": 10900393, "num_input_records": 7883, "num_output_records": 7365, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404001701860, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404001705779, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.610760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.705825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.705831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.705834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.705837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:13:21 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:13:21.705840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
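The JOB 50 summary's amplification figures follow directly from the event numbers above: a 400545-byte L0 flush (table #88) was merged with the 11 MiB L6 file into a 10900393-byte output. A check of the arithmetic:

    # Sketch: reproduce JOB 50's write-amplify / read-write-amplify from the
    # EVENT_LOG_v1 numbers above.
    l0_in = 400545          # table #88, the freshly flushed L0 input
    total_in = 12567709     # input_data_size (L0 + L6 inputs)
    total_out = 10900393    # total_output_size

    print(round(total_out / l0_in, 1))               # 27.2  write-amplify
    print(round((total_in + total_out) / l0_in, 1))  # 58.6  read-write-amplify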
Nov 29 08:13:22 compute-0 ceph-mon[75050]: pgmap v2324: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.5 KiB/s wr, 44 op/s
Nov 29 08:13:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 87 op/s
Nov 29 08:13:23 compute-0 nova_compute[256729]: 2025-11-29 08:13:23.326 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e496 do_prune osdmap full prune enabled
Nov 29 08:13:23 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e497 e497: 3 total, 3 up, 3 in
Nov 29 08:13:23 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e497: 3 total, 3 up, 3 in
Nov 29 08:13:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e497 do_prune osdmap full prune enabled
Nov 29 08:13:24 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e498 e498: 3 total, 3 up, 3 in
Nov 29 08:13:24 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e498: 3 total, 3 up, 3 in
Nov 29 08:13:24 compute-0 ceph-mon[75050]: pgmap v2325: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 87 op/s
Nov 29 08:13:24 compute-0 ceph-mon[75050]: osdmap e497: 3 total, 3 up, 3 in
Nov 29 08:13:24 compute-0 nova_compute[256729]: 2025-11-29 08:13:24.634 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Nov 29 08:13:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e498 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:25 compute-0 ceph-mon[75050]: osdmap e498: 3 total, 3 up, 3 in
Nov 29 08:13:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3469573283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:25 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3469573283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:26 compute-0 ceph-mon[75050]: pgmap v2328: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Nov 29 08:13:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3469573283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:26 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3469573283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Nov 29 08:13:28 compute-0 nova_compute[256729]: 2025-11-29 08:13:28.328 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e498 do_prune osdmap full prune enabled
Nov 29 08:13:28 compute-0 ceph-mon[75050]: pgmap v2329: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 4.1 KiB/s wr, 99 op/s
Nov 29 08:13:28 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e499 e499: 3 total, 3 up, 3 in
Nov 29 08:13:28 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e499: 3 total, 3 up, 3 in
Nov 29 08:13:28 compute-0 sudo[305978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:28 compute-0 sudo[305978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:28 compute-0 sudo[305978]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:29 compute-0 sudo[306003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:29 compute-0 sudo[306003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:29 compute-0 sudo[306003]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:29 compute-0 sudo[306028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:29 compute-0 sudo[306028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:29 compute-0 sudo[306028]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:29 compute-0 sudo[306053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:13:29 compute-0 sudo[306053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.2 KiB/s wr, 66 op/s
Nov 29 08:13:29 compute-0 nova_compute[256729]: 2025-11-29 08:13:29.637 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e499 do_prune osdmap full prune enabled
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e500 e500: 3 total, 3 up, 3 in
Nov 29 08:13:29 compute-0 ceph-mon[75050]: osdmap e499: 3 total, 3 up, 3 in
Nov 29 08:13:29 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e500: 3 total, 3 up, 3 in
Nov 29 08:13:29 compute-0 sudo[306053]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:13:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:13:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:13:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:13:29 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 72749c80-30a0-4ca2-899b-6a8cc9dac9c6 does not exist
Nov 29 08:13:29 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 11245e22-8936-4290-a2b2-0794ac28a9ac does not exist
Nov 29 08:13:29 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7f323b1f-b822-4b0e-82e6-a84d73ea2bc0 does not exist
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:13:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:13:29 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:13:29 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:13:29 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
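
The mgr.compute-0.kzdpag entity issuing these commands is the cephadm orchestrator module running a refresh pass: it regenerates a minimal client configuration and fetches the client.admin and client.bootstrap-osd keyrings that it distributes to managed hosts. The same commands can be replayed by hand; a sketch via subprocess, assuming admin credentials on the host (an assumption — this log only shows the mgr doing it):

    import subprocess

    # The commands the cephadm mgr module dispatches during its refresh,
    # replayed from a host with an admin keyring (assumed here).
    for cmd in (["ceph", "config", "generate-minimal-conf"],
                ["ceph", "auth", "get", "client.admin"],
                ["ceph", "auth", "get", "client.bootstrap-osd"]):
        subprocess.run(cmd, check=True)
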
Nov 29 08:13:30 compute-0 sudo[306109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:30 compute-0 sudo[306109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:30 compute-0 sudo[306109]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:30 compute-0 sudo[306134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:30 compute-0 sudo[306134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:30 compute-0 sudo[306134]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:30 compute-0 sudo[306159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:30 compute-0 sudo[306159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:30 compute-0 sudo[306159]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:30 compute-0 sudo[306184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:13:30 compute-0 sudo[306184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e500 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e500 do_prune osdmap full prune enabled
Nov 29 08:13:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e501 e501: 3 total, 3 up, 3 in
Nov 29 08:13:30 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e501: 3 total, 3 up, 3 in
Nov 29 08:13:30 compute-0 ceph-mon[75050]: pgmap v2331: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.2 KiB/s wr, 66 op/s
Nov 29 08:13:30 compute-0 ceph-mon[75050]: osdmap e500: 3 total, 3 up, 3 in
Nov 29 08:13:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:13:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:13:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:13:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:13:30 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:13:30 compute-0 ceph-mon[75050]: osdmap e501: 3 total, 3 up, 3 in
Nov 29 08:13:30 compute-0 podman[306249]: 2025-11-29 08:13:30.800656031 +0000 UTC m=+0.080718699 container create 7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 08:13:30 compute-0 systemd[1]: Started libpod-conmon-7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc.scope.
Nov 29 08:13:30 compute-0 podman[306249]: 2025-11-29 08:13:30.766575771 +0000 UTC m=+0.046638489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:30 compute-0 podman[306249]: 2025-11-29 08:13:30.905112564 +0000 UTC m=+0.185175232 container init 7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:13:30 compute-0 podman[306249]: 2025-11-29 08:13:30.915671275 +0000 UTC m=+0.195733903 container start 7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:13:30 compute-0 podman[306249]: 2025-11-29 08:13:30.919548912 +0000 UTC m=+0.199611630 container attach 7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:13:30 compute-0 gifted_cannon[306265]: 167 167
Nov 29 08:13:30 compute-0 systemd[1]: libpod-7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc.scope: Deactivated successfully.
Nov 29 08:13:30 compute-0 podman[306249]: 2025-11-29 08:13:30.922875105 +0000 UTC m=+0.202937753 container died 7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 08:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c5084e06a70dd0722febf4e5b92d7979d244a287751fb6e951166ee7ab8399-merged.mount: Deactivated successfully.
Nov 29 08:13:30 compute-0 podman[306249]: 2025-11-29 08:13:30.962479727 +0000 UTC m=+0.242542375 container remove 7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_cannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 08:13:30 compute-0 systemd[1]: libpod-conmon-7cd845dcf297d8cc47049201fe3f3ca948ccbde2aa53bf5f972d78b83ca92acc.scope: Deactivated successfully.
Nov 29 08:13:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010385689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:31 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:31 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010385689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:31 compute-0 podman[306288]: 2025-11-29 08:13:31.215232102 +0000 UTC m=+0.080529073 container create 4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:13:31 compute-0 systemd[1]: Started libpod-conmon-4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c.scope.
Nov 29 08:13:31 compute-0 podman[306288]: 2025-11-29 08:13:31.186165381 +0000 UTC m=+0.051462382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909db26a80226fd02fe77a09c1c2e2d284174aacb9487fccd1f571e10298e341/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909db26a80226fd02fe77a09c1c2e2d284174aacb9487fccd1f571e10298e341/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909db26a80226fd02fe77a09c1c2e2d284174aacb9487fccd1f571e10298e341/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909db26a80226fd02fe77a09c1c2e2d284174aacb9487fccd1f571e10298e341/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909db26a80226fd02fe77a09c1c2e2d284174aacb9487fccd1f571e10298e341/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.0 KiB/s wr, 62 op/s
Nov 29 08:13:31 compute-0 podman[306288]: 2025-11-29 08:13:31.338578106 +0000 UTC m=+0.203875137 container init 4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:13:31 compute-0 podman[306288]: 2025-11-29 08:13:31.352532552 +0000 UTC m=+0.217829523 container start 4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:13:31 compute-0 podman[306288]: 2025-11-29 08:13:31.357383526 +0000 UTC m=+0.222680577 container attach 4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:13:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4010385689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:31 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4010385689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:32 compute-0 hungry_mendel[306304]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:13:32 compute-0 hungry_mendel[306304]: --> relative data size: 1.0
Nov 29 08:13:32 compute-0 hungry_mendel[306304]: --> All data devices are unavailable
Nov 29 08:13:32 compute-0 systemd[1]: libpod-4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c.scope: Deactivated successfully.
Nov 29 08:13:32 compute-0 systemd[1]: libpod-4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c.scope: Consumed 1.064s CPU time.
Nov 29 08:13:32 compute-0 podman[306288]: 2025-11-29 08:13:32.454661147 +0000 UTC m=+1.319958068 container died 4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 08:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-909db26a80226fd02fe77a09c1c2e2d284174aacb9487fccd1f571e10298e341-merged.mount: Deactivated successfully.
Nov 29 08:13:32 compute-0 podman[306288]: 2025-11-29 08:13:32.51164305 +0000 UTC m=+1.376939981 container remove 4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 08:13:32 compute-0 systemd[1]: libpod-conmon-4d7580455c12136ced1d36e4972fbd1d045400f040c39b851d2293e9b2fddd7c.scope: Deactivated successfully.
Nov 29 08:13:32 compute-0 sudo[306184]: pam_unix(sudo:session): session closed for user root
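
The hungry_mendel container run just closed was cephadm wrapping ceph-volume lvm batch over the three pre-created logical volumes named in the sudo COMMAND above. The run exits within about a second after printing "All data devices are unavailable" — which here most plausibly means the LVs are already consumed by existing OSDs rather than that anything failed; the lvm list output further down shows all three LVs carrying OSD tags. A dry-run sketch of the same probe, assuming root and a cephadm binary on PATH (the log invokes a versioned copy under /var/lib/ceph instead), with --report making the batch a read-only report rather than preparing OSDs:

    import subprocess

    # Ask ceph-volume what it would do with these LVs; --report turns
    # the batch into a dry run instead of actually creating OSDs.
    fsid = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid, "--",
         "lvm", "batch", "--no-auto", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True,
    )
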
Nov 29 08:13:32 compute-0 sudo[306347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:32 compute-0 sudo[306347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:32 compute-0 sudo[306347]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:32 compute-0 sudo[306372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:32 compute-0 sudo[306372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:32 compute-0 ceph-mon[75050]: pgmap v2334: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.0 KiB/s wr, 62 op/s
Nov 29 08:13:32 compute-0 sudo[306372]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:32 compute-0 sudo[306397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:32 compute-0 sudo[306397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:32 compute-0 sudo[306397]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:32 compute-0 sudo[306422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:13:32 compute-0 sudo[306422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:13:32.948 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:13:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:13:32.950 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:13:32 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:13:32.951 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:13:32 compute-0 nova_compute[256729]: 2025-11-29 08:13:32.952 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:33 compute-0 podman[306488]: 2025-11-29 08:13:33.208595214 +0000 UTC m=+0.055111842 container create a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 08:13:33 compute-0 systemd[1]: Started libpod-conmon-a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354.scope.
Nov 29 08:13:33 compute-0 podman[306488]: 2025-11-29 08:13:33.1812661 +0000 UTC m=+0.027782798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:33 compute-0 podman[306488]: 2025-11-29 08:13:33.304799209 +0000 UTC m=+0.151315917 container init a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 08:13:33 compute-0 podman[306488]: 2025-11-29 08:13:33.314088025 +0000 UTC m=+0.160604683 container start a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 08:13:33 compute-0 podman[306488]: 2025-11-29 08:13:33.318522478 +0000 UTC m=+0.165039176 container attach a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 08:13:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.8 KiB/s wr, 55 op/s
Nov 29 08:13:33 compute-0 tender_merkle[306504]: 167 167
Nov 29 08:13:33 compute-0 systemd[1]: libpod-a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354.scope: Deactivated successfully.
Nov 29 08:13:33 compute-0 podman[306488]: 2025-11-29 08:13:33.323586878 +0000 UTC m=+0.170103546 container died a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_merkle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 08:13:33 compute-0 nova_compute[256729]: 2025-11-29 08:13:33.330 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf1e1e2930436c9cc6ae5b9566b16566f9b70575f1536f7abcd841156d5a4e5d-merged.mount: Deactivated successfully.
Nov 29 08:13:33 compute-0 podman[306488]: 2025-11-29 08:13:33.368176528 +0000 UTC m=+0.214693166 container remove a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_merkle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:13:33 compute-0 systemd[1]: libpod-conmon-a7be1b927be25f5d50f079366db74e55f392e0f739e5a6d1e04195c3d083c354.scope: Deactivated successfully.
Nov 29 08:13:33 compute-0 podman[306526]: 2025-11-29 08:13:33.552546016 +0000 UTC m=+0.042413021 container create e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 08:13:33 compute-0 systemd[1]: Started libpod-conmon-e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15.scope.
Nov 29 08:13:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86486fec71f0778bdcc178abcf914e7864291cabc87ae7f971f0961e345ed2d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86486fec71f0778bdcc178abcf914e7864291cabc87ae7f971f0961e345ed2d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86486fec71f0778bdcc178abcf914e7864291cabc87ae7f971f0961e345ed2d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86486fec71f0778bdcc178abcf914e7864291cabc87ae7f971f0961e345ed2d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:33 compute-0 podman[306526]: 2025-11-29 08:13:33.538163499 +0000 UTC m=+0.028030534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:33 compute-0 podman[306526]: 2025-11-29 08:13:33.646304194 +0000 UTC m=+0.136171249 container init e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:13:33 compute-0 podman[306526]: 2025-11-29 08:13:33.653524313 +0000 UTC m=+0.143391358 container start e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:33 compute-0 podman[306526]: 2025-11-29 08:13:33.657478832 +0000 UTC m=+0.147345877 container attach e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 08:13:34 compute-0 eager_dewdney[306543]: {
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:     "0": [
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:         {
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "devices": [
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "/dev/loop3"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             ],
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_name": "ceph_lv0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_size": "21470642176",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "name": "ceph_lv0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "tags": {
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cluster_name": "ceph",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.crush_device_class": "",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.encrypted": "0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osd_id": "0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.type": "block",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.vdo": "0"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             },
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "type": "block",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "vg_name": "ceph_vg0"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:         }
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:     ],
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:     "1": [
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:         {
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "devices": [
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "/dev/loop4"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             ],
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_name": "ceph_lv1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_size": "21470642176",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "name": "ceph_lv1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "tags": {
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cluster_name": "ceph",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.crush_device_class": "",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.encrypted": "0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osd_id": "1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.type": "block",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.vdo": "0"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             },
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "type": "block",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "vg_name": "ceph_vg1"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:         }
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:     ],
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:     "2": [
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:         {
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "devices": [
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "/dev/loop5"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             ],
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_name": "ceph_lv2",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_size": "21470642176",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "name": "ceph_lv2",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "tags": {
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.cluster_name": "ceph",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.crush_device_class": "",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.encrypted": "0",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osd_id": "2",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.type": "block",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:                 "ceph.vdo": "0"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             },
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "type": "block",
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:             "vg_name": "ceph_vg2"
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:         }
Nov 29 08:13:34 compute-0 eager_dewdney[306543]:     ]
Nov 29 08:13:34 compute-0 eager_dewdney[306543]: }
Nov 29 08:13:34 compute-0 systemd[1]: libpod-e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15.scope: Deactivated successfully.
Nov 29 08:13:34 compute-0 podman[306526]: 2025-11-29 08:13:34.447387652 +0000 UTC m=+0.937254657 container died e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:13:34 compute-0 nova_compute[256729]: 2025-11-29 08:13:34.640 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-86486fec71f0778bdcc178abcf914e7864291cabc87ae7f971f0961e345ed2d0-merged.mount: Deactivated successfully.
Nov 29 08:13:34 compute-0 ceph-mon[75050]: pgmap v2335: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.8 KiB/s wr, 55 op/s
Nov 29 08:13:34 compute-0 podman[306526]: 2025-11-29 08:13:34.729754784 +0000 UTC m=+1.219621829 container remove e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_dewdney, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:13:34 compute-0 systemd[1]: libpod-conmon-e0da5b421276f4f02130dd12aedb5070ca6d0a0c6cb45c89b3ec74ce1819de15.scope: Deactivated successfully.
Nov 29 08:13:34 compute-0 sudo[306422]: pam_unix(sudo:session): session closed for user root
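
The JSON emitted by the eager_dewdney container is ceph-volume lvm list output: one block-type LV per OSD id 0-2, each backed by a single loop device and tagged with this cluster's fsid. A short sketch that re-runs the listing and reduces it to an osd-id-to-device map, under the same assumption as above that cephadm is on PATH and that the JSON arrives on stdout as in this log:

    import json
    import subprocess

    fsid = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", fsid, "--",
         "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Top-level keys are OSD ids; each value lists the LVs for that OSD.
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
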
Nov 29 08:13:34 compute-0 sudo[306564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:34 compute-0 sudo[306564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:34 compute-0 sudo[306564]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:34 compute-0 sudo[306589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:13:34 compute-0 sudo[306589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:34 compute-0 sudo[306589]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:35 compute-0 sudo[306614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:35 compute-0 sudo[306614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:35 compute-0 sudo[306614]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:35 compute-0 sudo[306639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:13:35 compute-0 sudo[306639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 56 op/s
Nov 29 08:13:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e501 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:35 compute-0 podman[306706]: 2025-11-29 08:13:35.494842539 +0000 UTC m=+0.050297839 container create 9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:13:35 compute-0 systemd[1]: Started libpod-conmon-9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7.scope.
Nov 29 08:13:35 compute-0 podman[306706]: 2025-11-29 08:13:35.474375143 +0000 UTC m=+0.029830473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:35 compute-0 podman[306706]: 2025-11-29 08:13:35.59888898 +0000 UTC m=+0.154344370 container init 9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:35 compute-0 podman[306706]: 2025-11-29 08:13:35.607350254 +0000 UTC m=+0.162805584 container start 9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:13:35 compute-0 podman[306706]: 2025-11-29 08:13:35.610928022 +0000 UTC m=+0.166383352 container attach 9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:13:35 compute-0 epic_johnson[306722]: 167 167
Nov 29 08:13:35 compute-0 systemd[1]: libpod-9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7.scope: Deactivated successfully.
Nov 29 08:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:13:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:13:35 compute-0 podman[306727]: 2025-11-29 08:13:35.674196449 +0000 UTC m=+0.038968387 container died 9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dba27e7ead83b2b5aa31e2c203562c782c3974939787d15237bd52359e8b34b-merged.mount: Deactivated successfully.
Nov 29 08:13:35 compute-0 podman[306727]: 2025-11-29 08:13:35.719212601 +0000 UTC m=+0.083984499 container remove 9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 08:13:35 compute-0 systemd[1]: libpod-conmon-9c4861d690465ee9a047e12e8ee9b099cb8f8c932befae46d60ed00870132cd7.scope: Deactivated successfully.
Nov 29 08:13:35 compute-0 podman[306750]: 2025-11-29 08:13:35.952335324 +0000 UTC m=+0.045719702 container create 5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 08:13:35 compute-0 systemd[1]: Started libpod-conmon-5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b.scope.
Nov 29 08:13:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c8e1637b3cbf00da6f8b203387710b440c78ba21884bcae8265151294d6ff0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:36 compute-0 podman[306750]: 2025-11-29 08:13:35.932274921 +0000 UTC m=+0.025659329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c8e1637b3cbf00da6f8b203387710b440c78ba21884bcae8265151294d6ff0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c8e1637b3cbf00da6f8b203387710b440c78ba21884bcae8265151294d6ff0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c8e1637b3cbf00da6f8b203387710b440c78ba21884bcae8265151294d6ff0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:13:36 compute-0 podman[306750]: 2025-11-29 08:13:36.042213914 +0000 UTC m=+0.135598342 container init 5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:13:36 compute-0 podman[306750]: 2025-11-29 08:13:36.048455027 +0000 UTC m=+0.141839435 container start 5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hypatia, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 08:13:36 compute-0 podman[306750]: 2025-11-29 08:13:36.05219067 +0000 UTC m=+0.145575098 container attach 5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:13:36 compute-0 ceph-mon[75050]: pgmap v2336: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 56 op/s
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]: {
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "osd_id": 2,
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "type": "bluestore"
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:     },
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "osd_id": 1,
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "type": "bluestore"
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:     },
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "osd_id": 0,
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:         "type": "bluestore"
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]:     }
Nov 29 08:13:37 compute-0 gallant_hypatia[306766]: }
Nov 29 08:13:37 compute-0 systemd[1]: libpod-5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b.scope: Deactivated successfully.
Nov 29 08:13:37 compute-0 conmon[306766]: conmon 5bdae56bedd4a5a9ba4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b.scope/container/memory.events
Nov 29 08:13:37 compute-0 podman[306750]: 2025-11-29 08:13:37.03850027 +0000 UTC m=+1.131884668 container died 5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 08:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2c8e1637b3cbf00da6f8b203387710b440c78ba21884bcae8265151294d6ff0-merged.mount: Deactivated successfully.
Nov 29 08:13:37 compute-0 podman[306750]: 2025-11-29 08:13:37.084845538 +0000 UTC m=+1.178229926 container remove 5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 08:13:37 compute-0 systemd[1]: libpod-conmon-5bdae56bedd4a5a9ba4cf64eb6e5207c2c89a19f6c7f946bf35fcef9fc66b95b.scope: Deactivated successfully.
Nov 29 08:13:37 compute-0 sudo[306639]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:13:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:13:37 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:13:37 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:13:37 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7ed91c84-5991-442e-aa77-bfe9cff37d9c does not exist
Nov 29 08:13:37 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 6fed2bee-104b-4095-8d6b-079f0aa41d5c does not exist
Nov 29 08:13:37 compute-0 sudo[306811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:13:37 compute-0 sudo[306811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:37 compute-0 sudo[306811]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:37 compute-0 sudo[306836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:13:37 compute-0 sudo[306836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:13:37 compute-0 sudo[306836]: pam_unix(sudo:session): session closed for user root
Nov 29 08:13:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 43 op/s
Nov 29 08:13:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:13:38 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:13:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e501 do_prune osdmap full prune enabled
Nov 29 08:13:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e502 e502: 3 total, 3 up, 3 in
Nov 29 08:13:38 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e502: 3 total, 3 up, 3 in
Nov 29 08:13:38 compute-0 nova_compute[256729]: 2025-11-29 08:13:38.332 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e502 do_prune osdmap full prune enabled
Nov 29 08:13:39 compute-0 ceph-mon[75050]: pgmap v2337: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 43 op/s
Nov 29 08:13:39 compute-0 ceph-mon[75050]: osdmap e502: 3 total, 3 up, 3 in
Nov 29 08:13:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e503 e503: 3 total, 3 up, 3 in
Nov 29 08:13:39 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e503: 3 total, 3 up, 3 in
Nov 29 08:13:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 44 op/s
Nov 29 08:13:39 compute-0 nova_compute[256729]: 2025-11-29 08:13:39.645 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:40 compute-0 ceph-mon[75050]: osdmap e503: 3 total, 3 up, 3 in
Nov 29 08:13:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e503 do_prune osdmap full prune enabled
Nov 29 08:13:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e504 e504: 3 total, 3 up, 3 in
Nov 29 08:13:40 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e504: 3 total, 3 up, 3 in
Nov 29 08:13:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281783904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281783904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:41 compute-0 ceph-mon[75050]: pgmap v2340: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 KiB/s wr, 44 op/s
Nov 29 08:13:41 compute-0 ceph-mon[75050]: osdmap e504: 3 total, 3 up, 3 in
Nov 29 08:13:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4281783904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/4281783904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Nov 29 08:13:42 compute-0 ceph-mon[75050]: pgmap v2342: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Nov 29 08:13:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e504 do_prune osdmap full prune enabled
Nov 29 08:13:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e505 e505: 3 total, 3 up, 3 in
Nov 29 08:13:43 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e505: 3 total, 3 up, 3 in
Nov 29 08:13:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.3 KiB/s wr, 67 op/s
Nov 29 08:13:43 compute-0 nova_compute[256729]: 2025-11-29 08:13:43.333 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e505 do_prune osdmap full prune enabled
Nov 29 08:13:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e506 e506: 3 total, 3 up, 3 in
Nov 29 08:13:44 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e506: 3 total, 3 up, 3 in
Nov 29 08:13:44 compute-0 ceph-mon[75050]: osdmap e505: 3 total, 3 up, 3 in
Nov 29 08:13:44 compute-0 ceph-mon[75050]: pgmap v2344: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.3 KiB/s wr, 67 op/s
Nov 29 08:13:44 compute-0 nova_compute[256729]: 2025-11-29 08:13:44.648 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:45 compute-0 ceph-mon[75050]: osdmap e506: 3 total, 3 up, 3 in
Nov 29 08:13:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.0 KiB/s wr, 60 op/s
Nov 29 08:13:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e506 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/184451652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/184451652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:46 compute-0 ceph-mon[75050]: pgmap v2346: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.0 KiB/s wr, 60 op/s
Nov 29 08:13:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/184451652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/184451652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 4.3 KiB/s wr, 99 op/s
Nov 29 08:13:48 compute-0 nova_compute[256729]: 2025-11-29 08:13:48.335 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e506 do_prune osdmap full prune enabled
Nov 29 08:13:48 compute-0 ceph-mon[75050]: pgmap v2347: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 4.3 KiB/s wr, 99 op/s
Nov 29 08:13:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e507 e507: 3 total, 3 up, 3 in
Nov 29 08:13:48 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e507: 3 total, 3 up, 3 in
Nov 29 08:13:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.3 KiB/s wr, 72 op/s
Nov 29 08:13:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e507 do_prune osdmap full prune enabled
Nov 29 08:13:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e508 e508: 3 total, 3 up, 3 in
Nov 29 08:13:49 compute-0 ceph-mon[75050]: osdmap e507: 3 total, 3 up, 3 in
Nov 29 08:13:49 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e508: 3 total, 3 up, 3 in
Nov 29 08:13:49 compute-0 nova_compute[256729]: 2025-11-29 08:13:49.651 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e508 do_prune osdmap full prune enabled
Nov 29 08:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e509 e509: 3 total, 3 up, 3 in
Nov 29 08:13:50 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e509: 3 total, 3 up, 3 in
Nov 29 08:13:50 compute-0 ceph-mon[75050]: pgmap v2349: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.3 KiB/s wr, 72 op/s
Nov 29 08:13:50 compute-0 ceph-mon[75050]: osdmap e508: 3 total, 3 up, 3 in
Nov 29 08:13:50 compute-0 ceph-mon[75050]: osdmap e509: 3 total, 3 up, 3 in
Nov 29 08:13:50 compute-0 podman[306863]: 2025-11-29 08:13:50.728168375 +0000 UTC m=+0.060998398 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 08:13:50 compute-0 podman[306862]: 2025-11-29 08:13:50.741115947 +0000 UTC m=+0.070520657 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 08:13:50 compute-0 podman[306861]: 2025-11-29 08:13:50.767873706 +0000 UTC m=+0.109146139 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 08:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091399265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091399265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Nov 29 08:13:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3091399265' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3091399265' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:52 compute-0 sshd-session[306926]: Connection closed by 80.9.196.204 port 57774
Nov 29 08:13:52 compute-0 ceph-mon[75050]: pgmap v2352: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.0 KiB/s wr, 70 op/s
Nov 29 08:13:52 compute-0 sshd-session[306927]: Invalid user a from 80.9.196.204 port 57790
Nov 29 08:13:52 compute-0 sshd-session[306927]: Connection closed by invalid user a 80.9.196.204 port 57790 [preauth]
Nov 29 08:13:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 26 op/s
Nov 29 08:13:53 compute-0 nova_compute[256729]: 2025-11-29 08:13:53.338 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e509 do_prune osdmap full prune enabled
Nov 29 08:13:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e510 e510: 3 total, 3 up, 3 in
Nov 29 08:13:53 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e510: 3 total, 3 up, 3 in
Nov 29 08:13:54 compute-0 ceph-mon[75050]: pgmap v2353: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 KiB/s wr, 26 op/s
Nov 29 08:13:54 compute-0 ceph-mon[75050]: osdmap e510: 3 total, 3 up, 3 in
Nov 29 08:13:54 compute-0 nova_compute[256729]: 2025-11-29 08:13:54.690 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 60 op/s
Nov 29 08:13:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e510 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:13:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e510 do_prune osdmap full prune enabled
Nov 29 08:13:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e511 e511: 3 total, 3 up, 3 in
Nov 29 08:13:55 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e511: 3 total, 3 up, 3 in
Nov 29 08:13:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:13:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/876116604' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:13:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/876116604' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:56 compute-0 ceph-mon[75050]: pgmap v2355: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 KiB/s wr, 60 op/s
Nov 29 08:13:56 compute-0 ceph-mon[75050]: osdmap e511: 3 total, 3 up, 3 in
Nov 29 08:13:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/876116604' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:13:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/876116604' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:13:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 61 op/s
Nov 29 08:13:58 compute-0 nova_compute[256729]: 2025-11-29 08:13:58.370 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:58 compute-0 ceph-mon[75050]: pgmap v2357: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 61 op/s
Nov 29 08:13:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.7 KiB/s wr, 82 op/s
Nov 29 08:13:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e511 do_prune osdmap full prune enabled
Nov 29 08:13:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e512 e512: 3 total, 3 up, 3 in
Nov 29 08:13:59 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e512: 3 total, 3 up, 3 in
Nov 29 08:13:59 compute-0 nova_compute[256729]: 2025-11-29 08:13:59.693 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:13:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:13:59.793 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:13:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:13:59.794 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:13:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:13:59.794 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e512 do_prune osdmap full prune enabled
Nov 29 08:14:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e513 e513: 3 total, 3 up, 3 in
Nov 29 08:14:00 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e513: 3 total, 3 up, 3 in
Nov 29 08:14:00 compute-0 ceph-mon[75050]: pgmap v2358: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.7 KiB/s wr, 82 op/s
Nov 29 08:14:00 compute-0 ceph-mon[75050]: osdmap e512: 3 total, 3 up, 3 in
Nov 29 08:14:00 compute-0 ceph-mon[75050]: osdmap e513: 3 total, 3 up, 3 in
Nov 29 08:14:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Nov 29 08:14:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e513 do_prune osdmap full prune enabled
Nov 29 08:14:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e514 e514: 3 total, 3 up, 3 in
Nov 29 08:14:01 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e514: 3 total, 3 up, 3 in
Nov 29 08:14:02 compute-0 ceph-mon[75050]: pgmap v2361: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.3 KiB/s wr, 49 op/s
Nov 29 08:14:02 compute-0 ceph-mon[75050]: osdmap e514: 3 total, 3 up, 3 in
Nov 29 08:14:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1703329740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1703329740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Nov 29 08:14:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1703329740' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:03 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1703329740' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:03 compute-0 nova_compute[256729]: 2025-11-29 08:14:03.371 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:04 compute-0 nova_compute[256729]: 2025-11-29 08:14:04.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:04 compute-0 nova_compute[256729]: 2025-11-29 08:14:04.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 08:14:04 compute-0 ceph-mon[75050]: pgmap v2363: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Nov 29 08:14:04 compute-0 nova_compute[256729]: 2025-11-29 08:14:04.696 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.0 KiB/s wr, 53 op/s
Nov 29 08:14:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e514 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:14:05
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta']
Nov 29 08:14:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:14:06 compute-0 ceph-mon[75050]: pgmap v2364: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.0 KiB/s wr, 53 op/s
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:14:07 compute-0 nova_compute[256729]: 2025-11-29 08:14:07.192 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:07 compute-0 nova_compute[256729]: 2025-11-29 08:14:07.192 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 08:14:07 compute-0 nova_compute[256729]: 2025-11-29 08:14:07.218 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 08:14:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 45 op/s
Nov 29 08:14:08 compute-0 nova_compute[256729]: 2025-11-29 08:14:08.175 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:08 compute-0 nova_compute[256729]: 2025-11-29 08:14:08.373 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1534211942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:14:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1534211942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:08 compute-0 ceph-mon[75050]: pgmap v2365: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 45 op/s
Nov 29 08:14:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1534211942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:14:08 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1534211942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:14:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Nov 29 08:14:09 compute-0 nova_compute[256729]: 2025-11-29 08:14:09.699 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e514 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e514 do_prune osdmap full prune enabled
Nov 29 08:14:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 e515: 3 total, 3 up, 3 in
Nov 29 08:14:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e515: 3 total, 3 up, 3 in
Nov 29 08:14:11 compute-0 ceph-mon[75050]: pgmap v2366: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Nov 29 08:14:11 compute-0 ceph-mon[75050]: osdmap e515: 3 total, 3 up, 3 in
Nov 29 08:14:11 compute-0 nova_compute[256729]: 2025-11-29 08:14:11.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Nov 29 08:14:12 compute-0 nova_compute[256729]: 2025-11-29 08:14:12.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:12 compute-0 nova_compute[256729]: 2025-11-29 08:14:12.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:14:12 compute-0 nova_compute[256729]: 2025-11-29 08:14:12.149 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:14:12 compute-0 nova_compute[256729]: 2025-11-29 08:14:12.164 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:14:12 compute-0 nova_compute[256729]: 2025-11-29 08:14:12.165 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:13 compute-0 ceph-mon[75050]: pgmap v2368: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Nov 29 08:14:13 compute-0 nova_compute[256729]: 2025-11-29 08:14:13.159 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.3 KiB/s wr, 32 op/s
Nov 29 08:14:13 compute-0 nova_compute[256729]: 2025-11-29 08:14:13.375 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:14 compute-0 nova_compute[256729]: 2025-11-29 08:14:14.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:14 compute-0 nova_compute[256729]: 2025-11-29 08:14:14.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:14 compute-0 nova_compute[256729]: 2025-11-29 08:14:14.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:14:14 compute-0 ceph-mon[75050]: pgmap v2369: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.3 KiB/s wr, 32 op/s
Nov 29 08:14:14 compute-0 nova_compute[256729]: 2025-11-29 08:14:14.702 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.183 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.183 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.183 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.184 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.184 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 1.6 KiB/s rd, 307 B/s wr, 3 op/s
Nov 29 08:14:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:14:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:14:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93988931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.672 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.875 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.876 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4315MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.877 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:15 compute-0 nova_compute[256729]: 2025-11-29 08:14:15.877 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.071 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.072 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.087 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:14:16 compute-0 ceph-mon[75050]: pgmap v2370: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 1.6 KiB/s rd, 307 B/s wr, 3 op/s
Nov 29 08:14:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/93988931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:14:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2576689576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.543 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.550 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.566 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.568 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:14:16 compute-0 nova_compute[256729]: 2025-11-29 08:14:16.568 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:14:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2576689576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:14:18 compute-0 nova_compute[256729]: 2025-11-29 08:14:18.377 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:18 compute-0 ceph-mon[75050]: pgmap v2371: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:19 compute-0 nova_compute[256729]: 2025-11-29 08:14:19.704 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:20 compute-0 nova_compute[256729]: 2025-11-29 08:14:20.568 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:21 compute-0 ceph-mon[75050]: pgmap v2372: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:21 compute-0 podman[306974]: 2025-11-29 08:14:21.763191073 +0000 UTC m=+0.105707437 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 08:14:21 compute-0 podman[306975]: 2025-11-29 08:14:21.7660026 +0000 UTC m=+0.088833209 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 08:14:21 compute-0 podman[306973]: 2025-11-29 08:14:21.795998856 +0000 UTC m=+0.144216945 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 08:14:23 compute-0 ceph-mon[75050]: pgmap v2373: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:23 compute-0 nova_compute[256729]: 2025-11-29 08:14:23.380 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:24 compute-0 nova_compute[256729]: 2025-11-29 08:14:24.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:14:24 compute-0 nova_compute[256729]: 2025-11-29 08:14:24.708 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:25 compute-0 ceph-mon[75050]: pgmap v2374: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:27 compute-0 ceph-mon[75050]: pgmap v2375: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:28 compute-0 nova_compute[256729]: 2025-11-29 08:14:28.406 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:29 compute-0 ceph-mon[75050]: pgmap v2376: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:29 compute-0 nova_compute[256729]: 2025-11-29 08:14:29.710 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:31 compute-0 ceph-mon[75050]: pgmap v2377: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:33 compute-0 ceph-mon[75050]: pgmap v2378: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 08:14:33 compute-0 nova_compute[256729]: 2025-11-29 08:14:33.408 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:34 compute-0 nova_compute[256729]: 2025-11-29 08:14:34.712 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 29 08:14:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:35 compute-0 ceph-mon[75050]: pgmap v2379: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 08:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:14:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:14:36 compute-0 ceph-mon[75050]: pgmap v2380: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 29 08:14:36 compute-0 nova_compute[256729]: 2025-11-29 08:14:36.966 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:36 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:14:36.966 163655 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:d2:d2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '62:f9:75:97:ed:64'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 08:14:36 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:14:36.967 163655 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 08:14:37 compute-0 sudo[307033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:37 compute-0 sudo[307033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:37 compute-0 sudo[307033]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:37 compute-0 sudo[307058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:37 compute-0 sudo[307058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:37 compute-0 sudo[307058]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:37 compute-0 sudo[307083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:37 compute-0 sudo[307083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:37 compute-0 sudo[307083]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:37 compute-0 sudo[307108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:14:37 compute-0 sudo[307108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 sudo[307108]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:38 compute-0 sudo[307165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:38 compute-0 sudo[307165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 sudo[307165]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:38 compute-0 sudo[307190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:38 compute-0 sudo[307190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 sudo[307190]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:38 compute-0 sudo[307215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:38 compute-0 sudo[307215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 sudo[307215]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:38 compute-0 nova_compute[256729]: 2025-11-29 08:14:38.410 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:38 compute-0 sudo[307240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 08:14:38 compute-0 sudo[307240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 ceph-mon[75050]: pgmap v2381: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:38 compute-0 sudo[307240]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:14:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:38 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:14:38 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:38 compute-0 sudo[307283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:38 compute-0 sudo[307283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 sudo[307283]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:38 compute-0 sudo[307308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:38 compute-0 sudo[307308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 sudo[307308]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:38 compute-0 sudo[307333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:38 compute-0 sudo[307333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:38 compute-0 sudo[307333]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:39 compute-0 sudo[307358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- inventory --format=json-pretty --filter-for-batch
Nov 29 08:14:39 compute-0 sudo[307358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:39 compute-0 podman[307424]: 2025-11-29 08:14:39.434355913 +0000 UTC m=+0.052540980 container create b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:14:39 compute-0 systemd[1]: Started libpod-conmon-b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8.scope.
Nov 29 08:14:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:39 compute-0 podman[307424]: 2025-11-29 08:14:39.41180724 +0000 UTC m=+0.029992387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:39 compute-0 podman[307424]: 2025-11-29 08:14:39.516149358 +0000 UTC m=+0.134334435 container init b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:39 compute-0 podman[307424]: 2025-11-29 08:14:39.527421165 +0000 UTC m=+0.145606232 container start b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:14:39 compute-0 podman[307424]: 2025-11-29 08:14:39.531457256 +0000 UTC m=+0.149642343 container attach b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chebyshev, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:14:39 compute-0 wonderful_chebyshev[307440]: 167 167
Nov 29 08:14:39 compute-0 systemd[1]: libpod-b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8.scope: Deactivated successfully.
Nov 29 08:14:39 compute-0 podman[307424]: 2025-11-29 08:14:39.535747202 +0000 UTC m=+0.153932269 container died b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 08:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5abce14600586fdb8bafb62c63f7688c3821f564723038da40b4280a7e36a11c-merged.mount: Deactivated successfully.
Nov 29 08:14:39 compute-0 podman[307424]: 2025-11-29 08:14:39.580846079 +0000 UTC m=+0.199031136 container remove b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 08:14:39 compute-0 systemd[1]: libpod-conmon-b00d5cc02bd4e9e382878a8d5ce310e1f18eaf41e87926bf314a20e4e07a94a8.scope: Deactivated successfully.
Nov 29 08:14:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:39 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:39 compute-0 nova_compute[256729]: 2025-11-29 08:14:39.714 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:39 compute-0 podman[307462]: 2025-11-29 08:14:39.77166396 +0000 UTC m=+0.061082993 container create ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_khorana, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:39 compute-0 systemd[1]: Started libpod-conmon-ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404.scope.
Nov 29 08:14:39 compute-0 podman[307462]: 2025-11-29 08:14:39.740762459 +0000 UTC m=+0.030181492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c6b996433c6bf216fcb5c2b3a9cd11758efa098e6d5dd343f181aaced124a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c6b996433c6bf216fcb5c2b3a9cd11758efa098e6d5dd343f181aaced124a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c6b996433c6bf216fcb5c2b3a9cd11758efa098e6d5dd343f181aaced124a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c6b996433c6bf216fcb5c2b3a9cd11758efa098e6d5dd343f181aaced124a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:39 compute-0 podman[307462]: 2025-11-29 08:14:39.874262542 +0000 UTC m=+0.163681575 container init ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_khorana, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:14:39 compute-0 podman[307462]: 2025-11-29 08:14:39.886803853 +0000 UTC m=+0.176222856 container start ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_khorana, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 08:14:39 compute-0 podman[307462]: 2025-11-29 08:14:39.889796014 +0000 UTC m=+0.179215017 container attach ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:14:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:40 compute-0 ceph-mon[75050]: pgmap v2382: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]: [
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:     {
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "available": false,
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "ceph_device": false,
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "lsm_data": {},
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "lvs": [],
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "path": "/dev/sr0",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "rejected_reasons": [
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "Has a FileSystem",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "Insufficient space (<5GB)"
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         ],
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         "sys_api": {
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "actuators": null,
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "device_nodes": "sr0",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "devname": "sr0",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "human_readable_size": "482.00 KB",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "id_bus": "ata",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "model": "QEMU DVD-ROM",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "nr_requests": "2",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "parent": "/dev/sr0",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "partitions": {},
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "path": "/dev/sr0",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "removable": "1",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "rev": "2.5+",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "ro": "0",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "rotational": "1",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "sas_address": "",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "sas_device_handle": "",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "scheduler_mode": "mq-deadline",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "sectors": 0,
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "sectorsize": "2048",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "size": 493568.0,
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "support_discard": "2048",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "type": "disk",
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:             "vendor": "QEMU"
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:         }
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]:     }
Nov 29 08:14:41 compute-0 optimistic_khorana[307478]: ]
Nov 29 08:14:41 compute-0 systemd[1]: libpod-ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404.scope: Deactivated successfully.
Nov 29 08:14:41 compute-0 systemd[1]: libpod-ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404.scope: Consumed 1.616s CPU time.
Nov 29 08:14:41 compute-0 podman[309306]: 2025-11-29 08:14:41.489726485 +0000 UTC m=+0.026398460 container died ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c6b996433c6bf216fcb5c2b3a9cd11758efa098e6d5dd343f181aaced124a9-merged.mount: Deactivated successfully.
Nov 29 08:14:41 compute-0 podman[309306]: 2025-11-29 08:14:41.542725726 +0000 UTC m=+0.079397701 container remove ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:14:41 compute-0 systemd[1]: libpod-conmon-ddb8428bb6d2ead7d4dd71a400df07083494d9d9c39566ee827091f1ad3dc404.scope: Deactivated successfully.
Nov 29 08:14:41 compute-0 sudo[307358]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:41 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev caff3895-07b8-430f-bc5a-6fe458047d57 does not exist
Nov 29 08:14:41 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev b8ddd9c7-dae4-40eb-bfe8-00d622119b2f does not exist
Nov 29 08:14:41 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev ec66949f-3c82-4e2b-9cbe-86799225fee5 does not exist
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:14:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:14:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:41 compute-0 sudo[309321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:41 compute-0 sudo[309321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:41 compute-0 sudo[309321]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:41 compute-0 sudo[309346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:41 compute-0 sudo[309346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:41 compute-0 sudo[309346]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:41 compute-0 sudo[309371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:41 compute-0 sudo[309371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:41 compute-0 sudo[309371]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:41 compute-0 sudo[309396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:14:41 compute-0 sudo[309396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:42 compute-0 podman[309462]: 2025-11-29 08:14:42.375302709 +0000 UTC m=+0.065177235 container create df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 08:14:42 compute-0 systemd[1]: Started libpod-conmon-df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323.scope.
Nov 29 08:14:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:42 compute-0 podman[309462]: 2025-11-29 08:14:42.349134967 +0000 UTC m=+0.039009493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:42 compute-0 podman[309462]: 2025-11-29 08:14:42.459903141 +0000 UTC m=+0.149777677 container init df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 08:14:42 compute-0 podman[309462]: 2025-11-29 08:14:42.472862763 +0000 UTC m=+0.162737279 container start df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:14:42 compute-0 podman[309462]: 2025-11-29 08:14:42.477700005 +0000 UTC m=+0.167574531 container attach df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:14:42 compute-0 zen_ellis[309478]: 167 167
Nov 29 08:14:42 compute-0 systemd[1]: libpod-df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323.scope: Deactivated successfully.
Nov 29 08:14:42 compute-0 podman[309462]: 2025-11-29 08:14:42.480995624 +0000 UTC m=+0.170870190 container died df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 29 08:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cc6e5a7d4a2cae4994ea1f6384633ccbec40b151bef68db809c4ff403030061-merged.mount: Deactivated successfully.
Nov 29 08:14:42 compute-0 podman[309462]: 2025-11-29 08:14:42.526127673 +0000 UTC m=+0.216002169 container remove df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:14:42 compute-0 systemd[1]: libpod-conmon-df1e1d33bcb1ce968949c7cca84399cb7bcbc75b8d2b81b796411d1d064bd323.scope: Deactivated successfully.
Nov 29 08:14:42 compute-0 ceph-mon[75050]: pgmap v2383: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:14:42 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:14:42 compute-0 podman[309502]: 2025-11-29 08:14:42.759464131 +0000 UTC m=+0.052221832 container create 40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 08:14:42 compute-0 systemd[1]: Started libpod-conmon-40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a.scope.
Nov 29 08:14:42 compute-0 podman[309502]: 2025-11-29 08:14:42.736547978 +0000 UTC m=+0.029305649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299e6361bbf283bcf0f3c6626c8213e7a664d95e72bbf84149ef3911cf0e6ad7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299e6361bbf283bcf0f3c6626c8213e7a664d95e72bbf84149ef3911cf0e6ad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299e6361bbf283bcf0f3c6626c8213e7a664d95e72bbf84149ef3911cf0e6ad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299e6361bbf283bcf0f3c6626c8213e7a664d95e72bbf84149ef3911cf0e6ad7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/299e6361bbf283bcf0f3c6626c8213e7a664d95e72bbf84149ef3911cf0e6ad7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:42 compute-0 podman[309502]: 2025-11-29 08:14:42.87921908 +0000 UTC m=+0.171976781 container init 40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:14:42 compute-0 podman[309502]: 2025-11-29 08:14:42.887992958 +0000 UTC m=+0.180750659 container start 40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 08:14:42 compute-0 podman[309502]: 2025-11-29 08:14:42.891948426 +0000 UTC m=+0.184706127 container attach 40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_williams, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:14:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:43 compute-0 nova_compute[256729]: 2025-11-29 08:14:43.412 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:43 compute-0 gifted_williams[309518]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:14:43 compute-0 gifted_williams[309518]: --> relative data size: 1.0
Nov 29 08:14:43 compute-0 gifted_williams[309518]: --> All data devices are unavailable
Nov 29 08:14:43 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:14:43.969 163655 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=df234f2c-4343-4c91-861d-13d184c56aa0, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 08:14:44 compute-0 systemd[1]: libpod-40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a.scope: Deactivated successfully.
Nov 29 08:14:44 compute-0 podman[309502]: 2025-11-29 08:14:44.002290465 +0000 UTC m=+1.295048206 container died 40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 08:14:44 compute-0 systemd[1]: libpod-40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a.scope: Consumed 1.060s CPU time.
Nov 29 08:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-299e6361bbf283bcf0f3c6626c8213e7a664d95e72bbf84149ef3911cf0e6ad7-merged.mount: Deactivated successfully.
Nov 29 08:14:44 compute-0 podman[309502]: 2025-11-29 08:14:44.082920559 +0000 UTC m=+1.375678260 container remove 40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_williams, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 08:14:44 compute-0 systemd[1]: libpod-conmon-40c910b0808936d9556a0d553fdc0f3cc412551f19676c71d23011488613a78a.scope: Deactivated successfully.
Nov 29 08:14:44 compute-0 sudo[309396]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:44 compute-0 sudo[309559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:44 compute-0 sudo[309559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:44 compute-0 sudo[309559]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:44 compute-0 sudo[309584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:44 compute-0 sudo[309584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:44 compute-0 sudo[309584]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:44 compute-0 sudo[309609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:44 compute-0 sudo[309609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:44 compute-0 sudo[309609]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:44 compute-0 sudo[309634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:14:44 compute-0 sudo[309634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:44 compute-0 ceph-mon[75050]: pgmap v2384: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Nov 29 08:14:44 compute-0 nova_compute[256729]: 2025-11-29 08:14:44.716 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:44 compute-0 podman[309698]: 2025-11-29 08:14:44.823035425 +0000 UTC m=+0.043668888 container create 503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 08:14:44 compute-0 systemd[1]: Started libpod-conmon-503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea.scope.
Nov 29 08:14:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:44 compute-0 podman[309698]: 2025-11-29 08:14:44.803580066 +0000 UTC m=+0.024213549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:44 compute-0 podman[309698]: 2025-11-29 08:14:44.900560025 +0000 UTC m=+0.121193488 container init 503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 08:14:44 compute-0 podman[309698]: 2025-11-29 08:14:44.906908898 +0000 UTC m=+0.127542341 container start 503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 08:14:44 compute-0 podman[309698]: 2025-11-29 08:14:44.909914319 +0000 UTC m=+0.130547762 container attach 503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 08:14:44 compute-0 brave_jemison[309715]: 167 167
Nov 29 08:14:44 compute-0 systemd[1]: libpod-503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea.scope: Deactivated successfully.
Nov 29 08:14:44 compute-0 podman[309698]: 2025-11-29 08:14:44.911481922 +0000 UTC m=+0.132115375 container died 503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 08:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c86f51dc461cd851bb740e23c9b44249992b2a20d1e106236cdadd08ed54eff0-merged.mount: Deactivated successfully.
Nov 29 08:14:44 compute-0 podman[309698]: 2025-11-29 08:14:44.944490321 +0000 UTC m=+0.165123774 container remove 503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:44 compute-0 systemd[1]: libpod-conmon-503ed0cd93ab2c391312889ce7870b1c56501f0de57e6926c94a2869ee9dedea.scope: Deactivated successfully.
Nov 29 08:14:45 compute-0 podman[309738]: 2025-11-29 08:14:45.130018318 +0000 UTC m=+0.055325046 container create b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 08:14:45 compute-0 systemd[1]: Started libpod-conmon-b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a.scope.
Nov 29 08:14:45 compute-0 podman[309738]: 2025-11-29 08:14:45.10436138 +0000 UTC m=+0.029668198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ccfde58b74c20ed7eaa752de2f25cf53ed4f48c3b4743af31d11e48e38c925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ccfde58b74c20ed7eaa752de2f25cf53ed4f48c3b4743af31d11e48e38c925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ccfde58b74c20ed7eaa752de2f25cf53ed4f48c3b4743af31d11e48e38c925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ccfde58b74c20ed7eaa752de2f25cf53ed4f48c3b4743af31d11e48e38c925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:45 compute-0 podman[309738]: 2025-11-29 08:14:45.233295478 +0000 UTC m=+0.158602256 container init b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 08:14:45 compute-0 podman[309738]: 2025-11-29 08:14:45.240996978 +0000 UTC m=+0.166303716 container start b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:14:45 compute-0 podman[309738]: 2025-11-29 08:14:45.243915267 +0000 UTC m=+0.169222005 container attach b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 08:14:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Nov 29 08:14:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:45 compute-0 tender_cray[309754]: {
Nov 29 08:14:45 compute-0 tender_cray[309754]:     "0": [
Nov 29 08:14:45 compute-0 tender_cray[309754]:         {
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "devices": [
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "/dev/loop3"
Nov 29 08:14:45 compute-0 tender_cray[309754]:             ],
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_name": "ceph_lv0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_size": "21470642176",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "name": "ceph_lv0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "tags": {
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cluster_name": "ceph",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.crush_device_class": "",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.encrypted": "0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osd_id": "0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.type": "block",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.vdo": "0"
Nov 29 08:14:45 compute-0 tender_cray[309754]:             },
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "type": "block",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "vg_name": "ceph_vg0"
Nov 29 08:14:45 compute-0 tender_cray[309754]:         }
Nov 29 08:14:45 compute-0 tender_cray[309754]:     ],
Nov 29 08:14:45 compute-0 tender_cray[309754]:     "1": [
Nov 29 08:14:45 compute-0 tender_cray[309754]:         {
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "devices": [
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "/dev/loop4"
Nov 29 08:14:45 compute-0 tender_cray[309754]:             ],
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_name": "ceph_lv1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_size": "21470642176",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "name": "ceph_lv1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "tags": {
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cluster_name": "ceph",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.crush_device_class": "",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.encrypted": "0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osd_id": "1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.type": "block",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.vdo": "0"
Nov 29 08:14:45 compute-0 tender_cray[309754]:             },
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "type": "block",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "vg_name": "ceph_vg1"
Nov 29 08:14:45 compute-0 tender_cray[309754]:         }
Nov 29 08:14:45 compute-0 tender_cray[309754]:     ],
Nov 29 08:14:45 compute-0 tender_cray[309754]:     "2": [
Nov 29 08:14:45 compute-0 tender_cray[309754]:         {
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "devices": [
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "/dev/loop5"
Nov 29 08:14:45 compute-0 tender_cray[309754]:             ],
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_name": "ceph_lv2",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_size": "21470642176",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "name": "ceph_lv2",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "tags": {
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.cluster_name": "ceph",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.crush_device_class": "",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.encrypted": "0",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osd_id": "2",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.type": "block",
Nov 29 08:14:45 compute-0 tender_cray[309754]:                 "ceph.vdo": "0"
Nov 29 08:14:45 compute-0 tender_cray[309754]:             },
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "type": "block",
Nov 29 08:14:45 compute-0 tender_cray[309754]:             "vg_name": "ceph_vg2"
Nov 29 08:14:45 compute-0 tender_cray[309754]:         }
Nov 29 08:14:45 compute-0 tender_cray[309754]:     ]
Nov 29 08:14:45 compute-0 tender_cray[309754]: }
Nov 29 08:14:46 compute-0 systemd[1]: libpod-b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a.scope: Deactivated successfully.
Nov 29 08:14:46 compute-0 podman[309738]: 2025-11-29 08:14:46.019873649 +0000 UTC m=+0.945180387 container died b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 08:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-74ccfde58b74c20ed7eaa752de2f25cf53ed4f48c3b4743af31d11e48e38c925-merged.mount: Deactivated successfully.
Nov 29 08:14:46 compute-0 podman[309738]: 2025-11-29 08:14:46.087229902 +0000 UTC m=+1.012536630 container remove b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:14:46 compute-0 systemd[1]: libpod-conmon-b19071364f016e0dadf8e95bc43aa4b04620f816708878d5bad08d7b13a7040a.scope: Deactivated successfully.
Nov 29 08:14:46 compute-0 sudo[309634]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:46 compute-0 sudo[309775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:46 compute-0 sudo[309775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:46 compute-0 sudo[309775]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:46 compute-0 sudo[309800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:14:46 compute-0 sudo[309800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:46 compute-0 sudo[309800]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:46 compute-0 sudo[309825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:46 compute-0 sudo[309825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:46 compute-0 sudo[309825]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:46 compute-0 sudo[309850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:14:46 compute-0 sudo[309850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:46 compute-0 ceph-mon[75050]: pgmap v2385: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Nov 29 08:14:46 compute-0 podman[309916]: 2025-11-29 08:14:46.848542834 +0000 UTC m=+0.050239866 container create b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:46 compute-0 systemd[1]: Started libpod-conmon-b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6.scope.
Nov 29 08:14:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:46 compute-0 podman[309916]: 2025-11-29 08:14:46.827435651 +0000 UTC m=+0.029132723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:46 compute-0 podman[309916]: 2025-11-29 08:14:46.931572994 +0000 UTC m=+0.133270046 container init b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 08:14:46 compute-0 podman[309916]: 2025-11-29 08:14:46.939600442 +0000 UTC m=+0.141297504 container start b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 08:14:46 compute-0 podman[309916]: 2025-11-29 08:14:46.943823717 +0000 UTC m=+0.145520769 container attach b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:46 compute-0 gifted_heyrovsky[309932]: 167 167
Nov 29 08:14:46 compute-0 systemd[1]: libpod-b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6.scope: Deactivated successfully.
Nov 29 08:14:46 compute-0 podman[309916]: 2025-11-29 08:14:46.947016375 +0000 UTC m=+0.148713417 container died b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 08:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-17cc049b8788b0e1e00064e00d2859fbfb00ed9d1a0c06c2d724af242a1650fd-merged.mount: Deactivated successfully.
Nov 29 08:14:46 compute-0 podman[309916]: 2025-11-29 08:14:46.986131189 +0000 UTC m=+0.187828231 container remove b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 08:14:47 compute-0 systemd[1]: libpod-conmon-b81a501cf42e6429898ff6477b4d3795df51065eb9db4bb1c0ecd05a595a9ee6.scope: Deactivated successfully.
Nov 29 08:14:47 compute-0 podman[309955]: 2025-11-29 08:14:47.174901574 +0000 UTC m=+0.047518303 container create d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:14:47 compute-0 systemd[1]: Started libpod-conmon-d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02.scope.
Nov 29 08:14:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:14:47 compute-0 podman[309955]: 2025-11-29 08:14:47.155129887 +0000 UTC m=+0.027746626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bb2e6b1bcf4ddb985e3cd602e24de43335bbb30c60a499cc03b8f49d619c25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bb2e6b1bcf4ddb985e3cd602e24de43335bbb30c60a499cc03b8f49d619c25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bb2e6b1bcf4ddb985e3cd602e24de43335bbb30c60a499cc03b8f49d619c25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53bb2e6b1bcf4ddb985e3cd602e24de43335bbb30c60a499cc03b8f49d619c25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:14:47 compute-0 podman[309955]: 2025-11-29 08:14:47.271348528 +0000 UTC m=+0.143965277 container init d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:14:47 compute-0 podman[309955]: 2025-11-29 08:14:47.282917074 +0000 UTC m=+0.155533803 container start d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:14:47 compute-0 podman[309955]: 2025-11-29 08:14:47.287546729 +0000 UTC m=+0.160163508 container attach d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:14:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]: {
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "osd_id": 2,
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "type": "bluestore"
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:     },
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "osd_id": 1,
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "type": "bluestore"
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:     },
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "osd_id": 0,
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:         "type": "bluestore"
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]:     }
Nov 29 08:14:48 compute-0 gallant_matsumoto[309972]: }
Nov 29 08:14:48 compute-0 systemd[1]: libpod-d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02.scope: Deactivated successfully.
Nov 29 08:14:48 compute-0 podman[309955]: 2025-11-29 08:14:48.308663292 +0000 UTC m=+1.181280061 container died d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:14:48 compute-0 systemd[1]: libpod-d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02.scope: Consumed 1.031s CPU time.
Nov 29 08:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-53bb2e6b1bcf4ddb985e3cd602e24de43335bbb30c60a499cc03b8f49d619c25-merged.mount: Deactivated successfully.
Nov 29 08:14:48 compute-0 podman[309955]: 2025-11-29 08:14:48.364642905 +0000 UTC m=+1.237259634 container remove d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_matsumoto, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:14:48 compute-0 systemd[1]: libpod-conmon-d28519f08d1cc36e367c2f60fd60df1593343822c532cb14d345784a48c64e02.scope: Deactivated successfully.
Nov 29 08:14:48 compute-0 sudo[309850]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:14:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:14:48 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:48 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 3058cff8-3451-4d59-90ce-d1f944c4ae29 does not exist
Nov 29 08:14:48 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev a331eea0-4840-4007-a498-7c47a450b32d does not exist
Nov 29 08:14:48 compute-0 nova_compute[256729]: 2025-11-29 08:14:48.413 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:48 compute-0 sudo[310019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:14:48 compute-0 sudo[310019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:48 compute-0 sudo[310019]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:48 compute-0 sudo[310044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:14:48 compute-0 sudo[310044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:14:48 compute-0 sudo[310044]: pam_unix(sudo:session): session closed for user root
Nov 29 08:14:48 compute-0 ceph-mon[75050]: pgmap v2386: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
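
The recurring pgmap lines are the cluster's heartbeat summary: PG states, stored data, raw usage, and instantaneous client I/O rates. A rough sketch for pulling the figures out of such a line when post-processing this log; the regex is an assumption fitted to the format seen here, not a Ceph-defined schema:

    import re

    line = ("pgmap v2386: 305 pgs: 305 active+clean; 271 MiB data, "
            "671 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s")
    # The "; ... op/s" rate tail is optional, so the pattern stops at "avail".
    m = re.search(r"pgmap v(\d+): (\d+) pgs: .*?; (.+?) data, (.+?) used, "
                  r"(.+?) / (.+?) avail", line)
    version, pgs, data, used, avail, total = m.groups()
    print(version, pgs, data, used, avail, total)
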
Nov 29 08:14:48 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:48 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:14:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:49 compute-0 nova_compute[256729]: 2025-11-29 08:14:49.752 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:50 compute-0 ceph-mon[75050]: pgmap v2387: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:52 compute-0 podman[310070]: 2025-11-29 08:14:52.722904001 +0000 UTC m=+0.087162723 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:14:52 compute-0 podman[310071]: 2025-11-29 08:14:52.729418749 +0000 UTC m=+0.082013683 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:14:52 compute-0 podman[310069]: 2025-11-29 08:14:52.761807069 +0000 UTC m=+0.121277290 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
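
The three health_status=healthy events come from podman's periodic healthcheck timers; per the embedded config_data, each container mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs /openstack/healthcheck as the test. The same check can be triggered by hand; a sketch, assuming the multipathd container from above is running:

    import subprocess

    # Exit code 0 corresponds to the health_status=healthy events above.
    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)
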
Nov 29 08:14:52 compute-0 ceph-mon[75050]: pgmap v2388: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:53 compute-0 nova_compute[256729]: 2025-11-29 08:14:53.415 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:54 compute-0 nova_compute[256729]: 2025-11-29 08:14:54.754 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:54 compute-0 ceph-mon[75050]: pgmap v2389: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:14:57 compute-0 ceph-mon[75050]: pgmap v2390: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:14:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 255 B/s wr, 0 op/s
Nov 29 08:14:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e515 do_prune osdmap full prune enabled
Nov 29 08:14:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e516 e516: 3 total, 3 up, 3 in
Nov 29 08:14:58 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e516: 3 total, 3 up, 3 in
Nov 29 08:14:58 compute-0 nova_compute[256729]: 2025-11-29 08:14:58.417 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:59 compute-0 ceph-mon[75050]: pgmap v2391: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 255 B/s wr, 0 op/s
Nov 29 08:14:59 compute-0 ceph-mon[75050]: osdmap e516: 3 total, 3 up, 3 in
Nov 29 08:14:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 511 B/s wr, 1 op/s
Nov 29 08:14:59 compute-0 nova_compute[256729]: 2025-11-29 08:14:59.756 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:14:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:14:59.793 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:14:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:14:59.794 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:14:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:14:59.794 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
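
The acquire/wait/hold trio above is oslo.concurrency's standard lock instrumentation (lockutils.py:404/409/423). The usual way code ends up in those log lines is the `lockutils.synchronized` decorator; a minimal sketch, with the monitored body elided:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named in-process lock held; oslo emits the
        # "Acquiring lock" / "acquired :: waited" / "released :: held"
        # DEBUG lines seen above around this call.
        pass
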
Nov 29 08:15:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e516 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:01 compute-0 ceph-mon[75050]: pgmap v2393: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 511 B/s wr, 1 op/s
Nov 29 08:15:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 511 B/s wr, 1 op/s
Nov 29 08:15:02 compute-0 ceph-mon[75050]: pgmap v2394: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 716 B/s rd, 511 B/s wr, 1 op/s
Nov 29 08:15:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e516 do_prune osdmap full prune enabled
Nov 29 08:15:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Nov 29 08:15:03 compute-0 nova_compute[256729]: 2025-11-29 08:15:03.418 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:03 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e517 e517: 3 total, 3 up, 3 in
Nov 29 08:15:03 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e517: 3 total, 3 up, 3 in
Nov 29 08:15:04 compute-0 ceph-mon[75050]: pgmap v2395: 305 pgs: 305 active+clean; 271 MiB data, 671 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Nov 29 08:15:04 compute-0 ceph-mon[75050]: osdmap e517: 3 total, 3 up, 3 in
Nov 29 08:15:04 compute-0 nova_compute[256729]: 2025-11-29 08:15:04.760 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:04 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2111833844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:04 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:04 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2111833844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Nov 29 08:15:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e517 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2111833844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:05 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/2111833844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:15:05
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'images', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 29 08:15:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:15:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344171415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:06 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:06 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344171415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:06 compute-0 ceph-mon[75050]: pgmap v2397: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Nov 29 08:15:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1344171415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:06 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/1344171415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:15:07 compute-0 nova_compute[256729]: 2025-11-29 08:15:07.199 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 3.3 KiB/s wr, 93 op/s
Nov 29 08:15:07 compute-0 ovn_controller[153383]: 2025-11-29T08:15:07Z|00301|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 29 08:15:08 compute-0 nova_compute[256729]: 2025-11-29 08:15:08.420 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:08 compute-0 ceph-mon[75050]: pgmap v2398: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 3.3 KiB/s wr, 93 op/s
Nov 29 08:15:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:15:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/221553465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:15:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/221553465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:09 compute-0 nova_compute[256729]: 2025-11-29 08:15:09.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 2.8 KiB/s wr, 88 op/s
Nov 29 08:15:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/221553465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:15:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/221553465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:15:09 compute-0 nova_compute[256729]: 2025-11-29 08:15:09.763 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e517 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e517 do_prune osdmap full prune enabled
Nov 29 08:15:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 e518: 3 total, 3 up, 3 in
Nov 29 08:15:10 compute-0 ceph-mon[75050]: log_channel(cluster) log [DBG] : osdmap e518: 3 total, 3 up, 3 in
Nov 29 08:15:10 compute-0 ceph-mon[75050]: pgmap v2399: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 2.8 KiB/s wr, 88 op/s
Nov 29 08:15:10 compute-0 ceph-mon[75050]: osdmap e518: 3 total, 3 up, 3 in
Nov 29 08:15:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.5 KiB/s wr, 74 op/s
Nov 29 08:15:12 compute-0 ceph-mon[75050]: pgmap v2401: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 1.5 KiB/s wr, 74 op/s
Nov 29 08:15:13 compute-0 nova_compute[256729]: 2025-11-29 08:15:13.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:13 compute-0 nova_compute[256729]: 2025-11-29 08:15:13.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.2 KiB/s wr, 60 op/s
Nov 29 08:15:13 compute-0 nova_compute[256729]: 2025-11-29 08:15:13.422 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:14 compute-0 nova_compute[256729]: 2025-11-29 08:15:14.144 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:14 compute-0 nova_compute[256729]: 2025-11-29 08:15:14.151 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:14 compute-0 nova_compute[256729]: 2025-11-29 08:15:14.151 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:15:14 compute-0 nova_compute[256729]: 2025-11-29 08:15:14.151 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:15:14 compute-0 nova_compute[256729]: 2025-11-29 08:15:14.237 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:15:14 compute-0 nova_compute[256729]: 2025-11-29 08:15:14.766 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:15 compute-0 ceph-mon[75050]: pgmap v2402: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.2 KiB/s wr, 60 op/s
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.150 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.326 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.326 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.326 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.327 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.327 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1023 B/s wr, 55 op/s
Nov 29 08:15:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:15:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
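
Each pg_autoscaler line reports a pool's share of raw capacity, its bias, the resulting fractional PG target, and a quantized value. Loosely, the target is the capacity ratio times the per-root PG budget, rounded to a power of two; the real module also enforces per-pool minimums and a change threshold, which is why the near-empty pools above stay at 32. An illustrative sketch of just the rounding step (an approximation, not Ceph's code):

    import math

    def quantize_pg_num(target: float, pg_min: int = 1) -> int:
        # Approximation: round a fractional PG target to a power of two.
        if target <= pg_min:
            return pg_min
        return 2 ** round(math.log2(target))

    # The '.mgr' pool above: raw target ~0.00216 -> quantized to 1.
    print(quantize_pg_num(0.0021557249951162337))  # 1
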
Nov 29 08:15:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:15 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1477786524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.768 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
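
The nova resource tracker learns RBD pool capacity by shelling out to the ceph CLI, exactly as logged above (and as audited by ceph-mon as client.openstack). A minimal reproduction of the probe, assuming the client.openstack keyring and conf path from the log are available:

    import json
    import subprocess

    # Same invocation nova_compute runs via oslo_concurrency.processutils.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])
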
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.984 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.986 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4272MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.986 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:15 compute-0 nova_compute[256729]: 2025-11-29 08:15:15.987 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:16 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1477786524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.183 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.184 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.202 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:15:16 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:15:16 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3959034319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.669 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.676 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.783 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.786 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:15:16 compute-0 nova_compute[256729]: 2025-11-29 08:15:16.786 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
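
The inventory dict above is what the tracker reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 52.2
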
Nov 29 08:15:17 compute-0 ceph-mon[75050]: pgmap v2403: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1023 B/s wr, 55 op/s
Nov 29 08:15:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3959034319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:15:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 08:15:17 compute-0 nova_compute[256729]: 2025-11-29 08:15:17.785 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:17 compute-0 nova_compute[256729]: 2025-11-29 08:15:17.786 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:15:18 compute-0 ceph-mon[75050]: pgmap v2404: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 08:15:18 compute-0 nova_compute[256729]: 2025-11-29 08:15:18.426 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:19 compute-0 nova_compute[256729]: 2025-11-29 08:15:19.769 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:20 compute-0 nova_compute[256729]: 2025-11-29 08:15:20.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:15:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:20 compute-0 ceph-mon[75050]: pgmap v2405: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:22 compute-0 ceph-mon[75050]: pgmap v2406: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:23 compute-0 nova_compute[256729]: 2025-11-29 08:15:23.427 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:23 compute-0 podman[310180]: 2025-11-29 08:15:23.685255732 +0000 UTC m=+0.055789759 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 08:15:23 compute-0 podman[310181]: 2025-11-29 08:15:23.692679205 +0000 UTC m=+0.054513325 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 08:15:23 compute-0 podman[310179]: 2025-11-29 08:15:23.707319753 +0000 UTC m=+0.080885412 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 08:15:24 compute-0 nova_compute[256729]: 2025-11-29 08:15:24.771 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:25 compute-0 ceph-mon[75050]: pgmap v2407: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:26 compute-0 ceph-mon[75050]: pgmap v2408: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:28 compute-0 nova_compute[256729]: 2025-11-29 08:15:28.430 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:28 compute-0 ceph-mon[75050]: pgmap v2409: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:29 compute-0 nova_compute[256729]: 2025-11-29 08:15:29.775 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:30 compute-0 ceph-mon[75050]: pgmap v2410: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:32 compute-0 ceph-mon[75050]: pgmap v2411: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:33 compute-0 nova_compute[256729]: 2025-11-29 08:15:33.431 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:34 compute-0 ceph-mon[75050]: pgmap v2412: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:34 compute-0 nova_compute[256729]: 2025-11-29 08:15:34.778 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:15:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:15:36 compute-0 ceph-mon[75050]: pgmap v2413: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:38 compute-0 nova_compute[256729]: 2025-11-29 08:15:38.436 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:38 compute-0 ceph-mon[75050]: pgmap v2414: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:39 compute-0 nova_compute[256729]: 2025-11-29 08:15:39.781 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:40 compute-0 ceph-mon[75050]: pgmap v2415: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:43 compute-0 ceph-mon[75050]: pgmap v2416: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:43 compute-0 nova_compute[256729]: 2025-11-29 08:15:43.476 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:44 compute-0 ceph-mon[75050]: pgmap v2417: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:44 compute-0 nova_compute[256729]: 2025-11-29 08:15:44.784 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:46 compute-0 ceph-mon[75050]: pgmap v2418: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:48 compute-0 nova_compute[256729]: 2025-11-29 08:15:48.480 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:48 compute-0 sudo[310245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:48 compute-0 sudo[310245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:48 compute-0 sudo[310245]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:48 compute-0 ceph-mon[75050]: pgmap v2419: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:48 compute-0 sudo[310270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:15:48 compute-0 sudo[310270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:48 compute-0 sudo[310270]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:48 compute-0 sudo[310295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:48 compute-0 sudo[310295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:48 compute-0 sudo[310295]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:48 compute-0 sudo[310320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:15:48 compute-0 sudo[310320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:49 compute-0 sudo[310320]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:15:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:15:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:15:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:15:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 330473ad-bab0-4c72-83b8-eac5a62108b5 does not exist
Nov 29 08:15:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 1d4a2a59-6261-47a1-a9a2-f74cb5dfcc87 does not exist
Nov 29 08:15:49 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 69bcf294-218f-460c-86e0-5878fda181fc does not exist
Nov 29 08:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:15:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:15:49 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:15:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:15:49 compute-0 sudo[310375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:49 compute-0 sudo[310375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:49 compute-0 sudo[310375]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:49 compute-0 sudo[310400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:15:49 compute-0 sudo[310400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:49 compute-0 sudo[310400]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:15:49 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:15:49 compute-0 sudo[310425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:49 compute-0 sudo[310425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:49 compute-0 sudo[310425]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:49 compute-0 nova_compute[256729]: 2025-11-29 08:15:49.786 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:49 compute-0 sudo[310450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:15:49 compute-0 sudo[310450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:50 compute-0 podman[310515]: 2025-11-29 08:15:50.250823813 +0000 UTC m=+0.046686012 container create c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_benz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:15:50 compute-0 systemd[1]: Started libpod-conmon-c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32.scope.
Nov 29 08:15:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:50 compute-0 podman[310515]: 2025-11-29 08:15:50.231505227 +0000 UTC m=+0.027367466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:50 compute-0 podman[310515]: 2025-11-29 08:15:50.350433122 +0000 UTC m=+0.146295411 container init c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_benz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:15:50 compute-0 podman[310515]: 2025-11-29 08:15:50.365931654 +0000 UTC m=+0.161793863 container start c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_benz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 08:15:50 compute-0 podman[310515]: 2025-11-29 08:15:50.369868821 +0000 UTC m=+0.165731060 container attach c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 08:15:50 compute-0 flamboyant_benz[310531]: 167 167
Nov 29 08:15:50 compute-0 systemd[1]: libpod-c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32.scope: Deactivated successfully.
Nov 29 08:15:50 compute-0 podman[310515]: 2025-11-29 08:15:50.375159865 +0000 UTC m=+0.171022064 container died c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-481beba0fdd0287f304caf9a6a678cbee164ea81d99331986988af953158dfcb-merged.mount: Deactivated successfully.
Nov 29 08:15:50 compute-0 podman[310515]: 2025-11-29 08:15:50.421787384 +0000 UTC m=+0.217649623 container remove c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_benz, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 08:15:50 compute-0 systemd[1]: libpod-conmon-c5e6e3345d1ec9e52e20afb0f878fa6ee9dbfbc3449f04715123bb9b706a0f32.scope: Deactivated successfully.
Nov 29 08:15:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:50 compute-0 podman[310556]: 2025-11-29 08:15:50.632864736 +0000 UTC m=+0.048074238 container create 4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:15:50 compute-0 systemd[1]: Started libpod-conmon-4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6.scope.
Nov 29 08:15:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:50 compute-0 podman[310556]: 2025-11-29 08:15:50.616439709 +0000 UTC m=+0.031649231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7065e0805c49b90ecb6fe3595577d67da18bd034ac0ac86e5491915df8a08ff2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7065e0805c49b90ecb6fe3595577d67da18bd034ac0ac86e5491915df8a08ff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7065e0805c49b90ecb6fe3595577d67da18bd034ac0ac86e5491915df8a08ff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7065e0805c49b90ecb6fe3595577d67da18bd034ac0ac86e5491915df8a08ff2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7065e0805c49b90ecb6fe3595577d67da18bd034ac0ac86e5491915df8a08ff2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:50 compute-0 ceph-mon[75050]: pgmap v2420: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:50 compute-0 podman[310556]: 2025-11-29 08:15:50.734574884 +0000 UTC m=+0.149784436 container init 4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khorana, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:15:50 compute-0 podman[310556]: 2025-11-29 08:15:50.756151691 +0000 UTC m=+0.171361233 container start 4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khorana, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 08:15:50 compute-0 podman[310556]: 2025-11-29 08:15:50.761288851 +0000 UTC m=+0.176498403 container attach 4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:15:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:51 compute-0 trusting_khorana[310572]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:15:51 compute-0 trusting_khorana[310572]: --> relative data size: 1.0
Nov 29 08:15:51 compute-0 trusting_khorana[310572]: --> All data devices are unavailable
Nov 29 08:15:51 compute-0 systemd[1]: libpod-4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6.scope: Deactivated successfully.
Nov 29 08:15:51 compute-0 systemd[1]: libpod-4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6.scope: Consumed 1.096s CPU time.
Nov 29 08:15:51 compute-0 podman[310556]: 2025-11-29 08:15:51.905178713 +0000 UTC m=+1.320388225 container died 4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khorana, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 08:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7065e0805c49b90ecb6fe3595577d67da18bd034ac0ac86e5491915df8a08ff2-merged.mount: Deactivated successfully.
Nov 29 08:15:52 compute-0 podman[310556]: 2025-11-29 08:15:52.028132228 +0000 UTC m=+1.443341730 container remove 4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 08:15:52 compute-0 systemd[1]: libpod-conmon-4e7b5e3107bc2fb403b1bd9a35c98504eb6f85aeb59f1be19cd67321fb7100b6.scope: Deactivated successfully.
Nov 29 08:15:52 compute-0 sudo[310450]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:52 compute-0 sudo[310615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:52 compute-0 sudo[310615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:52 compute-0 sudo[310615]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:52 compute-0 sudo[310640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:15:52 compute-0 sudo[310640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:52 compute-0 sudo[310640]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:52 compute-0 sudo[310665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:52 compute-0 sudo[310665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:52 compute-0 sudo[310665]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:52 compute-0 sudo[310690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:15:52 compute-0 sudo[310690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:52 compute-0 ceph-mon[75050]: pgmap v2421: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:52 compute-0 podman[310754]: 2025-11-29 08:15:52.812809907 +0000 UTC m=+0.044452080 container create d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:15:52 compute-0 systemd[1]: Started libpod-conmon-d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415.scope.
Nov 29 08:15:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:52 compute-0 podman[310754]: 2025-11-29 08:15:52.793113981 +0000 UTC m=+0.024756194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:52 compute-0 podman[310754]: 2025-11-29 08:15:52.912964202 +0000 UTC m=+0.144606395 container init d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcnulty, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 08:15:52 compute-0 podman[310754]: 2025-11-29 08:15:52.925152204 +0000 UTC m=+0.156794367 container start d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:15:52 compute-0 podman[310754]: 2025-11-29 08:15:52.929093241 +0000 UTC m=+0.160735434 container attach d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:15:52 compute-0 cranky_mcnulty[310770]: 167 167
Nov 29 08:15:52 compute-0 systemd[1]: libpod-d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415.scope: Deactivated successfully.
Nov 29 08:15:52 compute-0 podman[310754]: 2025-11-29 08:15:52.93494298 +0000 UTC m=+0.166585143 container died d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcnulty, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 08:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-46068d464b8de3292d49b7a3005b90e8d6a672e8671e8a5385b4ce734dfd4633-merged.mount: Deactivated successfully.
Nov 29 08:15:52 compute-0 podman[310754]: 2025-11-29 08:15:52.980275713 +0000 UTC m=+0.211917876 container remove d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:52 compute-0 systemd[1]: libpod-conmon-d6b3bfbaee3d6d551a116c2070223f48c2de6640b2541e85621a9effb0805415.scope: Deactivated successfully.
Nov 29 08:15:53 compute-0 podman[310794]: 2025-11-29 08:15:53.23886063 +0000 UTC m=+0.069646186 container create ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 08:15:53 compute-0 systemd[1]: Started libpod-conmon-ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e.scope.
Nov 29 08:15:53 compute-0 podman[310794]: 2025-11-29 08:15:53.208591196 +0000 UTC m=+0.039376812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6910d3054a4f343efb2e5238faf4d4d0645fec68d21a503cd196a84b885895b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6910d3054a4f343efb2e5238faf4d4d0645fec68d21a503cd196a84b885895b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6910d3054a4f343efb2e5238faf4d4d0645fec68d21a503cd196a84b885895b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6910d3054a4f343efb2e5238faf4d4d0645fec68d21a503cd196a84b885895b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:53 compute-0 podman[310794]: 2025-11-29 08:15:53.346883208 +0000 UTC m=+0.177668824 container init ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 08:15:53 compute-0 podman[310794]: 2025-11-29 08:15:53.358497894 +0000 UTC m=+0.189283410 container start ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 08:15:53 compute-0 podman[310794]: 2025-11-29 08:15:53.361871696 +0000 UTC m=+0.192657312 container attach ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 08:15:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:53 compute-0 nova_compute[256729]: 2025-11-29 08:15:53.483 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]: {
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:     "0": [
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:         {
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "devices": [
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "/dev/loop3"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             ],
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_name": "ceph_lv0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_size": "21470642176",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "name": "ceph_lv0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "tags": {
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cluster_name": "ceph",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.crush_device_class": "",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.encrypted": "0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osd_id": "0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.type": "block",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.vdo": "0"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             },
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "type": "block",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "vg_name": "ceph_vg0"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:         }
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:     ],
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:     "1": [
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:         {
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "devices": [
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "/dev/loop4"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             ],
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_name": "ceph_lv1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_size": "21470642176",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "name": "ceph_lv1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "tags": {
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cluster_name": "ceph",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.crush_device_class": "",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.encrypted": "0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osd_id": "1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.type": "block",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.vdo": "0"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             },
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "type": "block",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "vg_name": "ceph_vg1"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:         }
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:     ],
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:     "2": [
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:         {
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "devices": [
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "/dev/loop5"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             ],
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_name": "ceph_lv2",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_size": "21470642176",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "name": "ceph_lv2",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "tags": {
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.cluster_name": "ceph",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.crush_device_class": "",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.encrypted": "0",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osd_id": "2",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.type": "block",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:                 "ceph.vdo": "0"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             },
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "type": "block",
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:             "vg_name": "ceph_vg2"
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:         }
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]:     ]
Nov 29 08:15:54 compute-0 angry_mcclintock[310810]: }
Nov 29 08:15:54 compute-0 systemd[1]: libpod-ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e.scope: Deactivated successfully.
Nov 29 08:15:54 compute-0 podman[310794]: 2025-11-29 08:15:54.140917682 +0000 UTC m=+0.971703198 container died ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6910d3054a4f343efb2e5238faf4d4d0645fec68d21a503cd196a84b885895b1-merged.mount: Deactivated successfully.
Nov 29 08:15:54 compute-0 podman[310794]: 2025-11-29 08:15:54.209602801 +0000 UTC m=+1.040388317 container remove ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:15:54 compute-0 systemd[1]: libpod-conmon-ef8faf5f580c11f8e0868ce419f6da07591a49f141732688fb06fdbc050aec5e.scope: Deactivated successfully.
Nov 29 08:15:54 compute-0 sudo[310690]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:54 compute-0 podman[310828]: 2025-11-29 08:15:54.257805272 +0000 UTC m=+0.079164825 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 08:15:54 compute-0 podman[310827]: 2025-11-29 08:15:54.270478157 +0000 UTC m=+0.079690430 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 08:15:54 compute-0 podman[310820]: 2025-11-29 08:15:54.276767148 +0000 UTC m=+0.102411927 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 08:15:54 compute-0 sudo[310886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:54 compute-0 sudo[310886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:54 compute-0 sudo[310886]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:54 compute-0 sudo[310920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:15:54 compute-0 sudo[310920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:54 compute-0 sudo[310920]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:54 compute-0 sudo[310945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:54 compute-0 sudo[310945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:54 compute-0 sudo[310945]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:54 compute-0 sudo[310970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:15:54 compute-0 sudo[310970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:54 compute-0 ceph-mon[75050]: pgmap v2422: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:54 compute-0 nova_compute[256729]: 2025-11-29 08:15:54.789 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:54 compute-0 podman[311035]: 2025-11-29 08:15:54.958158687 +0000 UTC m=+0.067907109 container create 6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 08:15:55 compute-0 systemd[1]: Started libpod-conmon-6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759.scope.
Nov 29 08:15:55 compute-0 podman[311035]: 2025-11-29 08:15:54.929555599 +0000 UTC m=+0.039304071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:55 compute-0 podman[311035]: 2025-11-29 08:15:55.06231606 +0000 UTC m=+0.172064512 container init 6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:55 compute-0 podman[311035]: 2025-11-29 08:15:55.076093855 +0000 UTC m=+0.185842267 container start 6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:15:55 compute-0 podman[311035]: 2025-11-29 08:15:55.080680381 +0000 UTC m=+0.190428853 container attach 6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 08:15:55 compute-0 hardcore_wozniak[311051]: 167 167
Nov 29 08:15:55 compute-0 systemd[1]: libpod-6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759.scope: Deactivated successfully.
Nov 29 08:15:55 compute-0 podman[311035]: 2025-11-29 08:15:55.084653158 +0000 UTC m=+0.194401560 container died 6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f5b0e10b3db82c0ffb994944748f0e8dde7339e1c1a7ae0086bc9bd55f1a759-merged.mount: Deactivated successfully.
Nov 29 08:15:55 compute-0 podman[311035]: 2025-11-29 08:15:55.136042857 +0000 UTC m=+0.245791259 container remove 6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 08:15:55 compute-0 systemd[1]: libpod-conmon-6f68ce0614f65346010284775f6426f04dcb419025b6d0aa2fbd0cad4cd67759.scope: Deactivated successfully.
Nov 29 08:15:55 compute-0 podman[311075]: 2025-11-29 08:15:55.320549967 +0000 UTC m=+0.052636423 container create db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_thompson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:15:55 compute-0 systemd[1]: Started libpod-conmon-db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e.scope.
Nov 29 08:15:55 compute-0 podman[311075]: 2025-11-29 08:15:55.290944071 +0000 UTC m=+0.023030497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:15:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e06c6cc3d93e9a222d72d091b8fa84384cf955dd25465fbaec6d72de042e2bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e06c6cc3d93e9a222d72d091b8fa84384cf955dd25465fbaec6d72de042e2bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e06c6cc3d93e9a222d72d091b8fa84384cf955dd25465fbaec6d72de042e2bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e06c6cc3d93e9a222d72d091b8fa84384cf955dd25465fbaec6d72de042e2bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:15:55 compute-0 podman[311075]: 2025-11-29 08:15:55.455283823 +0000 UTC m=+0.187370289 container init db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 08:15:55 compute-0 podman[311075]: 2025-11-29 08:15:55.464469923 +0000 UTC m=+0.196556349 container start db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_thompson, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:15:55 compute-0 podman[311075]: 2025-11-29 08:15:55.467785213 +0000 UTC m=+0.199871649 container attach db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:15:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]: {
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:     "1ebe47c8-fe69-46c9-9931-3ba50f4dae48": {
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "osd_id": 2,
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "osd_uuid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "type": "bluestore"
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:     },
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:     "3596f226-aedb-4f7c-95c0-eea7b670ed3d": {
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "osd_id": 1,
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "osd_uuid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "type": "bluestore"
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:     },
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:     "8cd0a453-4c8d-429b-b547-2404357db43c": {
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "ceph_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "osd_id": 0,
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "osd_uuid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:         "type": "bluestore"
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]:     }
Nov 29 08:15:56 compute-0 relaxed_thompson[311091]: }
Nov 29 08:15:56 compute-0 systemd[1]: libpod-db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e.scope: Deactivated successfully.
Nov 29 08:15:56 compute-0 podman[311075]: 2025-11-29 08:15:56.460355548 +0000 UTC m=+1.192442004 container died db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_thompson, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 08:15:56 compute-0 systemd[1]: libpod-db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e.scope: Consumed 1.002s CPU time.
Nov 29 08:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e06c6cc3d93e9a222d72d091b8fa84384cf955dd25465fbaec6d72de042e2bd-merged.mount: Deactivated successfully.
Nov 29 08:15:56 compute-0 podman[311075]: 2025-11-29 08:15:56.655413055 +0000 UTC m=+1.387499501 container remove db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_thompson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 08:15:56 compute-0 systemd[1]: libpod-conmon-db17c04cf2ce9002e0426a310e99b4f1f96a7d07401f125ba040c8e82dbe727e.scope: Deactivated successfully.
Nov 29 08:15:56 compute-0 sudo[310970]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:15:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:15:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:15:56 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:15:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7f6d932f-9d2e-4956-8fc6-caea7057f0cb does not exist
Nov 29 08:15:56 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 5e721ed4-8cfb-47cd-b359-1eca9234da99 does not exist
Nov 29 08:15:56 compute-0 ceph-mon[75050]: pgmap v2423: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:15:56 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:15:56 compute-0 sudo[311136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:15:56 compute-0 sudo[311136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:56 compute-0 sudo[311136]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:56 compute-0 sudo[311161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 08:15:56 compute-0 sudo[311161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:15:56 compute-0 sudo[311161]: pam_unix(sudo:session): session closed for user root
Nov 29 08:15:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:58 compute-0 nova_compute[256729]: 2025-11-29 08:15:58.485 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:59 compute-0 ceph-mon[75050]: pgmap v2424: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:15:59 compute-0 nova_compute[256729]: 2025-11-29 08:15:59.791 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:15:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:15:59.794 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:15:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:15:59.795 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:15:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:15:59.795 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:01 compute-0 ceph-mon[75050]: pgmap v2425: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:03 compute-0 ceph-mon[75050]: pgmap v2426: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:03 compute-0 nova_compute[256729]: 2025-11-29 08:16:03.488 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:04 compute-0 ceph-mon[75050]: pgmap v2427: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:04 compute-0 nova_compute[256729]: 2025-11-29 08:16:04.793 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:05 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Optimize plan auto_2025-11-29_08:16:05
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] do_upmap
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr']
Nov 29 08:16:05 compute-0 ceph-mgr[75345]: [balancer INFO root] prepared 0/10 changes
Nov 29 08:16:06 compute-0 ceph-mon[75050]: pgmap v2428: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 08:16:07 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:08 compute-0 nova_compute[256729]: 2025-11-29 08:16:08.490 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:08 compute-0 ceph-mon[75050]: pgmap v2429: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 08:16:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3555104631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:08 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 08:16:08 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3555104631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:09 compute-0 nova_compute[256729]: 2025-11-29 08:16:09.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:09 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3555104631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 08:16:09 compute-0 ceph-mon[75050]: from='client.? 192.168.122.10:0/3555104631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 08:16:09 compute-0 nova_compute[256729]: 2025-11-29 08:16:09.796 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:10 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:10 compute-0 ceph-mon[75050]: pgmap v2430: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:11 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:12 compute-0 ceph-mon[75050]: pgmap v2431: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:13 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:13 compute-0 nova_compute[256729]: 2025-11-29 08:16:13.493 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:14 compute-0 nova_compute[256729]: 2025-11-29 08:16:14.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:14 compute-0 nova_compute[256729]: 2025-11-29 08:16:14.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:14 compute-0 ceph-mon[75050]: pgmap v2432: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:14 compute-0 nova_compute[256729]: 2025-11-29 08:16:14.799 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:15 compute-0 nova_compute[256729]: 2025-11-29 08:16:15.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:15 compute-0 nova_compute[256729]: 2025-11-29 08:16:15.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 08:16:15 compute-0 nova_compute[256729]: 2025-11-29 08:16:15.150 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 08:16:15 compute-0 nova_compute[256729]: 2025-11-29 08:16:15.172 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:15 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:16:15 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:16:16 compute-0 nova_compute[256729]: 2025-11-29 08:16:16.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:16 compute-0 nova_compute[256729]: 2025-11-29 08:16:16.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:16 compute-0 ceph-mon[75050]: pgmap v2433: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.148 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.148 256736 DEBUG nova.compute.manager [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.149 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.195 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.195 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.196 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.196 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.196 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:17 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:17 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:17 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966682178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.638 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:17 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2966682178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.919 256736 WARNING nova.virt.libvirt.driver [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.921 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4259MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.922 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:17 compute-0 nova_compute[256729]: 2025-11-29 08:16:17.923 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.288 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.289 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.308 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.495 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:18 compute-0 ceph-mon[75050]: pgmap v2434: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:18 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 08:16:18 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1727439430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.807 256736 DEBUG oslo_concurrency.processutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.815 256736 DEBUG nova.compute.provider_tree [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed in ProviderTree for provider: ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.843 256736 DEBUG nova.scheduler.client.report [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Inventory has not changed for provider ccbe3d76-fe87-47c9-8a0a-e9860fc22f5f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.846 256736 DEBUG nova.compute.resource_tracker [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 08:16:18 compute-0 nova_compute[256729]: 2025-11-29 08:16:18.846 256736 DEBUG oslo_concurrency.lockutils [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:19 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:19 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1727439430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 08:16:19 compute-0 nova_compute[256729]: 2025-11-29 08:16:19.802 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:20 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:20 compute-0 ceph-mon[75050]: pgmap v2435: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:21 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:22 compute-0 ceph-mon[75050]: pgmap v2436: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:22 compute-0 nova_compute[256729]: 2025-11-29 08:16:22.848 256736 DEBUG oslo_service.periodic_task [None req-27de958c-c2ab-4243-8865-e92c1e69ab3e - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 08:16:23 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:23 compute-0 nova_compute[256729]: 2025-11-29 08:16:23.497 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:24 compute-0 podman[311232]: 2025-11-29 08:16:24.712074966 +0000 UTC m=+0.072757970 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 08:16:24 compute-0 podman[311231]: 2025-11-29 08:16:24.733826058 +0000 UTC m=+0.092268991 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:16:24 compute-0 podman[311230]: 2025-11-29 08:16:24.753460212 +0000 UTC m=+0.123754507 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 08:16:24 compute-0 ceph-mon[75050]: pgmap v2437: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:24 compute-0 nova_compute[256729]: 2025-11-29 08:16:24.804 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:25 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:25 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:26 compute-0 sshd-session[311291]: Accepted publickey for zuul from 192.168.122.10 port 54518 ssh2: ECDSA SHA256:yjwSeYo61Cp1bcL7y5AlYLjzNZeAFiW5isMWg/hA4OQ
Nov 29 08:16:26 compute-0 systemd-logind[807]: New session 51 of user zuul.
Nov 29 08:16:26 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 29 08:16:26 compute-0 sshd-session[311291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 08:16:26 compute-0 ceph-mon[75050]: pgmap v2438: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:26 compute-0 sudo[311295]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 29 08:16:26 compute-0 sudo[311295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 08:16:27 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:28 compute-0 nova_compute[256729]: 2025-11-29 08:16:28.499 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:28 compute-0 ceph-mon[75050]: pgmap v2439: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:29 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:29 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19245 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:29 compute-0 nova_compute[256729]: 2025-11-29 08:16:29.806 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:30 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19247 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:30 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 08:16:30 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2555475359' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 08:16:30 compute-0 ceph-mon[75050]: pgmap v2440: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:30 compute-0 ceph-mon[75050]: from='client.19245 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:30 compute-0 ceph-mon[75050]: from='client.19247 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:30 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2555475359' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 08:16:31 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:32 compute-0 ceph-mon[75050]: pgmap v2441: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:33 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:33 compute-0 nova_compute[256729]: 2025-11-29 08:16:33.533 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-0 nova_compute[256729]: 2025-11-29 08:16:34.810 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:34 compute-0 ceph-mon[75050]: pgmap v2442: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:35 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:35 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 08:16:35 compute-0 ceph-mgr[75345]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 08:16:36 compute-0 ovs-vsctl[311624]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 08:16:36 compute-0 ceph-mon[75050]: pgmap v2443: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:37 compute-0 virtqemud[256259]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 08:16:37 compute-0 virtqemud[256259]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 08:16:37 compute-0 virtqemud[256259]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 08:16:37 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:38 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: cache status {prefix=cache status} (starting...)
Nov 29 08:16:38 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: client ls {prefix=client ls} (starting...)
Nov 29 08:16:38 compute-0 lvm[311963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 08:16:38 compute-0 lvm[311963]: VG ceph_vg0 finished
Nov 29 08:16:38 compute-0 lvm[311965]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 08:16:38 compute-0 lvm[311965]: VG ceph_vg2 finished
Nov 29 08:16:38 compute-0 lvm[311995]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 08:16:38 compute-0 lvm[311995]: VG ceph_vg1 finished
Nov 29 08:16:38 compute-0 nova_compute[256729]: 2025-11-29 08:16:38.535 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:38 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19251 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:38 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 08:16:38 compute-0 ceph-mon[75050]: pgmap v2444: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:38 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 08:16:38 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19253 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:38 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 08:16:39 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 08:16:39 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 08:16:39 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:39 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 08:16:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 08:16:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221898803' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 08:16:39 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19259 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mgr[75345]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:16:39 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T08:16:39.772+0000 7f4b59335640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:16:39 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:16:39 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984797843' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:39 compute-0 nova_compute[256729]: 2025-11-29 08:16:39.813 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:39 compute-0 ceph-mon[75050]: from='client.19251 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mon[75050]: from='client.19253 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2221898803' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/984797843' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.863431) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404199863584, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2186, "num_deletes": 260, "total_data_size": 3416876, "memory_usage": 3465448, "flush_reason": "Manual Compaction"}
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404199892082, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 3335134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42094, "largest_seqno": 44279, "table_properties": {"data_size": 3325008, "index_size": 6492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20699, "raw_average_key_size": 20, "raw_value_size": 3304753, "raw_average_value_size": 3304, "num_data_blocks": 286, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404002, "oldest_key_time": 1764404002, "file_creation_time": 1764404199, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 28809 microseconds, and 14420 cpu microseconds.
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.892262) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 3335134 bytes OK
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.892321) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.893517) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.893541) EVENT_LOG_v1 {"time_micros": 1764404199893533, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.893566) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3407602, prev total WAL file size 3407602, number of live WAL files 2.
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.896289) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(3256KB)], [89(10MB)]
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404199896347, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 14235527, "oldest_snapshot_seqno": -1}
Nov 29 08:16:39 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 7833 keys, 12575321 bytes, temperature: kUnknown
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404199965714, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 12575321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12516201, "index_size": 38464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 199294, "raw_average_key_size": 25, "raw_value_size": 12368801, "raw_average_value_size": 1579, "num_data_blocks": 1522, "num_entries": 7833, "num_filter_entries": 7833, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764404199, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.966051) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 12575321 bytes
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.967533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.9 rd, 181.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.4 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 8365, records dropped: 532 output_compression: NoCompression
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.967560) EVENT_LOG_v1 {"time_micros": 1764404199967547, "job": 52, "event": "compaction_finished", "compaction_time_micros": 69482, "compaction_time_cpu_micros": 26541, "output_level": 6, "num_output_files": 1, "total_output_size": 12575321, "num_input_records": 8365, "num_output_records": 7833, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404199968502, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404199971213, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.896224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.971254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.971258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.971260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.971263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:16:39 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:16:39.971265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:16:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 08:16:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840816613' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 08:16:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836654839' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: ops {prefix=ops} (starting...)
Nov 29 08:16:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 08:16:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1567881499' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 08:16:40 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1568749901' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75050]: pgmap v2445: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:40 compute-0 ceph-mon[75050]: from='client.19259 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3840816613' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2836654839' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1567881499' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1568749901' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:16:40 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: session ls {prefix=session ls} (starting...)
Nov 29 08:16:40 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19271 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mds[102316]: mds.cephfs.compute-0.bdhrqf asok_command: status {prefix=status} (starting...)
Nov 29 08:16:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 08:16:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142635577' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19275 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 08:16:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156308908' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 08:16:41 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609116411' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mon[75050]: from='client.19271 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1142635577' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/156308908' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:16:41 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1609116411' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 08:16:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175589000' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 08:16:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147045686' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 08:16:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667445318' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19287 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mgr[75345]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 08:16:42 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T08:16:42.675+0000 7f4b59335640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 08:16:42 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 08:16:42 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362162557' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: from='client.19275 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: pgmap v2446: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/175589000' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/147045686' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3667445318' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 08:16:42 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/362162557' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:16:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 08:16:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3821455305' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 08:16:43 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19293 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:43 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:43 compute-0 nova_compute[256729]: 2025-11-29 08:16:43.537 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:43 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19297 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:43 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 08:16:43 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711274115' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 08:16:43 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19299 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:44 compute-0 ceph-mon[75050]: from='client.19287 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3821455305' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 08:16:44 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2711274115' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 08:16:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 08:16:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117533323' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:16:44 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19303 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:51:53.115590+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 ms_handle_reset con 0x560f3ea0d400 session 0x560f3f565860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 ms_handle_reset con 0x560f3ed0e400 session 0x560f3f4ec960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 42573824 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 ms_handle_reset con 0x560f43532c00 session 0x560f3fd56960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:51:54.115760+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 42450944 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:51:55.118186+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 33734656 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:51:56.118364+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 ms_handle_reset con 0x560f43532000 session 0x560f3fecc3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 ms_handle_reset con 0x560f43533000 session 0x560f3f2a90e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2481470 data_alloc: 218103808 data_used: 3698688
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 99565568 unmapped: 41648128 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 ms_handle_reset con 0x560f43533000 session 0x560f3f58dc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:51:57.118577+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 heartbeat osd_stat(store_statfs(0x4efc16000/0x0/0x4ffc00000, data 0xb93a7bf/0xba57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 198 ms_handle_reset con 0x560f3ea0d400 session 0x560f3fd57680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 33054720 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 198 ms_handle_reset con 0x560f3ed0e400 session 0x560f41c1a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 198 ms_handle_reset con 0x560f43533400 session 0x560f3ff21c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:51:58.118724+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 198 ms_handle_reset con 0x560f43532000 session 0x560f3fd4f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 111501312 unmapped: 29712384 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:51:59.118868+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 38019072 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:00.119007+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 111935488 unmapped: 29278208 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:01.119186+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.819067478s of 10.233036995s, submitted: 128
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259930 data_alloc: 218103808 data_used: 6864896
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 112132096 unmapped: 29081600 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:02.119342+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 36536320 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 200 ms_handle_reset con 0x560f3ea0d400 session 0x560f3ec741e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 200 heartbeat osd_stat(store_statfs(0x4e7c10000/0x0/0x4ffc00000, data 0x1393df1d/0x13a5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:03.119521+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 200 ms_handle_reset con 0x560f43532000 session 0x560f41ace960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 200 ms_handle_reset con 0x560f3ed0e400 session 0x560f401265a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 35364864 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:04.119716+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 105750528 unmapped: 35463168 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:05.119851+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 105750528 unmapped: 35463168 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:06.120019+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 201 ms_handle_reset con 0x560f43533000 session 0x560f3fd4ed20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3852198 data_alloc: 218103808 data_used: 6868992
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 105865216 unmapped: 35348480 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:07.120187+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 201 heartbeat osd_stat(store_statfs(0x4e300f000/0x0/0x4ffc00000, data 0x1853d695/0x1865e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114311168 unmapped: 26902528 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:08.120407+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 35258368 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:09.120604+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 202 ms_handle_reset con 0x560f43533400 session 0x560f41aceb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 35258368 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:10.120780+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 35258368 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:11.121057+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.720509052s of 10.032491684s, submitted: 153
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 202 ms_handle_reset con 0x560f3ea0d400 session 0x560f413801e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3965572 data_alloc: 218103808 data_used: 6877184
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 26812416 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:12.121949+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 202 heartbeat osd_stat(store_statfs(0x4e200c000/0x0/0x4ffc00000, data 0x1953f140/0x19662000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106094592 unmapped: 35119104 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:13.122261+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114655232 unmapped: 26558464 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:14.124198+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 203 ms_handle_reset con 0x560f3ed0e400 session 0x560f3fd4e1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106340352 unmapped: 34873344 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:15.124370+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106340352 unmapped: 34873344 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:16.124725+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 203 heartbeat osd_stat(store_statfs(0x4dfc05000/0x0/0x4ffc00000, data 0x1b944bdb/0x1ba69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4228886 data_alloc: 218103808 data_used: 6885376
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 34799616 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:17.125047+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114876416 unmapped: 26337280 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:18.125369+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:19.125571+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106545152 unmapped: 34668544 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:20.126186+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 26148864 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:21.126459+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 34529280 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 204 ms_handle_reset con 0x560f43532800 session 0x560f3ff21860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4432756 data_alloc: 218103808 data_used: 6893568
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:22.127148+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 34529280 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 204 heartbeat osd_stat(store_statfs(0x4ddc01000/0x0/0x4ffc00000, data 0x1d94679a/0x1da6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:23.127540+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106684416 unmapped: 34529280 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.753343105s of 11.993419647s, submitted: 62
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:24.128143+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 34512896 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 heartbeat osd_stat(store_statfs(0x4ddbfe000/0x0/0x4ffc00000, data 0x1d948329/0x1da6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,3])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43533000 session 0x560f420bd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43532c00 session 0x560f420bda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43532c00 session 0x560f420bcf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43532000 session 0x560f3f2a8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:25.128320+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 103727104 unmapped: 37486592 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f3ea0d400 session 0x560f40126000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f3ed0e400 session 0x560f40126780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43532800 session 0x560f41c1be00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43532800 session 0x560f3f58d860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f3ea0d400 session 0x560f3febdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f3ed0e400 session 0x560f3fd561e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:26.128513+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104710144 unmapped: 36503552 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1441233 data_alloc: 218103808 data_used: 6901760
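bluestore.MempoolThread _resize_shards shows how the tuner's budget is carved up among BlueStore's caches: kv (the RocksDB block cache), kv_onode, meta (onodes), and data (buffer cache); *_alloc is the slice each pool may use and *_used what it actually holds, and with the OSD nearly idle the used figures are tiny. The slices as fractions of the budget:

    cache_size = 2845415832  # aggregate budget from tune_memory

    alloc = {
        "kv":       1207959552,
        "kv_onode":  234881024,
        "meta":     1140850688,
        "data":      218103808,
    }
    for name, nbytes in alloc.items():
        print(f"{name:8s} {nbytes / 2**20:6.0f} MiB "
              f"({100 * nbytes / cache_size:4.1f}% of budget)")
    # kv ~42%, meta ~40%, kv_onode ~8%, data ~8% in this sample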
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:27.128690+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 36495360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43532000 session 0x560f41acf860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f43532c00 session 0x560f3feade00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:28.128941+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 36495360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:29.129187+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 36495360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f7d66000/0x0/0x4ffc00000, data 0x17e02e7/0x1906000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:30.129314+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 36495360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:31.129542+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 36495360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440353 data_alloc: 218103808 data_used: 6897664
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:32.129770+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 36495360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f7d66000/0x0/0x4ffc00000, data 0x17e02e7/0x1906000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:33.130090+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 36495360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.401391983s of 10.150086403s, submitted: 114
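_kv_sync_thread utilization summarizes the RocksDB commit thread over its reporting window: it was idle 9.40 s of 10.15 s while flushing 114 transactions, i.e. the store is lightly loaded. The derived figures:

    idle_s, window_s, submitted = 9.401391983, 10.150086403, 114

    busy_s = window_s - idle_s
    print(f"kv_sync busy {100 * busy_s / window_s:.1f}% of the window, "
          f"{submitted / window_s:.1f} txns/s submitted")
    # -> busy 7.4%, ~11.2 txns/s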
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 ms_handle_reset con 0x560f3ea0d400 session 0x560f3feac5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f9d67000/0x0/0x4ffc00000, data 0x17e030a/0x1907000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:34.130317+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 36421632 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:35.130452+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 36421632 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:36.130563+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 36421632 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1456869 data_alloc: 218103808 data_used: 8425472
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:37.130838+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 107454464 unmapped: 33759232 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:38.131038+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f9d63000/0x0/0x4ffc00000, data 0x17e1d6d/0x190a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:39.131191+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:40.131318+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f9d63000/0x0/0x4ffc00000, data 0x17e1d6d/0x190a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:41.131474+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553509 data_alloc: 234881024 data_used: 20652032
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:42.131606+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:43.131736+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:44.131861+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.973765373s of 11.206119537s, submitted: 24
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:45.132011+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 25894912 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f43533800 session 0x560f41e012c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:46.132152+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114958336 unmapped: 26255360 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f9d63000/0x0/0x4ffc00000, data 0x17e1d6d/0x190a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.5 total, 600.0 interval
                                           Cumulative writes: 9552 writes, 39K keys, 9552 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9552 writes, 2490 syncs, 3.84 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3896 writes, 15K keys, 3896 commit groups, 1.0 writes per commit group, ingest: 9.05 MB, 0.02 MB/s
                                           Interval WAL: 3896 writes, 1603 syncs, 2.43 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
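The periodic RocksDB stats dump above is self-consistent and worth sanity-checking when chasing write amplification: the interval block reports 3896 WAL writes over 1603 syncs in a 600 s window. Recomputing the derived rates from those raw counters:

    writes, syncs = 3896, 1603
    ingest_mb, interval_s = 9.05, 600.0

    print(f"{writes / syncs:.2f} writes per WAL sync")  # 2.43, as logged
    print(f"{ingest_mb / interval_s:.3f} MB/s ingest")  # ~0.015, logged rounded to 0.02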
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f43533c00 session 0x560f41e00d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1556581 data_alloc: 234881024 data_used: 21241856
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:47.132409+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25526272 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f41606c00 session 0x560f41e001e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:48.132750+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25526272 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f9d63000/0x0/0x4ffc00000, data 0x17e1d7d/0x190b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:49.133476+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 25526272 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:50.133600+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 20430848 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f41606800 session 0x560f41e003c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f3ea0d400 session 0x560f41e01680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:51.133723+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119783424 unmapped: 21430272 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f426b8c00 session 0x560f3fd4f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f426b8800 session 0x560f3fd57680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x2454d7d/0x257e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1657978 data_alloc: 234881024 data_used: 21237760
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:52.134081+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 22355968 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:53.134234+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 118857728 unmapped: 22355968 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f4384c800 session 0x560f3f4ec960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:54.134388+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 22347776 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:55.134536+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 22347776 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:56.134709+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 22347776 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.254083633s of 11.889360428s, submitted: 101
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f4384c400 session 0x560f3f4ecb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:57.134888+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1658122 data_alloc: 234881024 data_used: 21237760
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119128064 unmapped: 22085632 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f90d0000/0x0/0x4ffc00000, data 0x2474d7d/0x259e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:58.135035+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 22069248 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 08:16:44 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016917989' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
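The two ceph-mon lines are the monitor leader receiving a mon_command ({"prefix": "mgr metadata"}) from client.admin and recording the dispatch on the audit channel, which is normal traffic from a metrics poller or an operator query. A hedged sketch of issuing the same query (assumes the ceph CLI is installed with admin keyring access; "ceph mgr metadata" is the CLI form of this mon_command):

    import json
    import subprocess

    # Ask the cluster for manager daemon metadata, as JSON.
    out = subprocess.run(
        ["ceph", "mgr", "metadata", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    for mgr in json.loads(out.stdout):
        # Each entry describes one mgr daemon; print a couple of its keys.
        print(mgr.get("name"), mgr.get("hostname"))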
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f3ea0d400 session 0x560f3f58d860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:59.135191+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 22061056 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:00.135411+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 22061056 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 ms_handle_reset con 0x560f426b8800 session 0x560f3fd4ed20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:01.135631+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 22061056 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f9836000/0x0/0x4ffc00000, data 0x1d0fd6d/0x1e38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:02.135807+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1604262 data_alloc: 234881024 data_used: 21233664
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 22061056 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f9835000/0x0/0x4ffc00000, data 0x1d0fd7d/0x1e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:03.135900+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21979136 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:04.136019+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 21979136 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 207 ms_handle_reset con 0x560f426b8c00 session 0x560f413801e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:05.136188+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 21970944 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 207 ms_handle_reset con 0x560f4384c800 session 0x560f3ec741e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:06.136339+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 21970944 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f9826000/0x0/0x4ffc00000, data 0x1d1b95c/0x1e47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:07.136517+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1610308 data_alloc: 234881024 data_used: 21245952
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 21970944 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 207 ms_handle_reset con 0x560f4384c000 session 0x560f402ebc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.875162125s of 11.044590950s, submitted: 29
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 207 ms_handle_reset con 0x560f3ea0d400 session 0x560f40153a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:08.136640+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119242752 unmapped: 21970944 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 208 ms_handle_reset con 0x560f426b8800 session 0x560f410e05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 208 ms_handle_reset con 0x560f426b8c00 session 0x560f420bc000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:09.136809+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119300096 unmapped: 21913600 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f9825000/0x0/0x4ffc00000, data 0x1d1d4bb/0x1e48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:10.136951+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 21889024 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:11.137220+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 21889024 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:12.137409+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1610422 data_alloc: 234881024 data_used: 21241856
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 21889024 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:13.137582+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 21889024 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:14.137736+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 22118400 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f981f000/0x0/0x4ffc00000, data 0x1d21f1e/0x1e4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:15.137932+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f981f000/0x0/0x4ffc00000, data 0x1d21f1e/0x1e4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119095296 unmapped: 22118400 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:16.138168+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 22167552 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:17.138359+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614404 data_alloc: 234881024 data_used: 21254144
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 22167552 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:18.138539+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119046144 unmapped: 22167552 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.823190689s of 10.919818878s, submitted: 24
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:19.138710+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f4384d400 session 0x560f3fead860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f4384dc00 session 0x560f402eb4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 22151168 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f3ea0d400 session 0x560f3f2a83c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f426b8800 session 0x560f3febd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:20.138889+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f4384dc00 session 0x560f420bd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f426b8c00 session 0x560f3f564b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f43532800 session 0x560f3ec75860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f3ea0d400 session 0x560f3ec74780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119447552 unmapped: 21766144 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f426b8800 session 0x560f41463a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f426b8c00 session 0x560f41c1a1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 ms_handle_reset con 0x560f4384dc00 session 0x560f410e1680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:21.139123+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119480320 unmapped: 21733376 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f94d5000/0x0/0x4ffc00000, data 0x2068fb0/0x2199000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 210 ms_handle_reset con 0x560f43533c00 session 0x560f420bc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:22.139289+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1652679 data_alloc: 234881024 data_used: 21266432
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 210 ms_handle_reset con 0x560f3ea0d400 session 0x560f41acf680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 21651456 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 210 ms_handle_reset con 0x560f43533000 session 0x560f413d5860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 211 ms_handle_reset con 0x560f426b8800 session 0x560f3f58c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 211 ms_handle_reset con 0x560f4384d400 session 0x560f3febc000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:23.139410+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 21651456 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:24.139546+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 211 ms_handle_reset con 0x560f426b8c00 session 0x560f3f6ba3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 21651456 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:25.139685+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 21651456 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 211 ms_handle_reset con 0x560f3ea0d400 session 0x560f3f479860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 211 ms_handle_reset con 0x560f43533000 session 0x560f410e0b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:26.139830+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119873536 unmapped: 21340160 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 212 ms_handle_reset con 0x560f4384dc00 session 0x560f401534a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 212 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x2091c3d/0x21c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:27.139979+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673470 data_alloc: 234881024 data_used: 21319680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121487360 unmapped: 19726336 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 213 ms_handle_reset con 0x560f43533800 session 0x560f3f58da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 213 ms_handle_reset con 0x560f426b8800 session 0x560f3f478f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f949d000/0x0/0x4ffc00000, data 0x2095337/0x21cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:28.140166+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 19611648 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 214 ms_handle_reset con 0x560f3ea0d400 session 0x560f41acfe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:29.140358+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 19611648 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.538229942s of 11.167844772s, submitted: 68
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 214 ms_handle_reset con 0x560f43533000 session 0x560f41c1b0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 214 ms_handle_reset con 0x560f43533800 session 0x560f414625a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:30.140494+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121602048 unmapped: 19611648 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 214 ms_handle_reset con 0x560f3ea7cc00 session 0x560f40127860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:31.140655+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 19595264 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 ms_handle_reset con 0x560f3ea7c000 session 0x560f4115e3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 ms_handle_reset con 0x560f3ea7d000 session 0x560f41462780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 ms_handle_reset con 0x560f3ea0d400 session 0x560f41e00b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:32.140774+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1696405 data_alloc: 234881024 data_used: 23490560
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 19595264 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 ms_handle_reset con 0x560f42ce9c00 session 0x560f41e00780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 ms_handle_reset con 0x560f4384dc00 session 0x560f3fd570e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:33.140922+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 heartbeat osd_stat(store_statfs(0x4f949a000/0x0/0x4ffc00000, data 0x2098a81/0x21d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 215 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 216 ms_handle_reset con 0x560f3ea7c000 session 0x560f41c1b4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121618432 unmapped: 19595264 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 217 ms_handle_reset con 0x560f3ea7cc00 session 0x560f3fecc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 217 ms_handle_reset con 0x560f3ea0d400 session 0x560f40153860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:34.141013+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 19554304 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:35.141151+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121659392 unmapped: 19554304 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:36.141299+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 218 ms_handle_reset con 0x560f3ea7d000 session 0x560f41ecdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 218 ms_handle_reset con 0x560f42ce9c00 session 0x560f41ecd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18489344 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:37.141434+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704101 data_alloc: 234881024 data_used: 23502848
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18489344 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 218 heartbeat osd_stat(store_statfs(0x4f9493000/0x0/0x4ffc00000, data 0x209d8ce/0x21d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:38.141641+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 218 ms_handle_reset con 0x560f4384c800 session 0x560f420bcb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18489344 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:39.141780+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122724352 unmapped: 18489344 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 218 ms_handle_reset con 0x560f3ed0e400 session 0x560f3f479680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 218 ms_handle_reset con 0x560f43532000 session 0x560f3f479c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.552082062s of 10.138615608s, submitted: 75
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:40.141895+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 219 ms_handle_reset con 0x560f3ea0d400 session 0x560f40153a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114434048 unmapped: 26779648 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:41.142060+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 24895488 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:42.142307+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511785 data_alloc: 234881024 data_used: 9965568
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 24895488 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:43.142472+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x127231e/0x13ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 24895488 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:44.142625+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 24895488 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9ea6000/0x0/0x4ffc00000, data 0x127231e/0x13ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:45.142791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 24895488 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:46.142951+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 24887296 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 219 ms_handle_reset con 0x560f4384dc00 session 0x560f41eccd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:47.143159+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1512733 data_alloc: 234881024 data_used: 9973760
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114974720 unmapped: 26238976 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:48.143280+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 221 ms_handle_reset con 0x560f43533000 session 0x560f41ecc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 26230784 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:49.143425+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f9ea9000/0x0/0x4ffc00000, data 0x1275ac0/0x13b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 221 ms_handle_reset con 0x560f42ce9c00 session 0x560f3f4790e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 222 ms_handle_reset con 0x560f3ea0d400 session 0x560f410e1680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 26230784 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 222 ms_handle_reset con 0x560f3ed0e400 session 0x560f420bd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f9ea9000/0x0/0x4ffc00000, data 0x1275ac0/0x13b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:50.143557+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.813988686s of 10.413812637s, submitted: 150
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 26173440 heap: 141213696 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:51.143706+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f9ea7000/0x0/0x4ffc00000, data 0x12776e3/0x13b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 38625280 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:52.143807+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1991497 data_alloc: 234881024 data_used: 9969664
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f5ea7000/0x0/0x4ffc00000, data 0x5277745/0x53b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 37584896 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:53.143920+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 34119680 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:54.144052+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 222 heartbeat osd_stat(store_statfs(0x4f36a7000/0x0/0x4ffc00000, data 0x7a77745/0x7bb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,5])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115621888 unmapped: 38199296 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:55.144205+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 38068224 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:56.144352+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 35921920 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 223 heartbeat osd_stat(store_statfs(0x4ed6a4000/0x0/0x4ffc00000, data 0xda791c4/0xdbba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,6])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:57.144548+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2927623 data_alloc: 234881024 data_used: 9977856
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 35897344 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:58.144674+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 31694848 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:59.144852+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 35889152 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 223 heartbeat osd_stat(store_statfs(0x4ecaa4000/0x0/0x4ffc00000, data 0xe6791c4/0xe7ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:00.145014+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31686656 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:01.145211+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.774837017s of 11.191875458s, submitted: 283
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 35880960 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 223 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:02.145368+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 224 heartbeat osd_stat(store_statfs(0x4ec6a4000/0x0/0x4ffc00000, data 0xea791c4/0xebba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2,0,1,3])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3043701 data_alloc: 234881024 data_used: 9986048
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122150912 unmapped: 31670272 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:03.145615+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31662080 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 224 heartbeat osd_stat(store_statfs(0x4ec2a0000/0x0/0x4ffc00000, data 0xee7ac27/0xefbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,2,2])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:04.145819+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122159104 unmapped: 31662080 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:05.146002+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122167296 unmapped: 31653888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:06.146197+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 35840000 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:07.146354+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241933 data_alloc: 234881024 data_used: 9986048
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 27435008 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 224 heartbeat osd_stat(store_statfs(0x4eaea1000/0x0/0x4ffc00000, data 0x1027ac27/0x103bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:08.146480+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122191872 unmapped: 31629312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:09.146628+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 224 ms_handle_reset con 0x560f4384dc00 session 0x560f41c1a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122208256 unmapped: 31612928 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:10.146748+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122241024 unmapped: 31580160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 225 ms_handle_reset con 0x560f43532000 session 0x560f3f565860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 225 heartbeat osd_stat(store_statfs(0x4e9ea1000/0x0/0x4ffc00000, data 0x1127ac27/0x113bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 225 ms_handle_reset con 0x560f3ea0d400 session 0x560f420bd4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 225 ms_handle_reset con 0x560f43532000 session 0x560f41e01860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:11.146941+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 225 ms_handle_reset con 0x560f43533000 session 0x560f41ecc3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 0.422288179s of 10.001431465s, submitted: 125
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 35758080 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 226 ms_handle_reset con 0x560f3ed0e400 session 0x560f3fe1c5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 226 ms_handle_reset con 0x560f43533800 session 0x560f3fd570e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 226 ms_handle_reset con 0x560f42ce9c00 session 0x560f41370b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:12.147098+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3359753 data_alloc: 234881024 data_used: 9998336
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 35700736 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:13.147267+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 226 ms_handle_reset con 0x560f43533800 session 0x560f41e003c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 226 ms_handle_reset con 0x560f3ed0e400 session 0x560f413801e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 36118528 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:14.147400+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 227 ms_handle_reset con 0x560f43532000 session 0x560f4115fe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 227 ms_handle_reset con 0x560f3ea0d400 session 0x560f3ff20f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 36118528 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:15.147550+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 228 ms_handle_reset con 0x560f3ea0d400 session 0x560f41ecc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117702656 unmapped: 36118528 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:16.147696+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 228 ms_handle_reset con 0x560f43533000 session 0x560f3f6bb680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 117710848 unmapped: 36110336 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 228 heartbeat osd_stat(store_statfs(0x4f9e97000/0x0/0x4ffc00000, data 0x1281b6d/0x13c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:17.147914+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 ms_handle_reset con 0x560f42ce9c00 session 0x560f4208b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 ms_handle_reset con 0x560f3ed0e400 session 0x560f3f6ba3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 ms_handle_reset con 0x560f43533800 session 0x560f3fd56000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 ms_handle_reset con 0x560f43532000 session 0x560f4115e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619513 data_alloc: 234881024 data_used: 10018816
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124043264 unmapped: 29777920 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 heartbeat osd_stat(store_statfs(0x4f9c06000/0x0/0x4ffc00000, data 0x150e980/0x1657000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 ms_handle_reset con 0x560f3ea0d400 session 0x560f41ecc960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:18.148034+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 heartbeat osd_stat(store_statfs(0x4f9c06000/0x0/0x4ffc00000, data 0x150e980/0x1657000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 29433856 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:19.148168+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 29433856 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 230 ms_handle_reset con 0x560f42ce9c00 session 0x560f413714a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 230 ms_handle_reset con 0x560f3ed0e400 session 0x560f41cb85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:20.148283+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 230 ms_handle_reset con 0x560f43533000 session 0x560f41c1a5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 29122560 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 230 heartbeat osd_stat(store_statfs(0x4f9b9d000/0x0/0x4ffc00000, data 0x1576537/0x16bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:21.148453+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.068708420s of 10.017577171s, submitted: 239
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 28016640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 231 ms_handle_reset con 0x560f4384dc00 session 0x560f3fd56b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:22.148613+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 231 ms_handle_reset con 0x560f3ea0d400 session 0x560f41381860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673394 data_alloc: 234881024 data_used: 17039360
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 28016640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:23.148750+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 28016640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:24.148904+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 28016640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 231 heartbeat osd_stat(store_statfs(0x4f9b99000/0x0/0x4ffc00000, data 0x157818a/0x16c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:25.171419+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 231 ms_handle_reset con 0x560f3ed0e400 session 0x560f420bda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 28016640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:26.171562+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 28016640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 ms_handle_reset con 0x560f43532000 session 0x560f3feacd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 ms_handle_reset con 0x560f42ce9c00 session 0x560f3feccd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:27.171732+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 ms_handle_reset con 0x560f42ce9c00 session 0x560f41ecd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1676160 data_alloc: 234881024 data_used: 17047552
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125804544 unmapped: 28016640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:28.171896+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f9b96000/0x0/0x4ffc00000, data 0x1579c61/0x16c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 ms_handle_reset con 0x560f3ea0d400 session 0x560f3febc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:29.172056+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f9b96000/0x0/0x4ffc00000, data 0x1579c61/0x16c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 ms_handle_reset con 0x560f3ed0e400 session 0x560f41acf2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125894656 unmapped: 27926528 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 ms_handle_reset con 0x560f43532000 session 0x560f3fe1d4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:30.172222+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 ms_handle_reset con 0x560f4384dc00 session 0x560f414625a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125894656 unmapped: 27926528 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:31.172463+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f3ea0d400 session 0x560f410e12c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.599530220s of 10.074112892s, submitted: 61
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f3ed0e400 session 0x560f3fecd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f43532000 session 0x560f41e1d860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 130007040 unmapped: 23814144 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f42ce9400 session 0x560f41c1ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:32.172651+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1715112 data_alloc: 234881024 data_used: 17117184
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 27754496 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f42ce9c00 session 0x560f402ebc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f4384dc00 session 0x560f3fd4f4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:33.172838+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f4384c800 session 0x560f41e1cd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f42ce9000 session 0x560f40152d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f3ed0e400 session 0x560f410e03c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f3ea0d400 session 0x560f3f564b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f3ed0e400 session 0x560f40127c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f42ce9000 session 0x560f3f47ad20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:34.173052+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:35.173263+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f97c9000/0x0/0x4ffc00000, data 0x1945726/0x1a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f4384c800 session 0x560f3fd4e5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:36.173431+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126074880 unmapped: 27746304 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:37.173636+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1714703 data_alloc: 234881024 data_used: 17137664
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126124032 unmapped: 27697152 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:38.173898+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f4384dc00 session 0x560f4014a5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f42ce9400 session 0x560f41eccd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 26411008 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:39.174051+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125796352 unmapped: 28024832 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:40.174245+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 ms_handle_reset con 0x560f3ed0e400 session 0x560f3febc000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125796352 unmapped: 28024832 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:41.174486+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f95f7000/0x0/0x4ffc00000, data 0x1b17726/0x1c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f95f7000/0x0/0x4ffc00000, data 0x1b17726/0x1c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:42.174644+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1731654 data_alloc: 234881024 data_used: 17297408
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:43.174810+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:44.175020+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:45.175186+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f95f7000/0x0/0x4ffc00000, data 0x1b17726/0x1c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f95f7000/0x0/0x4ffc00000, data 0x1b17726/0x1c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:46.175356+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:47.175545+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1731654 data_alloc: 234881024 data_used: 17297408
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:48.175723+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.851809502s of 17.424745560s, submitted: 32
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:49.175900+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125812736 unmapped: 28008448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:50.176080+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 22872064 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f95f2000/0x0/0x4ffc00000, data 0x1b19305/0x1c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,9])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:51.176271+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 22872064 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:52.176443+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1789618 data_alloc: 234881024 data_used: 17305600
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x1f8e305/0x20e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,9])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 130949120 unmapped: 22872064 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f4384dc00 session 0x560f41c1b2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f43532000 session 0x560f41e01e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f42ce9800 session 0x560f410e14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fa000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:53.176652+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f461fa000 session 0x560f41cb8960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fa000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f461fa000 session 0x560f3f2a83c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f42ce9c00 session 0x560f41c1b4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x1f8e305/0x20e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 27000832 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:54.176812+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f3ed0e400 session 0x560f41acf860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127918080 unmapped: 25903104 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f42ce9800 session 0x560f402eb860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f4384dc00 session 0x560f3f58c780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:55.176992+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f3ed0e400 session 0x560f41e1c3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f42ce9800 session 0x560f41ecd4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f43532000 session 0x560f3fd4fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 25870336 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:56.177399+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 25870336 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fa000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fa400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:57.177524+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1790440 data_alloc: 234881024 data_used: 19820544
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f461fa400 session 0x560f41381860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127934464 unmapped: 25886720 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fa800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:58.177669+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x1f8e305/0x20e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129073152 unmapped: 24748032 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:59.177813+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129073152 unmapped: 24748032 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:00.178015+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129073152 unmapped: 24748032 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:01.178277+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129073152 unmapped: 24748032 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:02.178431+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1824200 data_alloc: 234881024 data_used: 24719360
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:03.178710+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f461fa800 session 0x560f3fd56b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:04.179092+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x1f8e305/0x20e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:05.179277+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 ms_handle_reset con 0x560f3ed0e400 session 0x560f3feb1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:06.179444+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:07.179589+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1824200 data_alloc: 234881024 data_used: 24719360
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:08.179930+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:09.180056+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:10.180234+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x1f8e305/0x20e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:11.180484+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 24485888 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x1f8e305/0x20e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:12.180751+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.696016312s of 23.531091690s, submitted: 38
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1846858 data_alloc: 234881024 data_used: 24723456
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132882432 unmapped: 20938752 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:13.180998+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132939776 unmapped: 20881408 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:14.181419+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f43532000 session 0x560f410e32c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f42ce9800 session 0x560f3feb1e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134627328 unmapped: 19193856 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:15.181603+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ae000/0x0/0x4ffc00000, data 0x2751ef4/0x28a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fa400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134627328 unmapped: 19193856 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f461fa400 session 0x560f41cb85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:16.181720+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f461fac00 session 0x560f41cb92c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:17.182309+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1892109 data_alloc: 234881024 data_used: 25014272
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:18.182487+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f3ed0e400 session 0x560f4115f0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:19.182632+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:20.182805+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:21.182996+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:22.183153+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1892285 data_alloc: 234881024 data_used: 25018368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:23.183284+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 20840448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:24.183436+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.803125381s of 11.839346886s, submitted: 111
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132382720 unmapped: 21438464 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:25.183573+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132382720 unmapped: 21438464 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:26.183719+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132382720 unmapped: 21438464 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:27.183925+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1890609 data_alloc: 234881024 data_used: 25018368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132382720 unmapped: 21438464 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:28.184085+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132382720 unmapped: 21438464 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:29.184252+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132382720 unmapped: 21438464 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:30.184409+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132399104 unmapped: 21422080 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:31.184607+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132399104 unmapped: 21422080 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:32.184777+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1890827 data_alloc: 234881024 data_used: 25018368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132431872 unmapped: 21389312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:33.185002+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132431872 unmapped: 21389312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:34.185146+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132431872 unmapped: 21389312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:35.185351+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132431872 unmapped: 21389312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:36.185526+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132431872 unmapped: 21389312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:37.185672+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.074749947s of 12.860436440s, submitted: 6
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1890819 data_alloc: 234881024 data_used: 25030656
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f42ce9800 session 0x560f41ecd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132440064 unmapped: 21381120 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:38.185837+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132440064 unmapped: 21381120 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:39.185991+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fa400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89ac000/0x0/0x4ffc00000, data 0x275dee4/0x28b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132440064 unmapped: 21381120 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:40.186174+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f42ce9000 session 0x560f41ecc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 ms_handle_reset con 0x560f4384c800 session 0x560f3f58c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 236 ms_handle_reset con 0x560f461fa400 session 0x560f41ecdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 236 ms_handle_reset con 0x560f461fb000 session 0x560f410e2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132448256 unmapped: 21372928 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:41.186394+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132448256 unmapped: 21372928 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:42.186553+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 236 heartbeat osd_stat(store_statfs(0x4f89a8000/0x0/0x4ffc00000, data 0x275fa61/0x28b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 237 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 237 ms_handle_reset con 0x560f3ed0e400 session 0x560f40153860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1900235 data_alloc: 234881024 data_used: 25047040
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132472832 unmapped: 21348352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:43.186693+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132472832 unmapped: 21348352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:44.186887+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 237 ms_handle_reset con 0x560f42ce9800 session 0x560f413d5860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 21340160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:45.187077+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 238 ms_handle_reset con 0x560f42ce9000 session 0x560f3f479a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 238 ms_handle_reset con 0x560f4384c800 session 0x560f3f47a960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 238 heartbeat osd_stat(store_statfs(0x4f89a0000/0x0/0x4ffc00000, data 0x276321f/0x28bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132513792 unmapped: 21307392 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:46.187205+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 238 ms_handle_reset con 0x560f3ed0e400 session 0x560f41acf4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 238 heartbeat osd_stat(store_statfs(0x4f89a0000/0x0/0x4ffc00000, data 0x27631bd/0x28bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,1,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 21258240 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:47.187371+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.666428089s of 10.435134888s, submitted: 66
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 239 ms_handle_reset con 0x560f42ce9000 session 0x560f410e2d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1907037 data_alloc: 234881024 data_used: 24973312
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 239 ms_handle_reset con 0x560f42ce9800 session 0x560f40127860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 131948544 unmapped: 21872640 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:48.187521+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 239 ms_handle_reset con 0x560f461fb000 session 0x560f3f58d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 239 heartbeat osd_stat(store_statfs(0x4f8c8e000/0x0/0x4ffc00000, data 0x2473e70/0x25cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 239 ms_handle_reset con 0x560f461fb400 session 0x560f401532c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 21848064 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:49.187683+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 131973120 unmapped: 21848064 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:50.187864+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 241 ms_handle_reset con 0x560f461fb400 session 0x560f4115e5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 131989504 unmapped: 21831680 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:51.188063+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132005888 unmapped: 21815296 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:52.188347+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 241 ms_handle_reset con 0x560f4384d400 session 0x560f410e1a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 241 ms_handle_reset con 0x560f3ed0e400 session 0x560f41ecde00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 241 ms_handle_reset con 0x560f43533c00 session 0x560f41e00960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f8c8d000/0x0/0x4ffc00000, data 0x2477618/0x25d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1870631 data_alloc: 234881024 data_used: 21909504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 22069248 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:53.188540+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 131751936 unmapped: 22069248 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:54.188681+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 ms_handle_reset con 0x560f42ce9800 session 0x560f41c1a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125722624 unmapped: 28098560 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:55.188838+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 ms_handle_reset con 0x560f461fb000 session 0x560f41cb9a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 ms_handle_reset con 0x560f42ce9800 session 0x560f3f4785a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125730816 unmapped: 28090368 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:56.189022+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 ms_handle_reset con 0x560f43532000 session 0x560f3fd57e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 ms_handle_reset con 0x560f42ce9c00 session 0x560f3ff20000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 ms_handle_reset con 0x560f461fa000 session 0x560f401265a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125730816 unmapped: 28090368 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:57.189181+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f94c2000/0x0/0x4ffc00000, data 0x1b6712e/0x1cbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.813049793s of 10.035358429s, submitted: 140
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 ms_handle_reset con 0x560f42ce9800 session 0x560f4014a5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa1dc000/0x0/0x4ffc00000, data 0xf2412e/0x107c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1638915 data_alloc: 218103808 data_used: 7651328
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:58.189308+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 32178176 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:59.189420+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 32178176 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:00.189572+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 32178176 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xf2412e/0x107c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:01.189733+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 32178176 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0xf25be9/0x107f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:02.189870+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 32178176 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643089 data_alloc: 218103808 data_used: 7659520
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:03.190027+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 32178176 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 ms_handle_reset con 0x560f42ce9c00 session 0x560f402ebe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 heartbeat osd_stat(store_statfs(0x4fa1de000/0x0/0x4ffc00000, data 0xf25be9/0x107f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:04.190713+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 21233664 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 ms_handle_reset con 0x560f43532000 session 0x560f410e0000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 ms_handle_reset con 0x560f3ed0e400 session 0x560f3f2a8d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 ms_handle_reset con 0x560f43533c00 session 0x560f3ec745a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 ms_handle_reset con 0x560f43533c00 session 0x560f3feccd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 ms_handle_reset con 0x560f461fb000 session 0x560f420bc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 ms_handle_reset con 0x560f3ed0e400 session 0x560f41e01c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:05.190888+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31956992 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:06.191090+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121864192 unmapped: 31956992 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 244 ms_handle_reset con 0x560f42ce9800 session 0x560f410e03c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 244 ms_handle_reset con 0x560f42ce9c00 session 0x560f3f6bb680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:07.191224+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31940608 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 244 ms_handle_reset con 0x560f42ce9c00 session 0x560f41e014a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 244 ms_handle_reset con 0x560f3ed0e400 session 0x560f4014b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673878 data_alloc: 218103808 data_used: 7204864
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.478636742s of 10.288905144s, submitted: 67
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 244 ms_handle_reset con 0x560f43533c00 session 0x560f3feac960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:08.191377+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 121880576 unmapped: 31940608 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 244 ms_handle_reset con 0x560f461fb000 session 0x560f3feb05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 245 ms_handle_reset con 0x560f42ce9800 session 0x560f402eb4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:09.191510+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 31539200 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f9e31000/0x0/0x4ffc00000, data 0x12d1766/0x142c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:10.191660+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 122306560 unmapped: 31514624 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:11.191868+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 29548544 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:12.192002+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124272640 unmapped: 29548544 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f9e2d000/0x0/0x4ffc00000, data 0x12d3337/0x142f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 246 ms_handle_reset con 0x560f43533c00 session 0x560f4115ef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1741163 data_alloc: 234881024 data_used: 10866688
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:13.192152+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:14.192341+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:15.192511+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:16.192684+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f9e2b000/0x0/0x4ffc00000, data 0x12d4ed0/0x1432000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:17.192830+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1741163 data_alloc: 234881024 data_used: 10866688
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:18.193033+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:19.193187+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.313535690s of 11.448345184s, submitted: 27
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:20.193337+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124280832 unmapped: 29540352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f461fb000 session 0x560f413d45a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f9e28000/0x0/0x4ffc00000, data 0x12d6933/0x1435000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:21.193490+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f43532000 session 0x560f3fd4e3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f4384d400 session 0x560f41e1dc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 29122560 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:22.193662+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 29122560 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f461fb400 session 0x560f3f6ba000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f43532000 session 0x560f410e01e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1827427 data_alloc: 234881024 data_used: 11026432
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:23.193814+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126001152 unmapped: 27820032 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:24.194023+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 27770880 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f93b6000/0x0/0x4ffc00000, data 0x1d49933/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:25.194196+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:26.194317+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:27.194439+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f43533c00 session 0x560f4014a1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f4384d400 session 0x560f3ff20780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1840257 data_alloc: 234881024 data_used: 11026432
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f93b6000/0x0/0x4ffc00000, data 0x1d49933/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:28.194591+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f461fb000 session 0x560f420bd4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:29.194720+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f461fb800 session 0x560f3fd4e5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:30.194851+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:31.195202+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f93b6000/0x0/0x4ffc00000, data 0x1d49933/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:32.195373+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1840257 data_alloc: 234881024 data_used: 11026432
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:33.196230+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f93b6000/0x0/0x4ffc00000, data 0x1d49933/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:34.197064+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.866088867s of 15.214266777s, submitted: 58
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:35.197221+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:36.197919+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126083072 unmapped: 27738112 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f4384d400 session 0x560f420bdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:37.199242+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f461fb000 session 0x560f41462000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 28729344 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f93b6000/0x0/0x4ffc00000, data 0x1d49933/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1839297 data_alloc: 234881024 data_used: 11649024
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:38.199390+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125280256 unmapped: 28540928 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:39.199515+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f4140e800 session 0x560f41e1de00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:40.199644+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:41.199809+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f93b5000/0x0/0x4ffc00000, data 0x1d49995/0x1ea9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:42.200053+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140ec00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1843246 data_alloc: 234881024 data_used: 11636736
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f4140ec00 session 0x560f402ea780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:43.200193+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126074880 unmapped: 27746304 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f41409000 session 0x560f41ace780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41406400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f41406400 session 0x560f41e014a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:44.200418+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125542400 unmapped: 28278784 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 heartbeat osd_stat(store_statfs(0x4f919c000/0x0/0x4ffc00000, data 0x1f62995/0x20c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:45.200623+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125542400 unmapped: 28278784 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41408c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:46.200758+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125542400 unmapped: 28278784 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.970570564s of 11.914559364s, submitted: 43
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 ms_handle_reset con 0x560f41409000 session 0x560f413705a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140ec00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:47.200887+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125542400 unmapped: 28278784 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f9197000/0x0/0x4ffc00000, data 0x1f64574/0x20c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918381 data_alloc: 234881024 data_used: 11649024
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 248 ms_handle_reset con 0x560f4140e800 session 0x560f3feb1a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:48.201036+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 26116096 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:49.201221+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 29335552 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 248 ms_handle_reset con 0x560f461fb000 session 0x560f3f4edc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:50.201397+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126328832 unmapped: 27492352 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:51.201626+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 21864448 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f461fbc00 session 0x560f420bd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:52.202609+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127221760 unmapped: 26599424 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f4140ec00 session 0x560f3f6bbe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f41408c00 session 0x560f41e001e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1993457 data_alloc: 234881024 data_used: 11706368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:53.202784+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127221760 unmapped: 26599424 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f8277000/0x0/0x4ffc00000, data 0x2e7d0f1/0x2fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:54.203037+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f8277000/0x0/0x4ffc00000, data 0x2e7d0f1/0x2fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,3])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127311872 unmapped: 26509312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:55.203190+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127328256 unmapped: 26492928 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f42ce9800 session 0x560f410e1e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f3ed0e400 session 0x560f402ead20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f461fbc00 session 0x560f3fd4e1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:56.203312+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f3f61c800 session 0x560f41acfc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127803392 unmapped: 26017792 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.460558891s of 10.048460960s, submitted: 145
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f401c3c00 session 0x560f4040e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f401c3400 session 0x560f4040fa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:57.203462+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f84a3000/0x0/0x4ffc00000, data 0x21760f1/0x22d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 ms_handle_reset con 0x560f3ea7cc00 session 0x560f41df10e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 27549696 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 ms_handle_reset con 0x560f461fb000 session 0x560f41df0000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 ms_handle_reset con 0x560f461fb000 session 0x560f3febd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 ms_handle_reset con 0x560f3f61d400 session 0x560f41acf680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1880111 data_alloc: 218103808 data_used: 7696384
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:58.203625+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f8306000/0x0/0x4ffc00000, data 0x23130f1/0x2476000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126287872 unmapped: 27533312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:59.204057+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126287872 unmapped: 27533312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:00.204303+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126287872 unmapped: 27533312 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:01.204529+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f8de4000/0x0/0x4ffc00000, data 0x2314c6e/0x2479000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 ms_handle_reset con 0x560f3ea7cc00 session 0x560f4115f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 ms_handle_reset con 0x560f3f61c800 session 0x560f4115e3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:02.204682+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1879791 data_alloc: 218103808 data_used: 7692288
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:03.204839+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:04.205023+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:05.205167+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 28508160 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:06.205457+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 28499968 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f8dc1000/0x0/0x4ffc00000, data 0x2339c5e/0x249d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:07.205657+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 28499968 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:08.205850+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1879791 data_alloc: 218103808 data_used: 7692288
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 28499968 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:09.206049+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 28499968 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:10.206203+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 28499968 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:11.206372+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.160706520s of 14.601575851s, submitted: 25
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 28499968 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:12.206581+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f8dbe000/0x0/0x4ffc00000, data 0x233cc5e/0x24a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 28499968 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 ms_handle_reset con 0x560f401c3400 session 0x560f3fd56f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:13.206715+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1880802 data_alloc: 218103808 data_used: 7692288
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 28483584 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f89ae000/0x0/0x4ffc00000, data 0x233cc5e/0x24a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:14.206904+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 28483584 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 ms_handle_reset con 0x560f3f61c800 session 0x560f3fd4f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:15.207035+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 28237824 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:16.207179+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 28237824 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 251 ms_handle_reset con 0x560f3f61d400 session 0x560f3f47ad20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:17.207411+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 28188672 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:18.207528+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1891220 data_alloc: 218103808 data_used: 8257536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f89a5000/0x0/0x4ffc00000, data 0x23437db/0x24a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 28188672 heap: 153821184 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 ms_handle_reset con 0x560f461fb000 session 0x560f3f2a90e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 ms_handle_reset con 0x560f401c3c00 session 0x560f41ace3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 ms_handle_reset con 0x560f461fbc00 session 0x560f3f58d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:19.207663+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 ms_handle_reset con 0x560f461fbc00 session 0x560f41370b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 34078720 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 ms_handle_reset con 0x560f3f61c800 session 0x560f420bd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 ms_handle_reset con 0x560f3f61d400 session 0x560f402eb0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 ms_handle_reset con 0x560f401c3c00 session 0x560f3f2a8d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:20.209012+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126140416 unmapped: 34054144 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 heartbeat osd_stat(store_statfs(0x4f8139000/0x0/0x4ffc00000, data 0x2bad3ba/0x2d14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 253 ms_handle_reset con 0x560f461fb000 session 0x560f420bc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:21.209177+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 253 ms_handle_reset con 0x560f461fb000 session 0x560f3fe1d4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.455391884s of 10.160360336s, submitted: 77
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 253 ms_handle_reset con 0x560f3f61c800 session 0x560f3feb0780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126205952 unmapped: 33988608 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 254 ms_handle_reset con 0x560f3f61d400 session 0x560f410e3c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:22.209358+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126214144 unmapped: 33980416 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 254 ms_handle_reset con 0x560f3ed0e400 session 0x560f41eccf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 ms_handle_reset con 0x560f41409000 session 0x560f3f4eda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:23.209472+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2022230 data_alloc: 234881024 data_used: 12783616
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128180224 unmapped: 32014336 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:24.209609+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 ms_handle_reset con 0x560f401c3c00 session 0x560f3f2a8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 ms_handle_reset con 0x560f461fbc00 session 0x560f3f58dc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128180224 unmapped: 32014336 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 ms_handle_reset con 0x560f43532000 session 0x560f3fd56b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 ms_handle_reset con 0x560f43533c00 session 0x560f41acfa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:25.209724+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f812e000/0x0/0x4ffc00000, data 0x2bb2770/0x2d1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 35676160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 ms_handle_reset con 0x560f3ed0e400 session 0x560f3f58c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:26.209834+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 34512896 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:27.210057+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 125706240 unmapped: 34488320 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:28.210251+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1843364 data_alloc: 218103808 data_used: 4435968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126525440 unmapped: 33669120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f8031000/0x0/0x4ffc00000, data 0x1ca76eb/0x1c7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:29.210423+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 ms_handle_reset con 0x560f3f61c800 session 0x560f3fd57e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126844928 unmapped: 33349632 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:30.210582+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126844928 unmapped: 33349632 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:31.210692+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126894080 unmapped: 33300480 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 255 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.310259819s of 10.585846901s, submitted: 161
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:32.210891+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f8022000/0x0/0x4ffc00000, data 0x1cafaeb/0x1c84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 33366016 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 257 ms_handle_reset con 0x560f3ed0e400 session 0x560f420bc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:33.211038+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1850990 data_alloc: 218103808 data_used: 4440064
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 33366016 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:34.211255+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 257 heartbeat osd_stat(store_statfs(0x4f8021000/0x0/0x4ffc00000, data 0x1cb6123/0x1c8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 33366016 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:35.211383+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 33366016 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 257 ms_handle_reset con 0x560f401c3400 session 0x560f420bcf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:36.211493+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43532000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 257 ms_handle_reset con 0x560f3ea7cc00 session 0x560f41462780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 258 ms_handle_reset con 0x560f43532000 session 0x560f41cb8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 33366016 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 258 ms_handle_reset con 0x560f3ea7cc00 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:37.211645+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f801d000/0x0/0x4ffc00000, data 0x1cb7b96/0x1c91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5f4f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 33357824 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:38.211801+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854637 data_alloc: 218103808 data_used: 4468736
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 33357824 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:39.212017+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 33357824 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 258 handle_osd_map epochs [259,260], i have 258, src has [1,260]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 260 ms_handle_reset con 0x560f3ed0e400 session 0x560f41acef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:40.212144+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 260 heartbeat osd_stat(store_statfs(0x4f9036000/0x0/0x4ffc00000, data 0x1cbb300/0x1c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 33357824 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 260 ms_handle_reset con 0x560f401c3c00 session 0x560f414625a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:41.212287+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 260 ms_handle_reset con 0x560f401c3400 session 0x560f41c1b2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126853120 unmapped: 33341440 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 260 ms_handle_reset con 0x560f3f61c800 session 0x560f41c1b4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.246017456s of 10.004901886s, submitted: 74
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 260 ms_handle_reset con 0x560f3ea7cc00 session 0x560f41ace000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:42.212394+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126853120 unmapped: 33341440 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:43.212518+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 261 ms_handle_reset con 0x560f43533c00 session 0x560f410e2960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1857047 data_alloc: 218103808 data_used: 4448256
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 ms_handle_reset con 0x560f3ed0e400 session 0x560f3fecc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 ms_handle_reset con 0x560f461fbc00 session 0x560f3f4ed2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 33267712 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 ms_handle_reset con 0x560f401c3c00 session 0x560f41e00000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 ms_handle_reset con 0x560f401c3400 session 0x560f410e25a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 ms_handle_reset con 0x560f3f61c800 session 0x560f41c1b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 ms_handle_reset con 0x560f3f61d400 session 0x560f4040cd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:44.212662+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 33775616 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:45.212855+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 ms_handle_reset con 0x560f401c3c00 session 0x560f41c1a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 33767424 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:46.213017+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0xb7c92a/0xcee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 33767424 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets getting new tickets!
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:47.213326+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _finish_auth 0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:47.289537+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 263 ms_handle_reset con 0x560f3ea7cc00 session 0x560f41acf0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 263 ms_handle_reset con 0x560f3ed0e400 session 0x560f41ecd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f9fd9000/0x0/0x4ffc00000, data 0xb7e6dd/0xcf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:48.213474+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1726900 data_alloc: 218103808 data_used: 3821568
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 263 heartbeat osd_stat(store_statfs(0x4f9fdd000/0x0/0x4ffc00000, data 0xb7e6cd/0xcf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 263 ms_handle_reset con 0x560f3f61c800 session 0x560f3fd4ed20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:49.213635+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 263 ms_handle_reset con 0x560f3f61d400 session 0x560f420bcf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:50.213907+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 33677312 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0xb801c8/0xcf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:51.214006+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 264 ms_handle_reset con 0x560f401c3400 session 0x560f420bc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 33677312 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.253735542s of 10.153100014s, submitted: 179
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:52.214169+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 264 ms_handle_reset con 0x560f401c3c00 session 0x560f41cb81e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f9fda000/0x0/0x4ffc00000, data 0xb8022a/0xcf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 33677312 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 264 ms_handle_reset con 0x560f3ed0e400 session 0x560f413714a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:53.214343+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1729559 data_alloc: 218103808 data_used: 3825664
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0xb8022a/0xcf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 33677312 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 265 ms_handle_reset con 0x560f3f61c800 session 0x560f3fd4e780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:54.214491+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 33677312 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:55.214613+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: mgrc ms_handle_reset ms_handle_reset con 0x560f41641000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/878361048
Nov 29 08:16:44 compute-0 ceph-osd[91083]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/878361048,v1:192.168.122.100:6801/878361048]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: get_auth_request con 0x560f401c3c00 auth_method 0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: mgrc handle_mgr_configure stats_period=5
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f9fd4000/0x0/0x4ffc00000, data 0xb8387a/0xcf9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:56.214734+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 ms_handle_reset con 0x560f42489400 session 0x560f41eccf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:57.214928+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:58.215091+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1738577 data_alloc: 218103808 data_used: 3837952
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:59.215285+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:00.215430+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:01.215622+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f9fd3000/0x0/0x4ffc00000, data 0xb838dc/0xcfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:02.215886+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126500864 unmapped: 33693696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.763562679s of 10.645417213s, submitted: 47
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:03.216069+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1738592 data_alloc: 218103808 data_used: 3837952
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f9fd3000/0x0/0x4ffc00000, data 0xb838dc/0xcfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 33685504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:04.216227+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 33685504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:05.216385+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 33685504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f9fd4000/0x0/0x4ffc00000, data 0xb838dc/0xcfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:06.216629+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 33685504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:07.218169+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f9fd4000/0x0/0x4ffc00000, data 0xb838dc/0xcfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 33685504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:08.218283+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1737696 data_alloc: 218103808 data_used: 3842048
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 33685504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 ms_handle_reset con 0x560f426bbc00 session 0x560f3feb0780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 heartbeat osd_stat(store_statfs(0x4f9fd4000/0x0/0x4ffc00000, data 0xb838dc/0xcfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:09.218791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126509056 unmapped: 33685504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 ms_handle_reset con 0x560f401c4400 session 0x560f3f2a8d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:10.219691+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126533632 unmapped: 33660928 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:11.220019+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 267 ms_handle_reset con 0x560f411b7000 session 0x560f3f564b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126533632 unmapped: 33660928 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 268 ms_handle_reset con 0x560f3ed0e400 session 0x560f420bd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:12.220200+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41408400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 268 ms_handle_reset con 0x560f41408400 session 0x560f3f58d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.337656975s of 10.024435997s, submitted: 33
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 269 ms_handle_reset con 0x560f42ce9400 session 0x560f41ace3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:13.220665+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1748290 data_alloc: 218103808 data_used: 3846144
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bd000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 270 ms_handle_reset con 0x560f3ea7c400 session 0x560f3fd4fa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f9fc9000/0x0/0x4ffc00000, data 0xb88b99/0xd02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:14.220918+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 270 ms_handle_reset con 0x560f401bd000 session 0x560f4040e5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 270 ms_handle_reset con 0x560f3ea7c400 session 0x560f3fe1c5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:15.221091+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 271 ms_handle_reset con 0x560f3ed0e400 session 0x560f3f47b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:16.221238+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 271 ms_handle_reset con 0x560f3f61d000 session 0x560f41df1e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:17.221411+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 271 ms_handle_reset con 0x560f401c7800 session 0x560f402eb4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:18.221635+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1752532 data_alloc: 218103808 data_used: 3850240
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0xb8c2bd/0xd07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:19.221785+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 271 ms_handle_reset con 0x560f3ea7c400 session 0x560f413703c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ed0e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f9fc7000/0x0/0x4ffc00000, data 0xb8c2bd/0xd07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126558208 unmapped: 33636352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bd000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:20.221922+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 272 ms_handle_reset con 0x560f401bd000 session 0x560f41ace960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 33628160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 273 ms_handle_reset con 0x560f3f61d000 session 0x560f420bd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:21.222135+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 33628160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:22.222268+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 33628160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 273 ms_handle_reset con 0x560f3ed0e400 session 0x560f41e014a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:23.222410+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1763664 data_alloc: 218103808 data_used: 3862528
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 33628160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:24.222596+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f9fbe000/0x0/0x4ffc00000, data 0xb8faa5/0xd0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 33628160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:25.222772+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126566400 unmapped: 33628160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:26.223029+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1bc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.675226212s of 13.745358467s, submitted: 73
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f9fbe000/0x0/0x4ffc00000, data 0xb8faa5/0xd0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:27.223237+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:28.223428+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1766013 data_alloc: 218103808 data_used: 3862528
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:29.223610+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:30.223820+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:31.224023+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:32.224196+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0xb91508/0xd11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:33.224424+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1764419 data_alloc: 218103808 data_used: 3862528
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:34.224634+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:35.224920+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f9fbe000/0x0/0x4ffc00000, data 0xb914a6/0xd10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:36.225211+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33619968 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.288328648s of 10.658530235s, submitted: 14
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:37.225348+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 274 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126599168 unmapped: 33595392 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:38.225631+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1768913 data_alloc: 218103808 data_used: 3878912
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126599168 unmapped: 33595392 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:39.226744+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5edc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126599168 unmapped: 33595392 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 275 heartbeat osd_stat(store_statfs(0x4f9fba000/0x0/0x4ffc00000, data 0xb93077/0xd13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 275 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:40.227013+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 33579008 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:41.227451+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 33570816 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:42.228063+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126623744 unmapped: 33570816 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:43.228222+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 276 ms_handle_reset con 0x560f3f5edc00 session 0x560f410e1c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1772424 data_alloc: 218103808 data_used: 3878912
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f9fb6000/0x0/0x4ffc00000, data 0xb94c72/0xd17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 33562624 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:44.228350+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 33562624 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:45.228541+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 33562624 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:46.228677+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 276 ms_handle_reset con 0x560f42d1bc00 session 0x560f3feb03c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f9fb7000/0x0/0x4ffc00000, data 0xb94c72/0xd17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 33562624 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:47.228824+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.313364506s of 10.156065941s, submitted: 31
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126648320 unmapped: 33546240 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:48.229018+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1777822 data_alloc: 218103808 data_used: 3887104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126648320 unmapped: 33546240 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 277 ms_handle_reset con 0x560f42d1a800 session 0x560f41e1dc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:49.229152+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 33538048 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:50.229286+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 33529856 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 279 ms_handle_reset con 0x560f411b6c00 session 0x560f3fd56b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:51.229437+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40049800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 33259520 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:52.229633+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 279 heartbeat osd_stat(store_statfs(0x4f9f6b000/0x0/0x4ffc00000, data 0xbd9e4d/0xd61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 280 ms_handle_reset con 0x560f42d1ac00 session 0x560f3febd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 33234944 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:53.229781+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 280 ms_handle_reset con 0x560f40049800 session 0x560f3feac5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1793623 data_alloc: 218103808 data_used: 3887104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 126959616 unmapped: 33234944 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:54.230015+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26165248 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:55.230187+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 26165248 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:56.230364+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 280 handle_osd_map epochs [281,281], i have 281, src has [1,281]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128253952 unmapped: 31940608 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f411b6c00 session 0x560f4040e5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f3f619000 session 0x560f3f2a85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:57.230491+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f953c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f3f953c00 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f4384c800 session 0x560f3fd56000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.078542233s of 10.329142570s, submitted: 36
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f41405000 session 0x560f413d5860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 heartbeat osd_stat(store_statfs(0x4f98c7000/0x0/0x4ffc00000, data 0x127b5a9/0x1406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f3ea81000 session 0x560f41cb9e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f3f619000 session 0x560f41e1dc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 32948224 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:58.230637+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1847650 data_alloc: 218103808 data_used: 3887104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f953c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 ms_handle_reset con 0x560f3f953c00 session 0x560f41acf4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 32940032 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:59.230878+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 32940032 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:00.231084+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 ms_handle_reset con 0x560f411b6c00 session 0x560f420bc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127270912 unmapped: 32923648 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:01.231400+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127270912 unmapped: 32923648 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:02.231608+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 ms_handle_reset con 0x560f41640000 session 0x560f40126f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127270912 unmapped: 32923648 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 ms_handle_reset con 0x560f3f619000 session 0x560f4115eb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f953c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:03.231744+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 ms_handle_reset con 0x560f3f953c00 session 0x560f41e003c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 ms_handle_reset con 0x560f411b6c00 session 0x560f41e00f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f98c4000/0x0/0x4ffc00000, data 0x127d14b/0x140a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1853666 data_alloc: 218103808 data_used: 3887104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 ms_handle_reset con 0x560f41606800 session 0x560f3fd4f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 32899072 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f98c4000/0x0/0x4ffc00000, data 0x127d14b/0x140a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:04.231875+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 ms_handle_reset con 0x560f3f61a800 session 0x560f3f58d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
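
The handle_osd_map lines read as: the bracketed range is the epochs carried by the incoming message, "i have" is this OSD's newest committed epoch, and "src has" is the full range the sender stores. So "epochs [283,283], i have 282, src has [1,283]" delivers exactly the one map the OSD is missing; across this capture osd.2 climbs steadily from epoch 282 to 306, one or two epochs at a time. The same epoch sometimes arrives more than once (see the repeated [284,284] lines below), likely separate messages from different peers, and those extras cannot advance the OSD. Each delivery is typically preceded by the monclient _renew_subs / _send_mon_message pair, the subscription renewal sent to mon.compute-0. A sketch that classifies these lines (MAP_RE and catch_up() are illustrative):

    import re

    MAP_RE = re.compile(
        r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]")

    def catch_up(lines):
        """Yield (first, last, have, status) for each handle_osd_map line."""
        for line in lines:
            m = MAP_RE.search(line)
            if not m:
                continue
            first, last, have = int(m.group(1)), int(m.group(2)), int(m.group(3))
            yield first, last, have, ("advances" if last > have else "stale/duplicate")

    for row in catch_up([
        "osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]",
        "osd.2 283 handle_osd_map epochs [284,284], i have 284, src has [1,284]",
    ]):
        print(row)
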
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f953c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 ms_handle_reset con 0x560f41606800 session 0x560f410e34a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 32940032 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:05.232024+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x127ed2c/0x140e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 31956992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:06.232150+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x127ed2c/0x140e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 31956992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:07.232283+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 31956992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:08.232496+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1901220 data_alloc: 234881024 data_used: 9719808
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:09.232651+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 31956992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:10.232832+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 31956992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140a400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 ms_handle_reset con 0x560f4140a400 session 0x560f41cb8780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.080422401s of 13.534340858s, submitted: 44
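
The _kv_sync_thread utilization lines summarize how busy BlueStore's RocksDB commit thread was over the reporting window: here it was idle for 13.08 s of a 13.53 s window while committing 44 transactions, i.e. about 3% busy and roughly 10 ms of commit work per transaction. A two-line check of that arithmetic (kv_busy() is an illustrative helper, not a Ceph function):

    def kv_busy(idle_s, window_s, submitted):
        """Derive busy fraction and average commit time per transaction."""
        busy = window_s - idle_s
        return {"busy_frac": busy / window_s,
                "avg_busy_per_txn_ms": 1000.0 * busy / submitted}

    print(kv_busy(13.080422401, 13.534340858, 44))
    # {'busy_frac': ~0.034, 'avg_busy_per_txn_ms': ~10.3}
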
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:11.233077+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 31956992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:12.233212+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 31956992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f98bb000/0x0/0x4ffc00000, data 0x12808c5/0x1411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,3])
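
Most heartbeat lines in this capture report an empty "op hist []", but this one shows [0,0,0,0,0,0,0,0,0,0,0,3]. This appears to be the OSD's power-of-two op-queue-age histogram, where bucket i counts queued ops whose age falls in [2^i, 2^(i+1)); the line does not state the units, so treat the decoded ranges below as relative rather than as milliseconds or microseconds. An illustrative decoder under that bucket-layout assumption:

    def decode_pow2_hist(buckets):
        """Map each non-zero bucket i to its half-open range [2**i, 2**(i+1))."""
        return [(2 ** i, 2 ** (i + 1), n) for i, n in enumerate(buckets) if n]

    print(decode_pow2_hist([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3]))
    # [(2048, 4096, 3)]: three queued ops with age in [2048, 4096) units
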
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 283 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 284 ms_handle_reset con 0x560f3f953c00 session 0x560f420bd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:13.233325+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 31948800 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1903480 data_alloc: 234881024 data_used: 9719808
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:14.233455+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 31948800 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:15.233629+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128253952 unmapped: 31940608 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:16.233761+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128253952 unmapped: 31940608 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:17.233935+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 31932416 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 285 ms_handle_reset con 0x560f42489c00 session 0x560f3f564b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 285 heartbeat osd_stat(store_statfs(0x4f98ba000/0x0/0x4ffc00000, data 0x1282434/0x1413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:18.234184+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128253952 unmapped: 31940608 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 285 heartbeat osd_stat(store_statfs(0x4f98ba000/0x0/0x4ffc00000, data 0x1282434/0x1413000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918962 data_alloc: 234881024 data_used: 9756672
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 285 ms_handle_reset con 0x560f45f24800 session 0x560f413703c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:19.234328+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132399104 unmapped: 27795456 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f953c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140a400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 286 ms_handle_reset con 0x560f41606800 session 0x560f41c1a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:20.234487+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133111808 unmapped: 27082752 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f9358000/0x0/0x4ffc00000, data 0x17e302f/0x1976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f9358000/0x0/0x4ffc00000, data 0x17e302f/0x1976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:21.234674+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133103616 unmapped: 27090944 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.839296341s of 10.433144569s, submitted: 45
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 288 ms_handle_reset con 0x560f4140a400 session 0x560f3febd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:22.234821+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133120000 unmapped: 27074560 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:23.235031+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133120000 unmapped: 27074560 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1959664 data_alloc: 234881024 data_used: 9957376
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:24.235209+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133120000 unmapped: 27074560 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:25.235424+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133120000 unmapped: 27074560 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 288 ms_handle_reset con 0x560f3f953c00 session 0x560f3feb0780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f934b000/0x0/0x4ffc00000, data 0x17ea647/0x1980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:26.235545+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133120000 unmapped: 27074560 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:27.235773+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132456448 unmapped: 27738112 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:28.235931+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132489216 unmapped: 27705344 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 289 heartbeat osd_stat(store_statfs(0x4f9344000/0x0/0x4ffc00000, data 0x17f20aa/0x1989000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1967140 data_alloc: 234881024 data_used: 10145792
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140fc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 289 ms_handle_reset con 0x560f4140fc00 session 0x560f410e23c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:29.236070+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132489216 unmapped: 27705344 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:30.236211+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41224000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132513792 unmapped: 27680768 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 290 ms_handle_reset con 0x560f3ff2c800 session 0x560f420bc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 290 ms_handle_reset con 0x560f41224000 session 0x560f4115e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:31.236386+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132521984 unmapped: 27672576 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.606664658s of 10.199641228s, submitted: 32
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:32.236618+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132546560 unmapped: 27648000 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 293 ms_handle_reset con 0x560f3ea80c00 session 0x560f3feadc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:33.236804+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 27631616 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1979852 data_alloc: 234881024 data_used: 10145792
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:34.236999+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 293 ms_handle_reset con 0x560f4384c800 session 0x560f410e3680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 27623424 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 293 heartbeat osd_stat(store_statfs(0x4f9335000/0x0/0x4ffc00000, data 0x17f8f54/0x1996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 293 ms_handle_reset con 0x560f41606800 session 0x560f41e1d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 293 ms_handle_reset con 0x560f401b7000 session 0x560f3fd4fa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 294 ms_handle_reset con 0x560f3f619400 session 0x560f3fd57680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:35.237168+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 27541504 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:36.237310+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132669440 unmapped: 27525120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 295 ms_handle_reset con 0x560f3ea80c00 session 0x560f41df1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:37.237427+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 295 ms_handle_reset con 0x560f41606800 session 0x560f41ecc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 295 heartbeat osd_stat(store_statfs(0x4f932f000/0x0/0x4ffc00000, data 0x17fc758/0x199d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132669440 unmapped: 27525120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 295 ms_handle_reset con 0x560f4384c400 session 0x560f41df1c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bd000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:38.237589+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 27492352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 296 ms_handle_reset con 0x560f401bd000 session 0x560f410e2780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 296 ms_handle_reset con 0x560f4384c800 session 0x560f41e00960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1988262 data_alloc: 234881024 data_used: 10170368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:39.237714+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132702208 unmapped: 27492352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 296 heartbeat osd_stat(store_statfs(0x4f9331000/0x0/0x4ffc00000, data 0x17fe1f3/0x199c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:40.237906+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132710400 unmapped: 27484160 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:41.238121+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133767168 unmapped: 26427392 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 298 ms_handle_reset con 0x560f3f619400 session 0x560f41e1d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:42.238257+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133767168 unmapped: 26427392 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.645552635s of 11.045845032s, submitted: 136
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 299 ms_handle_reset con 0x560f401bb400 session 0x560f3f6bb680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 299 ms_handle_reset con 0x560f3ea80c00 session 0x560f413703c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:43.238379+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132751360 unmapped: 27443200 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1997302 data_alloc: 234881024 data_used: 10174464
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:44.238559+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f9328000/0x0/0x4ffc00000, data 0x18035f0/0x19a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 299 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 299 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 27418624 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:45.238692+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132759552 unmapped: 27435008 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 301 ms_handle_reset con 0x560f3f5ed000 session 0x560f3fecc3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 301 ms_handle_reset con 0x560f3ea80c00 session 0x560f413810e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:46.238835+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 27410432 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:47.239095+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 27402240 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 301 handle_osd_map epochs [302,303], i have 301, src has [1,303]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:48.239285+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 27353088 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2009142 data_alloc: 234881024 data_used: 10436608
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f931e000/0x0/0x4ffc00000, data 0x180a480/0x19af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f3f61c000 session 0x560f420bd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:49.239854+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132849664 unmapped: 27344896 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:50.240065+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132849664 unmapped: 27344896 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f3f619000 session 0x560f41e1d2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f411b6c00 session 0x560f41e1da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bb800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f401bbc00 session 0x560f41cb8780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:51.240271+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f401bbc00 session 0x560f410e34a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f426bb800 session 0x560f41370f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f931e000/0x0/0x4ffc00000, data 0x180a480/0x19af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129253376 unmapped: 30941184 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f42ce8800 session 0x560f41c1ab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:52.240453+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129253376 unmapped: 30941184 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.905030251s of 10.024862289s, submitted: 147
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 ms_handle_reset con 0x560f401c4000 session 0x560f410e2960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:53.240591+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 30900224 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1882957 data_alloc: 218103808 data_used: 4206592
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41408400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f3f61ac00 session 0x560f41371c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f41408400 session 0x560f3fd4f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:54.240767+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 30883840 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 heartbeat osd_stat(store_statfs(0x4f9f21000/0x0/0x4ffc00000, data 0xc04f1b/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f3f61ac00 session 0x560f41df0000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f401bbc00 session 0x560f41e00f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:55.240905+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bb800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f401c4000 session 0x560f41463a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f426bb800 session 0x560f4115eb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 30875648 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bb800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f426bb800 session 0x560f41df0d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:56.241062+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 30867456 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f3f61ac00 session 0x560f40152780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f401bbc00 session 0x560f40126f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41404400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f41404400 session 0x560f410e12c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f41225000 session 0x560f3fd4f0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 ms_handle_reset con 0x560f3ff2c800 session 0x560f3f479e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:57.241174+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:58.241317+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 ms_handle_reset con 0x560f3f61ac00 session 0x560f3f4ecb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 ms_handle_reset con 0x560f401bbc00 session 0x560f41e01c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1886603 data_alloc: 218103808 data_used: 3956736
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:59.241472+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:00.241679+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f9f5e000/0x0/0x4ffc00000, data 0xbc69d0/0xd6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:01.241867+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:02.242064+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 ms_handle_reset con 0x560f41225000 session 0x560f3feccb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ba000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 ms_handle_reset con 0x560f401ba000 session 0x560f410e2f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 ms_handle_reset con 0x560f3f61ac00 session 0x560f41acef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:03.242207+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 ms_handle_reset con 0x560f3ff2c800 session 0x560f3feb1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.674047470s of 10.558382988s, submitted: 71
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 ms_handle_reset con 0x560f401bbc00 session 0x560f41e1cd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1883587 data_alloc: 218103808 data_used: 3956736
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f9f62000/0x0/0x4ffc00000, data 0xbc694f/0xd6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:04.242355+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f9f62000/0x0/0x4ffc00000, data 0xbc694f/0xd6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:05.242477+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:06.242617+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f9f62000/0x0/0x4ffc00000, data 0xbc694f/0xd6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:07.242756+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:08.242887+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 30859264 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 ms_handle_reset con 0x560f4140cc00 session 0x560f402eb4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1889575 data_alloc: 218103808 data_used: 3964928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:09.243028+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f5d000/0x0/0x4ffc00000, data 0xbc84dc/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:10.243228+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:11.243417+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 ms_handle_reset con 0x560f4384dc00 session 0x560f420bc3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 ms_handle_reset con 0x560f3f61ac00 session 0x560f3feb03c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 ms_handle_reset con 0x560f3ff2c800 session 0x560f413812c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:12.243570+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:13.243738+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1889575 data_alloc: 218103808 data_used: 3964928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:14.243907+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 ms_handle_reset con 0x560f401bbc00 session 0x560f410e1c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:15.244064+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f5d000/0x0/0x4ffc00000, data 0xbc84dc/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:16.244232+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f5d000/0x0/0x4ffc00000, data 0xbc84dc/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:17.244364+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:18.244521+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1889575 data_alloc: 218103808 data_used: 3964928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:19.244664+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:20.244841+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:21.245061+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f5d000/0x0/0x4ffc00000, data 0xbc84dc/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:22.245222+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:23.245546+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 30834688 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1889575 data_alloc: 218103808 data_used: 3964928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:24.245664+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f5d000/0x0/0x4ffc00000, data 0xbc84dc/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 31162368 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:25.245767+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 31162368 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f5d000/0x0/0x4ffc00000, data 0xbc84dc/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:26.245877+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128106496 unmapped: 32088064 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:27.246022+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128106496 unmapped: 32088064 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:28.246180+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128106496 unmapped: 32088064 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1889575 data_alloc: 218103808 data_used: 3964928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:29.246332+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128106496 unmapped: 32088064 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:30.246485+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128106496 unmapped: 32088064 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f9f5d000/0x0/0x4ffc00000, data 0xbc84dc/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:31.246640+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128106496 unmapped: 32088064 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.153665543s of 28.177976608s, submitted: 6
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:32.246775+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 32104448 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:33.246937+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 32104448 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1891660 data_alloc: 218103808 data_used: 3964928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:34.247078+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f9f5a000/0x0/0x4ffc00000, data 0xbca0ad/0xd73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 307 ms_handle_reset con 0x560f3ec7a000 session 0x560f3fd56000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 32104448 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 307 ms_handle_reset con 0x560f3ff2dc00 session 0x560f402ebe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:35.247244+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128090112 unmapped: 32104448 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 307 ms_handle_reset con 0x560f3f61ac00 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:36.247352+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 31047680 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f9f5a000/0x0/0x4ffc00000, data 0xbca0ca/0xd74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:37.247499+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 ms_handle_reset con 0x560f3ec7a000 session 0x560f41e00d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 ms_handle_reset con 0x560f3ea7c400 session 0x560f3fe1cf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 30752768 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:38.247694+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 30752768 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 ms_handle_reset con 0x560f3ff2c800 session 0x560f3f2a85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1906671 data_alloc: 218103808 data_used: 3977216
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:39.247842+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129441792 unmapped: 30752768 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:40.248055+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61a400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 ms_handle_reset con 0x560f42489400 session 0x560f41ace960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 ms_handle_reset con 0x560f3f61a400 session 0x560f40126f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 ms_handle_reset con 0x560f42489400 session 0x560f40152780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 31752192 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:41.248262+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 31752192 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42488000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.862378120s of 10.579767227s, submitted: 68
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:42.248382+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f9f13000/0x0/0x4ffc00000, data 0xc0c11e/0xdbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128442368 unmapped: 31752192 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 309 ms_handle_reset con 0x560f42488000 session 0x560f4115eb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:43.248536+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 31735808 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40047000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917740 data_alloc: 218103808 data_used: 3993600
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:44.248745+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 309 ms_handle_reset con 0x560f40047000 session 0x560f410e2960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128458752 unmapped: 31735808 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:45.248868+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 31727616 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 309 ms_handle_reset con 0x560f3f61c800 session 0x560f41c1ab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:46.249038+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128466944 unmapped: 31727616 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:47.249149+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f9f0f000/0x0/0x4ffc00000, data 0xc0dce3/0xdbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 309 ms_handle_reset con 0x560f3f61b400 session 0x560f41e1da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 128475136 unmapped: 31719424 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:48.249365+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 30670848 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 309 heartbeat osd_stat(store_statfs(0x4f9f0d000/0x0/0x4ffc00000, data 0xc0dd16/0xdc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1921507 data_alloc: 218103808 data_used: 3993600
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:49.249565+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f3ea7d800 session 0x560f3fecc3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 30670848 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:50.249682+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 30670848 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:51.249935+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f41607000 session 0x560f41e1d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129499136 unmapped: 30695424 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f9f08000/0x0/0x4ffc00000, data 0xc0f8b6/0xdc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f25800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:52.250129+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f4140f000 session 0x560f3f4794a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f4384cc00 session 0x560f41ecc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129515520 unmapped: 30679040 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f3ea7d800 session 0x560f3febdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f3f61b400 session 0x560f3f47ad20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:53.250313+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129515520 unmapped: 30679040 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f4140f000 session 0x560f3fd4fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1928533 data_alloc: 218103808 data_used: 4268032
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:54.250463+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129515520 unmapped: 30679040 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:55.250625+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 30670848 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:56.250858+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f9f08000/0x0/0x4ffc00000, data 0xc0f8b6/0xdc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 30670848 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f41225000 session 0x560f410e2780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.558580399s of 14.851441383s, submitted: 45
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f45f25800 session 0x560f41acfc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f9f08000/0x0/0x4ffc00000, data 0xc0f8b6/0xdc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:57.251012+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 ms_handle_reset con 0x560f41607000 session 0x560f41462960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f25800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f45f25800 session 0x560f3fd57680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 30646272 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f3ea7d800 session 0x560f3fd4fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:58.251153+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 30646272 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f41225000 session 0x560f3fd561e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f3f61b400 session 0x560f3f47ad20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:59.251333+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1933669 data_alloc: 218103808 data_used: 4280320
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f3ea7d800 session 0x560f3febdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129572864 unmapped: 30621696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f41225000 session 0x560f41ecc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f41607000 session 0x560f41e1d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:00.251489+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f25800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f45f25800 session 0x560f41e1da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129572864 unmapped: 30621696 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f4140f000 session 0x560f4115eb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f45f24c00 session 0x560f3fe1d2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:01.251745+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f3ea7d800 session 0x560f420bc3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f41225000 session 0x560f41e01680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129597440 unmapped: 30597120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:02.251881+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f9f44000/0x0/0x4ffc00000, data 0xbd13d0/0xd87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129597440 unmapped: 30597120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 ms_handle_reset con 0x560f3f61ac00 session 0x560f3f4ed2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:03.252070+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129597440 unmapped: 30597120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:04.252218+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1933354 data_alloc: 218103808 data_used: 4018176
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 ms_handle_reset con 0x560f4140f000 session 0x560f40152780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129597440 unmapped: 30597120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:05.252350+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 129605632 unmapped: 30588928 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 ms_handle_reset con 0x560f3f61ac00 session 0x560f3f4edc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:06.252500+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 26910720 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:07.252684+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133283840 unmapped: 26910720 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.023476601s of 10.731188774s, submitted: 99
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 ms_handle_reset con 0x560f45f24c00 session 0x560f4115e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 ms_handle_reset con 0x560f3ea7d800 session 0x560f3f2a83c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 ms_handle_reset con 0x560f41225000 session 0x560f40153860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:08.252905+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 heartbeat osd_stat(store_statfs(0x4f9b34000/0x0/0x4ffc00000, data 0xbd2f91/0xd89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 26894336 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:09.253088+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1943368 data_alloc: 218103808 data_used: 7495680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 ms_handle_reset con 0x560f3f61d800 session 0x560f41463680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 26894336 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0xbd2f81/0xd88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:10.253263+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 26894336 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:11.253457+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 ms_handle_reset con 0x560f3ea7d800 session 0x560f3feccd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 26894336 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42488000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:12.253586+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 26877952 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 313 ms_handle_reset con 0x560f42488000 session 0x560f4115fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:13.253772+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 26836992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:14.254011+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1951804 data_alloc: 218103808 data_used: 7495680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 ms_handle_reset con 0x560f3f618400 session 0x560f41ace000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f9b2f000/0x0/0x4ffc00000, data 0xbd65ca/0xd8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133357568 unmapped: 26836992 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 ms_handle_reset con 0x560f42ce9400 session 0x560f410e05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42488400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 ms_handle_reset con 0x560f42d1ac00 session 0x560f3f4ec960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 ms_handle_reset con 0x560f42488400 session 0x560f410e01e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:15.254188+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 ms_handle_reset con 0x560f3ea7d800 session 0x560f41cb8f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 26501120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:16.254352+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 26501120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:17.254508+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 26501120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 handle_osd_map epochs [315,316], i have 314, src has [1,316]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.575884819s of 10.332136154s, submitted: 93
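
The bluestore _kv_sync_thread utilization lines give an idle/elapsed pair per window plus a count of submitted transactions, so the kv commit thread's busy fraction falls straight out: here 10.332 s minus 9.576 s is about 0.756 s busy, roughly 7%, for 93 submits. A one-function sketch:

    import re

    KV_RE = re.compile(
        r"_kv_sync_thread utilization: idle (?P<idle>[\d.]+)s "
        r"of (?P<span>[\d.]+)s, submitted: (?P<n>\d+)"
    )

    def kv_busy(line: str) -> tuple[float, int] | None:
        """Return (busy fraction, submitted count) for one window."""
        m = KV_RE.search(line)
        if not m:
            return None
        idle, span, n = float(m["idle"]), float(m["span"]), int(m["n"])
        return (span - idle) / span, n

    print(kv_busy("bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread "
                  "utilization: idle 9.575884819s of 10.332136154s, "
                  "submitted: 93"))
    # (~0.073, 93): the kv sync thread was ~7% busy in this window
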
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f957e000/0x0/0x4ffc00000, data 0x118a5ba/0x1340000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 314 handle_osd_map epochs [315,315], i have 316, src has [1,316]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:18.254711+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 26501120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:19.255520+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2003052 data_alloc: 218103808 data_used: 7507968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 26501120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:20.256741+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133693440 unmapped: 26501120 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:21.257449+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 316 ms_handle_reset con 0x560f4384c000 session 0x560f3f4ed0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9577000/0x0/0x4ffc00000, data 0x118dbb6/0x1346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133701632 unmapped: 26492928 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:22.258080+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133701632 unmapped: 26492928 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:23.260094+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133701632 unmapped: 26492928 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:24.260668+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2003052 data_alloc: 218103808 data_used: 7507968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133709824 unmapped: 26484736 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 ms_handle_reset con 0x560f3ec7b800 session 0x560f3febcd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 ms_handle_reset con 0x560f3f5ed400 session 0x560f410e2d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:25.260915+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f9572000/0x0/0x4ffc00000, data 0x118f7a5/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133709824 unmapped: 26484736 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 ms_handle_reset con 0x560f3ea7d800 session 0x560f3f565860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:26.261053+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f9572000/0x0/0x4ffc00000, data 0x118f7a5/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133709824 unmapped: 26484736 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:27.261413+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42488400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 ms_handle_reset con 0x560f42488400 session 0x560f3fd572c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133709824 unmapped: 26484736 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:28.261566+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f9572000/0x0/0x4ffc00000, data 0x118f7a5/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133709824 unmapped: 26484736 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.975066185s of 10.957282066s, submitted: 23
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:29.262144+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2011104 data_alloc: 218103808 data_used: 7507968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 heartbeat osd_stat(store_statfs(0x4f9572000/0x0/0x4ffc00000, data 0x118f7a5/0x134b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133718016 unmapped: 26476544 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
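
The _renew_subs / _send_mon_message pairs show the OSD refreshing its monitor subscriptions; the target address v2:192.168.122.100:3300/0 is the msgr2 protocol on its standard port 3300, with what appears to be the usual zero nonce for a monitor. Each renewal in this burst is promptly answered with a new osdmap epoch. A sketch that pulls the address apart, assuming only the v2:ip:port/nonce shape printed here:

    import re

    ADDR_RE = re.compile(
        r"_send_mon_message to (?P<name>\S+) at "
        r"v2:(?P<ip>[\d.]+):(?P<port>\d+)/(?P<nonce>\d+)"
    )

    m = ADDR_RE.search("monclient: _send_mon_message to mon.compute-0 "
                       "at v2:192.168.122.100:3300/0")
    print(m.groupdict())
    # {'name': 'mon.compute-0', 'ip': '192.168.122.100',
    #  'port': '3300', 'nonce': '0'}
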
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:30.262358+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 318 ms_handle_reset con 0x560f4384c000 session 0x560f410e2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133726208 unmapped: 26468352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:31.262559+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 318 ms_handle_reset con 0x560f4140c000 session 0x560f3fd56f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133726208 unmapped: 26468352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:32.262675+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133726208 unmapped: 26468352 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:33.262860+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 318 ms_handle_reset con 0x560f41405800 session 0x560f420bcf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 318 ms_handle_reset con 0x560f41607800 session 0x560f3fd4ef00
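
Note that the same connection pointer can appear twice in quick succession, as 0x560f41405800 does just above: a challenge, a reset, then another challenge. Freed connection objects are commonly reallocated at the same heap address, so pointer equality does not identify a connection over time; any pairing of challenges with resets has to be sequential. A sketch of that pairing:

    import re
    from collections import defaultdict

    CHALLENGE_RE = re.compile(
        r"handle_auth_request added challenge on (0x[0-9a-f]+)")
    RESET_RE = re.compile(
        r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")

    def pair_connections(lines):
        """Pair each auth challenge with the next reset on the same
        connection pointer; sequential matching copes with reuse."""
        pending = defaultdict(list)  # con pointer -> open challenge indexes
        pairs = []
        for i, line in enumerate(lines):
            if (m := CHALLENGE_RE.search(line)):
                pending[m.group(1)].append(i)
            elif (m := RESET_RE.search(line)):
                con, session = m.groups()
                start = pending[con].pop(0) if pending[con] else None
                pairs.append((con, session, start, i))
        return pairs

    log = [
        "monclient: handle_auth_request added challenge on 0x560f41405800",
        "osd.2 318 ms_handle_reset con 0x560f41405800 session 0x560f420bcf00",
        "monclient: handle_auth_request added challenge on 0x560f41405800",
        "osd.2 319 ms_handle_reset con 0x560f41405800 session 0x560f41acfa40",
    ]
    print(pair_connections(log))  # two connections behind one reused pointer
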
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133767168 unmapped: 26427392 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:34.263009+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2017856 data_alloc: 218103808 data_used: 7516160
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 318 handle_osd_map epochs [319,319], i have 319, src has [1,319]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 26378240 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 319 ms_handle_reset con 0x560f41405800 session 0x560f41acfa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:35.263140+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1192f16/0x1352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42488400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 319 ms_handle_reset con 0x560f42488400 session 0x560f3f6bb680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134234112 unmapped: 25960448 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 319 ms_handle_reset con 0x560f4384c000 session 0x560f3f47b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:36.263354+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134463488 unmapped: 25731072 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 319 ms_handle_reset con 0x560f3f61c000 session 0x560f410e2d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:37.263546+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 319 heartbeat osd_stat(store_statfs(0x4f956c000/0x0/0x4ffc00000, data 0x1192f16/0x1352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134471680 unmapped: 25722880 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:38.263668+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134471680 unmapped: 25722880 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:39.263956+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.767885208s of 10.167846680s, submitted: 61
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2059267 data_alloc: 234881024 data_used: 12263424
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 320 ms_handle_reset con 0x560f3ea7d800 session 0x560f420bc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 320 ms_handle_reset con 0x560f4140c000 session 0x560f3f47a5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 320 ms_handle_reset con 0x560f3f61c000 session 0x560f3febcd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134471680 unmapped: 25722880 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:40.264186+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 320 ms_handle_reset con 0x560f41405800 session 0x560f3f479860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 25698304 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:41.264508+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 25698304 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:42.264806+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40047000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 320 ms_handle_reset con 0x560f40047000 session 0x560f41463a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 25698304 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1194a62/0x1353000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:43.264999+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 25698304 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:44.265236+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2059516 data_alloc: 234881024 data_used: 12263424
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 321 ms_handle_reset con 0x560f3ea7d800 session 0x560f3ec74780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134512640 unmapped: 25681920 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:45.265465+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 321 heartbeat osd_stat(store_statfs(0x4f9568000/0x0/0x4ffc00000, data 0x11964d1/0x1355000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134512640 unmapped: 25681920 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:46.265667+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 321 ms_handle_reset con 0x560f401c2000 session 0x560f401525a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134520832 unmapped: 25673728 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:47.265847+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134520832 unmapped: 25673728 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:48.266030+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 322 ms_handle_reset con 0x560f3f619400 session 0x560f4115e000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134529024 unmapped: 25665536 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:49.266220+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.463410854s of 10.015022278s, submitted: 82
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2067964 data_alloc: 234881024 data_used: 12263424
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f9564000/0x0/0x4ffc00000, data 0x11980b2/0x1359000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132251648 unmapped: 27942912 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:50.266458+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132251648 unmapped: 27942912 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 324 ms_handle_reset con 0x560f3f5ec000 session 0x560f3fecd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 324 ms_handle_reset con 0x560f42489800 session 0x560f401532c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:51.266647+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 324 ms_handle_reset con 0x560f426bbc00 session 0x560f41e1cb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 324 ms_handle_reset con 0x560f401b8c00 session 0x560f4014bc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 27410432 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:52.266860+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 324 ms_handle_reset con 0x560f4384d400 session 0x560f410e2780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 27410432 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f9b12000/0x0/0x4ffc00000, data 0xbe76ca/0xdab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:53.267322+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 325 ms_handle_reset con 0x560f41606400 session 0x560f41462960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 27402240 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:54.267812+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1997152 data_alloc: 218103808 data_used: 8101888
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f411b6c00 session 0x560f3fd561e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 27402240 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f41606400 session 0x560f3f479c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:55.267994+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f401b8c00 session 0x560f41ecc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f426bbc00 session 0x560f41e1d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 27598848 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:56.268337+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40049c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f40049c00 session 0x560f3fe1d2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 27590656 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:57.268597+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 27590656 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:58.268833+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9b0b000/0x0/0x4ffc00000, data 0xbead7c/0xdb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 27590656 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:59.269032+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2001511 data_alloc: 218103808 data_used: 7573504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.039639473s of 10.434607506s, submitted: 43
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f3f61ac00 session 0x560f414630e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 27574272 heap: 160194560 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:00.269208+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f41409400 session 0x560f410e05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f3f61ac00 session 0x560f4040e5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9b0b000/0x0/0x4ffc00000, data 0xbeadef/0xdb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40049c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 ms_handle_reset con 0x560f40049c00 session 0x560f41e01680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 31604736 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:01.269437+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9455000/0x0/0x4ffc00000, data 0x12a0def/0x1469000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 31604736 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:02.269875+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 31604736 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:03.270029+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f9455000/0x0/0x4ffc00000, data 0x12a0def/0x1469000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 31604736 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 327 ms_handle_reset con 0x560f3ec7b800 session 0x560f4115e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:04.270289+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2062593 data_alloc: 218103808 data_used: 7585792
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 31604736 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f9450000/0x0/0x4ffc00000, data 0x12a2862/0x146d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:05.270541+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132792320 unmapped: 31604736 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 327 ms_handle_reset con 0x560f401c7800 session 0x560f3fecd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:06.270721+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 328 ms_handle_reset con 0x560f401c7800 session 0x560f410e23c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 31588352 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:07.270952+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 328 ms_handle_reset con 0x560f3ec7b800 session 0x560f41df1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 329 ms_handle_reset con 0x560f411b7c00 session 0x560f3fd4fe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 329 ms_handle_reset con 0x560f4140c000 session 0x560f3ec75860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 31588352 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:08.271225+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 31588352 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:09.271461+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2071021 data_alloc: 218103808 data_used: 7589888
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 31588352 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:10.271683+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 329 heartbeat osd_stat(store_statfs(0x4f9448000/0x0/0x4ffc00000, data 0x12a644e/0x1474000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 31588352 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:11.271907+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181a400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.362947464s of 11.915024757s, submitted: 51
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 329 ms_handle_reset con 0x560f4181a400 session 0x560f4040e5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 31580160 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:12.272154+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 330 ms_handle_reset con 0x560f3ec7b800 session 0x560f414630e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132849664 unmapped: 31547392 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:13.272411+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f9445000/0x0/0x4ffc00000, data 0x12a802d/0x1478000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 331 ms_handle_reset con 0x560f401c7800 session 0x560f41df01e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 331 ms_handle_reset con 0x560f4181a800 session 0x560f4115fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 331 ms_handle_reset con 0x560f411b7c00 session 0x560f4040cd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 132857856 unmapped: 31539200 heap: 164397056 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:14.272572+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 331 ms_handle_reset con 0x560f4140c000 session 0x560f41ace000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f9441000/0x0/0x4ffc00000, data 0x12a9baa/0x147b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2078723 data_alloc: 218103808 data_used: 7593984
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 138125312 unmapped: 30474240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:15.272719+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 ms_handle_reset con 0x560f3ec7b800 session 0x560f41380f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 ms_handle_reset con 0x560f411b7c00 session 0x560f41df1e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 ms_handle_reset con 0x560f401c7800 session 0x560f3ec74780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134012928 unmapped: 34586624 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:16.273071+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 ms_handle_reset con 0x560f4140c000 session 0x560f41e1d2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 34570240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:17.273284+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 34570240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:18.273555+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 heartbeat osd_stat(store_statfs(0x4f7c40000/0x0/0x4ffc00000, data 0x2aab77a/0x2c7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 34570240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 ms_handle_reset con 0x560f41225c00 session 0x560f410e1e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:19.273712+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2251614 data_alloc: 218103808 data_used: 7618560
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 34570240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:20.274107+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 heartbeat osd_stat(store_statfs(0x4f7c40000/0x0/0x4ffc00000, data 0x2aab77a/0x2c7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 34570240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:21.274364+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.218190193s of 10.088994026s, submitted: 83
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 333 ms_handle_reset con 0x560f3f618800 session 0x560f3febcd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 34570240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:22.274599+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 333 ms_handle_reset con 0x560f42489400 session 0x560f41df10e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40188800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 334 ms_handle_reset con 0x560f40188800 session 0x560f3f565860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 334 ms_handle_reset con 0x560f3ec7b800 session 0x560f41c1a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134029312 unmapped: 34570240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:23.274809+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 335 ms_handle_reset con 0x560f42ce9c00 session 0x560f41c1a5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 34545664 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:24.275371+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2265797 data_alloc: 218103808 data_used: 7630848
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 34545664 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:25.275513+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 ms_handle_reset con 0x560f4140dc00 session 0x560f41df14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 ms_handle_reset con 0x560f3ec7b800 session 0x560f40152b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f7c34000/0x0/0x4ffc00000, data 0x2ab0e04/0x2c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 ms_handle_reset con 0x560f401c3000 session 0x560f3f47ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40188800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 ms_handle_reset con 0x560f40188800 session 0x560f402ea780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 136101888 unmapped: 32497664 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:26.275670+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 ms_handle_reset con 0x560f461fac00 session 0x560f3f58d860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134856704 unmapped: 33742848 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:27.275779+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 ms_handle_reset con 0x560f3ec7b800 session 0x560f41df0960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134856704 unmapped: 33742848 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:28.275916+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f7c33000/0x0/0x4ffc00000, data 0x2ab2686/0x2c8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40188800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 ms_handle_reset con 0x560f40188800 session 0x560f41eccf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c3000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134856704 unmapped: 33742848 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:29.276198+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 337 ms_handle_reset con 0x560f461fac00 session 0x560f41e014a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 337 ms_handle_reset con 0x560f4384d400 session 0x560f410e2b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 337 heartbeat osd_stat(store_statfs(0x4f7c33000/0x0/0x4ffc00000, data 0x2ab2686/0x2c8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2323244 data_alloc: 234881024 data_used: 10379264
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 337 ms_handle_reset con 0x560f3f619800 session 0x560f41acf680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 338 ms_handle_reset con 0x560f4140dc00 session 0x560f3feb0f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 33718272 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 338 ms_handle_reset con 0x560f3f619800 session 0x560f41c1b680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 338 ms_handle_reset con 0x560f401c3000 session 0x560f410e01e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:30.276405+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40188800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 33718272 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:31.276757+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 338 heartbeat osd_stat(store_statfs(0x4f7c2a000/0x0/0x4ffc00000, data 0x2ab5e62/0x2c92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 338 ms_handle_reset con 0x560f40188800 session 0x560f3fead860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 33685504 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:32.276877+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 33685504 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:33.277028+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.041107178s of 11.557394028s, submitted: 69
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 ms_handle_reset con 0x560f40048c00 session 0x560f3fd57c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f619800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f7c2c000/0x0/0x4ffc00000, data 0x2ab5e62/0x2c92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:34.277215+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 33677312 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 ms_handle_reset con 0x560f3f619800 session 0x560f420bd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2352752 data_alloc: 234881024 data_used: 13651968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:35.277347+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 32440320 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40188800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 ms_handle_reset con 0x560f40188800 session 0x560f3fe1c780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:36.277626+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 32440320 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:37.277799+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 141254656 unmapped: 27344896 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f7c2a000/0x0/0x4ffc00000, data 0x2ab7a41/0x2c94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,14])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:38.278003+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 141254656 unmapped: 27344896 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f6308000/0x0/0x4ffc00000, data 0x322ba41/0x3408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f25400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ec00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 ms_handle_reset con 0x560f3f61ec00 session 0x560f3feb05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:39.278134+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 142860288 unmapped: 25739264 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40047800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 340 ms_handle_reset con 0x560f40047800 session 0x560f3f4790e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2421782 data_alloc: 234881024 data_used: 14372864
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:40.278274+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 24838144 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 340 handle_osd_map epochs [341,341], i have 341, src has [1,341]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 341 ms_handle_reset con 0x560f3f61ac00 session 0x560f41c1ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 341 ms_handle_reset con 0x560f45f25400 session 0x560f402ead20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:41.278429+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 142344192 unmapped: 26255360 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:42.278623+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 142344192 unmapped: 26255360 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 342 ms_handle_reset con 0x560f3ea81c00 session 0x560f3f58d860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 342 ms_handle_reset con 0x560f3f5ec400 session 0x560f3f565860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:43.278799+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 142319616 unmapped: 26279936 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.749194145s of 10.691642761s, submitted: 181
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:44.278954+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 26263552 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f62f6000/0x0/0x4ffc00000, data 0x3245d54/0x3426000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2430255 data_alloc: 234881024 data_used: 14200832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:45.279129+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 26263552 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:46.279293+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146227200 unmapped: 22372352 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.5 total, 600.0 interval
                                           Cumulative writes: 16K writes, 62K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 5324 syncs, 3.09 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6905 writes, 23K keys, 6905 commit groups, 1.0 writes per commit group, ingest: 19.97 MB, 0.03 MB/s
                                           Interval WAL: 6905 writes, 2834 syncs, 2.44 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 343 heartbeat osd_stat(store_statfs(0x4f62f5000/0x0/0x4ffc00000, data 0x32477d3/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140a400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 343 ms_handle_reset con 0x560f401b6400 session 0x560f41ace1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 343 ms_handle_reset con 0x560f4140a400 session 0x560f41df01e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:47.279507+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 343 ms_handle_reset con 0x560f3ea81c00 session 0x560f414630e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146251776 unmapped: 22347776 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 344 ms_handle_reset con 0x560f3f5ec400 session 0x560f410e2f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 344 ms_handle_reset con 0x560f401b6400 session 0x560f41e1de00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f25400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:48.279660+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 344 ms_handle_reset con 0x560f45f25400 session 0x560f3feb1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147038208 unmapped: 21561344 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 344 ms_handle_reset con 0x560f401c5800 session 0x560f401525a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 344 handle_osd_map epochs [344,345], i have 344, src has [1,345]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:49.279838+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 19800064 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2508851 data_alloc: 234881024 data_used: 14352384
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 345 heartbeat osd_stat(store_statfs(0x4f5a34000/0x0/0x4ffc00000, data 0x3aa2e31/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:50.280052+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 19775488 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 345 heartbeat osd_stat(store_statfs(0x4f5a34000/0x0/0x4ffc00000, data 0x3aa2e31/0x3c88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 346 ms_handle_reset con 0x560f3ea81c00 session 0x560f3feac3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:51.280261+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148267008 unmapped: 20332544 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:52.280444+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148267008 unmapped: 20332544 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f5a94000/0x0/0x4ffc00000, data 0x3aa49a0/0x3c8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:53.280708+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148291584 unmapped: 20307968 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 347 ms_handle_reset con 0x560f461fbc00 session 0x560f3fe1d4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:54.280918+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148291584 unmapped: 20307968 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2506903 data_alloc: 234881024 data_used: 14368768
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.594077110s of 10.929092407s, submitted: 179
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:55.281055+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 348 ms_handle_reset con 0x560f3f61ac00 session 0x560f3f4eda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:56.281254+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 348 ms_handle_reset con 0x560f3ec7b800 session 0x560f420bd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 348 ms_handle_reset con 0x560f4384d400 session 0x560f41acef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f5a90000/0x0/0x4ffc00000, data 0x3aa8108/0x3c8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 348 ms_handle_reset con 0x560f461fac00 session 0x560f410e1c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:57.281401+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:58.283041+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:59.283191+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 350 ms_handle_reset con 0x560f41405800 session 0x560f4115f0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 350 ms_handle_reset con 0x560f401ae400 session 0x560f41462000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 350 heartbeat osd_stat(store_statfs(0x4f5a8c000/0x0/0x4ffc00000, data 0x3aa9d05/0x3c91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2514389 data_alloc: 234881024 data_used: 14372864
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:00.283429+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:01.284372+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401aec00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:02.284565+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 351 heartbeat osd_stat(store_statfs(0x4f5a87000/0x0/0x4ffc00000, data 0x3aaced7/0x3c96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 352 ms_handle_reset con 0x560f3ea7cc00 session 0x560f410e2d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 352 ms_handle_reset con 0x560f401aec00 session 0x560f3f47b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 352 ms_handle_reset con 0x560f3ea7cc00 session 0x560f3f478b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 352 heartbeat osd_stat(store_statfs(0x4f5a83000/0x0/0x4ffc00000, data 0x3aaec26/0x3c99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:03.284728+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:04.285071+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148520960 unmapped: 20078592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2523119 data_alloc: 234881024 data_used: 14364672
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.855843544s of 10.334692001s, submitted: 95
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:05.285216+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148537344 unmapped: 20062208 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:06.285628+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148537344 unmapped: 20062208 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 353 heartbeat osd_stat(store_statfs(0x4f5a7f000/0x0/0x4ffc00000, data 0x3ab06b1/0x3c9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:07.286132+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148537344 unmapped: 20062208 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:08.286332+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148537344 unmapped: 20062208 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40047800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 354 ms_handle_reset con 0x560f40047800 session 0x560f3fe1cf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:09.286477+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148545536 unmapped: 20054016 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2526515 data_alloc: 234881024 data_used: 14368768
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:10.286842+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148545536 unmapped: 20054016 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 354 ms_handle_reset con 0x560f461fac00 session 0x560f3fd57680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40189c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:11.287015+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148570112 unmapped: 20029440 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:12.287232+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 355 ms_handle_reset con 0x560f401b6000 session 0x560f420bda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148578304 unmapped: 20021248 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f5a7c000/0x0/0x4ffc00000, data 0x3ab71be/0x3ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 355 ms_handle_reset con 0x560f3f61b000 session 0x560f41e1cb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 356 ms_handle_reset con 0x560f401c7800 session 0x560f41acfa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:13.287388+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 356 ms_handle_reset con 0x560f40189c00 session 0x560f40152d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148594688 unmapped: 20004864 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:14.287697+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148602880 unmapped: 19996672 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2540222 data_alloc: 234881024 data_used: 14376960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:15.287918+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 19980288 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.967829704s of 11.025494576s, submitted: 46
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:16.288072+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 357 ms_handle_reset con 0x560f3f61b000 session 0x560f3fd4e780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 19972096 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:17.288317+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148635648 unmapped: 19963904 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 357 heartbeat osd_stat(store_statfs(0x4f5a72000/0x0/0x4ffc00000, data 0x3abc4db/0x3cab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 357 ms_handle_reset con 0x560f401c4c00 session 0x560f3fd572c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:18.288471+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 ms_handle_reset con 0x560f461fb800 session 0x560f3f4794a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148643840 unmapped: 19955712 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 ms_handle_reset con 0x560f3f61b000 session 0x560f3ec741e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:19.288715+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148643840 unmapped: 19955712 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40189c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 ms_handle_reset con 0x560f40189c00 session 0x560f4115f680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2543718 data_alloc: 234881024 data_used: 14389248
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:20.288879+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 19939328 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:21.289232+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148725760 unmapped: 19873792 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:22.289423+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f5a57000/0x0/0x4ffc00000, data 0x3b05066/0x3cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148750336 unmapped: 19849216 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 ms_handle_reset con 0x560f45f24400 session 0x560f3fd4f0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 ms_handle_reset con 0x560f3ea81c00 session 0x560f3fd4eb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f5a57000/0x0/0x4ffc00000, data 0x3b05066/0x3cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:23.289623+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148750336 unmapped: 19849216 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:24.289801+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 19832832 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2565094 data_alloc: 234881024 data_used: 14397440
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:25.289993+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 19824640 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:26.290149+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 19308544 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 359 ms_handle_reset con 0x560f41409c00 session 0x560f41e1c5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:27.290311+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 19308544 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:28.290483+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f5a52000/0x0/0x4ffc00000, data 0x3b06af5/0x3ccb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 19308544 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.758509636s of 12.907063484s, submitted: 69
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:29.290621+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f5a52000/0x0/0x4ffc00000, data 0x3b06af5/0x3ccb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149323776 unmapped: 19275776 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 ms_handle_reset con 0x560f41409c00 session 0x560f41e1c1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2568532 data_alloc: 234881024 data_used: 15056896
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:30.290812+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149323776 unmapped: 19275776 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 ms_handle_reset con 0x560f3ea81c00 session 0x560f41371680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 ms_handle_reset con 0x560f3f61b000 session 0x560f41370b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:31.290989+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 19259392 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:32.291103+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 19243008 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:33.291242+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 ms_handle_reset con 0x560f401b9800 session 0x560f4040f680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 19243008 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:34.291392+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 19243008 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f5a4e000/0x0/0x4ffc00000, data 0x3b08568/0x3ccf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2572319 data_alloc: 234881024 data_used: 15069184
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:35.291558+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 ms_handle_reset con 0x560f42489800 session 0x560f420bc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 19210240 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:36.291765+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b9800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 361 ms_handle_reset con 0x560f3f61b000 session 0x560f413d41e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 19193856 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 362 ms_handle_reset con 0x560f401b9800 session 0x560f41ace780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:37.292242+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 362 ms_handle_reset con 0x560f4384c400 session 0x560f4115ef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 362 ms_handle_reset con 0x560f3ea81c00 session 0x560f41e00b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148054016 unmapped: 20545536 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 362 heartbeat osd_stat(store_statfs(0x4f57d2000/0x0/0x4ffc00000, data 0x3d7f1b9/0x3f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:38.292438+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 20406272 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.937971115s of 10.034090996s, submitted: 34
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 362 ms_handle_reset con 0x560f401b6800 session 0x560f3fd57a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:39.292604+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 20406272 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2618448 data_alloc: 234881024 data_used: 16281600
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 363 ms_handle_reset con 0x560f401b6800 session 0x560f4115f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:40.292817+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 20406272 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:41.293005+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 363 ms_handle_reset con 0x560f3f61b000 session 0x560f410e3860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148193280 unmapped: 20406272 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114d000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 364 ms_handle_reset con 0x560f4114d000 session 0x560f3f6bab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 364 ms_handle_reset con 0x560f3ea81c00 session 0x560f41c1ab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:42.293183+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 364 heartbeat osd_stat(store_statfs(0x4f57cf000/0x0/0x4ffc00000, data 0x3d80d98/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148209664 unmapped: 20389888 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bcc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:43.293368+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 364 ms_handle_reset con 0x560f4384c000 session 0x560f410e0000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149544960 unmapped: 19054592 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 365 ms_handle_reset con 0x560f401bcc00 session 0x560f41df0b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 365 ms_handle_reset con 0x560f3ea81c00 session 0x560f41df0780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 365 heartbeat osd_stat(store_statfs(0x4f57ca000/0x0/0x4ffc00000, data 0x3d82e6c/0x3f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:44.293577+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 19931136 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2625106 data_alloc: 234881024 data_used: 16285696
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:45.293726+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 365 ms_handle_reset con 0x560f3f61b000 session 0x560f410e14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148676608 unmapped: 19922944 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114d000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 366 ms_handle_reset con 0x560f4114d000 session 0x560f3feacd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:46.293868+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149889024 unmapped: 18710528 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 367 ms_handle_reset con 0x560f401c6c00 session 0x560f3f478960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 367 ms_handle_reset con 0x560f401b6800 session 0x560f41c1b2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:47.294066+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149897216 unmapped: 18702336 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:48.294193+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149946368 unmapped: 18653184 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f57c2000/0x0/0x4ffc00000, data 0x3d87c7a/0x3f5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 heartbeat osd_stat(store_statfs(0x4f57c2000/0x0/0x4ffc00000, data 0x3d87c7a/0x3f5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.816073418s of 10.028948784s, submitted: 43
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:49.294369+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 ms_handle_reset con 0x560f3f61b000 session 0x560f3feb14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149962752 unmapped: 18636800 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bcc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2639555 data_alloc: 234881024 data_used: 16703488
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:50.294518+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 heartbeat osd_stat(store_statfs(0x4f57be000/0x0/0x4ffc00000, data 0x3d89a01/0x3f5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 ms_handle_reset con 0x560f401bcc00 session 0x560f4040fa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 ms_handle_reset con 0x560f3ea81c00 session 0x560f3febd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 18604032 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:51.294696+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149995520 unmapped: 18604032 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 ms_handle_reset con 0x560f41606000 session 0x560f402eb0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:52.294846+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152723456 unmapped: 15876096 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:53.295074+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152764416 unmapped: 15835136 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 ms_handle_reset con 0x560f3f61b000 session 0x560f410e0000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:54.295365+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bcc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 369 ms_handle_reset con 0x560f401b6800 session 0x560f3f6bab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150601728 unmapped: 17997824 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2665371 data_alloc: 234881024 data_used: 16707584
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:55.295537+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 370 ms_handle_reset con 0x560f401bcc00 session 0x560f41c1a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 370 ms_handle_reset con 0x560f3ea81c00 session 0x560f3f4eda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150609920 unmapped: 17989632 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f5586000/0x0/0x4ffc00000, data 0x3fbe161/0x4196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:56.295724+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150609920 unmapped: 17989632 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40047c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:57.295900+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f5586000/0x0/0x4ffc00000, data 0x3fbe161/0x4196000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 370 ms_handle_reset con 0x560f40047c00 session 0x560f41df1680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150609920 unmapped: 17989632 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 371 ms_handle_reset con 0x560f3ea81c00 session 0x560f4014bc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:58.296081+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150528000 unmapped: 18071552 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:59.296242+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.151638985s of 10.293497086s, submitted: 65
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 18055168 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2672709 data_alloc: 234881024 data_used: 16703488
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:00.296402+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150544384 unmapped: 18055168 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:01.296572+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 372 heartbeat osd_stat(store_statfs(0x4f5582000/0x0/0x4ffc00000, data 0x3fc1805/0x419b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 372 ms_handle_reset con 0x560f42489400 session 0x560f3fd4f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 18006016 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:02.296736+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42489000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 373 ms_handle_reset con 0x560f42489000 session 0x560f3feacb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150700032 unmapped: 17899520 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 373 ms_handle_reset con 0x560f3f61ac00 session 0x560f4040e1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 373 ms_handle_reset con 0x560f3ec7b800 session 0x560f3febd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:03.296864+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 373 ms_handle_reset con 0x560f3ea81c00 session 0x560f413714a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 374 ms_handle_reset con 0x560f42d1a800 session 0x560f3fd4eb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150716416 unmapped: 17883136 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:04.297342+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150716416 unmapped: 17883136 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2680349 data_alloc: 234881024 data_used: 16711680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:05.297477+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 374 handle_osd_map epochs [375,376], i have 374, src has [1,376]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150724608 unmapped: 17874944 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 376 ms_handle_reset con 0x560f3f61ac00 session 0x560f3fd57e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:06.297735+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 17825792 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 376 ms_handle_reset con 0x560f4384d800 session 0x560f3f58da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:07.298147+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 376 heartbeat osd_stat(store_statfs(0x4f57f0000/0x0/0x4ffc00000, data 0x3b244f9/0x3d01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 376 ms_handle_reset con 0x560f41225400 session 0x560f3ec74780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150798336 unmapped: 17801216 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:08.298470+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 ms_handle_reset con 0x560f42ce9c00 session 0x560f3f58c3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 ms_handle_reset con 0x560f3f618800 session 0x560f410e30e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 ms_handle_reset con 0x560f42d1a800 session 0x560f410e2d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150847488 unmapped: 17752064 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 ms_handle_reset con 0x560f3f61ac00 session 0x560f3f47ad20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4342cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 ms_handle_reset con 0x560f4384d800 session 0x560f4115e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 ms_handle_reset con 0x560f3f618800 session 0x560f41e1cd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:09.299055+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 ms_handle_reset con 0x560f4342cc00 session 0x560f3fd4e780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150888448 unmapped: 17711104 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61ac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.257085800s of 10.292577744s, submitted: 189
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 378 ms_handle_reset con 0x560f3f61ac00 session 0x560f41e01a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 378 ms_handle_reset con 0x560f42ce9c00 session 0x560f41462780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 378 ms_handle_reset con 0x560f3ea81c00 session 0x560f402ead20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2648461 data_alloc: 234881024 data_used: 16654336
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:10.299440+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150904832 unmapped: 17694720 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 378 ms_handle_reset con 0x560f3ea81c00 session 0x560f3fecc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:11.299882+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 378 heartbeat osd_stat(store_statfs(0x4f67bb000/0x0/0x4ffc00000, data 0x2c9cc93/0x2ead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 379 ms_handle_reset con 0x560f3f618800 session 0x560f4014ad20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 22265856 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 379 ms_handle_reset con 0x560f45f24400 session 0x560f41acf4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:12.300130+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 22265856 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43569000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:13.300288+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 379 ms_handle_reset con 0x560f401c2000 session 0x560f41e012c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 380 ms_handle_reset con 0x560f4140e800 session 0x560f410e1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 22249472 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 380 ms_handle_reset con 0x560f401b8c00 session 0x560f41ace960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 380 ms_handle_reset con 0x560f43569000 session 0x560f3fd4e780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 380 ms_handle_reset con 0x560f3ea81c00 session 0x560f3f4eda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:14.300496+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146391040 unmapped: 22208512 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2478080 data_alloc: 218103808 data_used: 8237056
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:15.300666+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146399232 unmapped: 22200320 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 382 ms_handle_reset con 0x560f3f618800 session 0x560f3ec74780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f645a000/0x0/0x4ffc00000, data 0x2ca208c/0x2eb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:16.301043+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146432000 unmapped: 22167552 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 383 ms_handle_reset con 0x560f401c2000 session 0x560f3fd574a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 383 ms_handle_reset con 0x560f3ea81c00 session 0x560f410e0b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:17.301218+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146432000 unmapped: 22167552 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 383 heartbeat osd_stat(store_statfs(0x4f6457000/0x0/0x4ffc00000, data 0x2ca53a9/0x2eb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:18.301483+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 383 ms_handle_reset con 0x560f401b8c00 session 0x560f41acfa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146432000 unmapped: 22167552 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43569000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 384 ms_handle_reset con 0x560f43569000 session 0x560f41e1cb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:19.301670+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 384 ms_handle_reset con 0x560f3f618800 session 0x560f420bcb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.403969765s of 10.002555847s, submitted: 169
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 25223168 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 385 ms_handle_reset con 0x560f4140dc00 session 0x560f3f58c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f8367000/0x0/0x4ffc00000, data 0xc50aed/0xe61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2232770 data_alloc: 218103808 data_used: 4063232
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:20.302050+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 25223168 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:21.302279+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 385 ms_handle_reset con 0x560f3ea81c00 session 0x560f40152b40
Nov 29 08:16:44 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 25223168 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 385 ms_handle_reset con 0x560f411b7c00 session 0x560f41380b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:22.302460+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 21962752 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:23.302689+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 21962752 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:24.302839+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f84ad000/0x0/0x4ffc00000, data 0xc50aed/0xe61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 385 handle_osd_map epochs [386,386], i have 386, src has [1,386]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 21962752 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 386 ms_handle_reset con 0x560f45f24800 session 0x560f41e1c3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:25.303113+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2238966 data_alloc: 218103808 data_used: 7819264
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 21962752 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:26.303342+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41407000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147988480 unmapped: 20611072 heap: 168599552 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 386 ms_handle_reset con 0x560f41407000 session 0x560f413803c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61f400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 386 ms_handle_reset con 0x560f3f61f400 session 0x560f4040d2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 386 ms_handle_reset con 0x560f3ea81c00 session 0x560f41ace1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 386 ms_handle_reset con 0x560f401b7000 session 0x560f41462960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:27.303544+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 386 ms_handle_reset con 0x560f411b7c00 session 0x560f3f2a8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41407000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 387 ms_handle_reset con 0x560f41407000 session 0x560f410e25a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147816448 unmapped: 24985600 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:28.303800+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 388 ms_handle_reset con 0x560f45f24800 session 0x560f3febcd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 388 ms_handle_reset con 0x560f3ea81c00 session 0x560f41df01e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 388 ms_handle_reset con 0x560f4140b000 session 0x560f410e2780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147832832 unmapped: 24969216 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:29.304056+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 24993792 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 389 ms_handle_reset con 0x560f401b7000 session 0x560f401525a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f7e3d000/0x0/0x4ffc00000, data 0x12b6d21/0x14d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:30.304329+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2309226 data_alloc: 218103808 data_used: 7831552
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 24993792 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:31.304632+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 389 heartbeat osd_stat(store_statfs(0x4f7e3d000/0x0/0x4ffc00000, data 0x12b6d21/0x14d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 24993792 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.419912338s of 12.149502754s, submitted: 110
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 389 ms_handle_reset con 0x560f42d1a000 session 0x560f3fead0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:32.304788+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140fc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147832832 unmapped: 24969216 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 390 ms_handle_reset con 0x560f4140fc00 session 0x560f41371a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:33.305088+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 391 ms_handle_reset con 0x560f41606800 session 0x560f41acf860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 24961024 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f7e35000/0x0/0x4ffc00000, data 0x12ba42b/0x14d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:34.305356+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 24961024 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:35.305550+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 391 ms_handle_reset con 0x560f3ea81c00 session 0x560f4115f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2319651 data_alloc: 218103808 data_used: 7843840
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 24961024 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:36.305750+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f7e35000/0x0/0x4ffc00000, data 0x12ba42b/0x14d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 391 handle_osd_map epochs [392,392], i have 392, src has [1,392]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 392 ms_handle_reset con 0x560f41409000 session 0x560f4040cd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 24952832 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f7e32000/0x0/0x4ffc00000, data 0x12bc01f/0x14db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:37.305888+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41407800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 147857408 unmapped: 24944640 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 392 ms_handle_reset con 0x560f41407800 session 0x560f401530e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:38.306040+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 148258816 unmapped: 24543232 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:39.306179+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426ba800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42db9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 392 ms_handle_reset con 0x560f42db9000 session 0x560f4040e3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149078016 unmapped: 23724032 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:40.306349+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2364479 data_alloc: 234881024 data_used: 13189120
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 23715840 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41407800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 393 ms_handle_reset con 0x560f3ea81c00 session 0x560f3f6bb680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:41.306551+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 393 ms_handle_reset con 0x560f41407800 session 0x560f3fecc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 23715840 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 394 ms_handle_reset con 0x560f426ba800 session 0x560f410e0960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:42.306743+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f7e2b000/0x0/0x4ffc00000, data 0x12bf7b3/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149094400 unmapped: 23707648 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:43.307093+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 394 heartbeat osd_stat(store_statfs(0x4f7e2b000/0x0/0x4ffc00000, data 0x12bf7b3/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.500937462s of 11.647067070s, submitted: 42
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 395 handle_osd_map epochs [395,395], i have 395, src has [1,395]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 395 ms_handle_reset con 0x560f3f5ec000 session 0x560f41acef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ba000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 23789568 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 395 ms_handle_reset con 0x560f401ba000 session 0x560f3f47a960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:44.307218+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149037056 unmapped: 23764992 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:45.307475+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2376120 data_alloc: 234881024 data_used: 13205504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149037056 unmapped: 23764992 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 397 ms_handle_reset con 0x560f3ea81c00 session 0x560f3fd574a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:46.307691+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 397 ms_handle_reset con 0x560f426bac00 session 0x560f410e30e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149037056 unmapped: 23764992 heap: 172802048 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 397 ms_handle_reset con 0x560f41225800 session 0x560f41cb8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:47.307917+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 23699456 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f6e24000/0x0/0x4ffc00000, data 0x22c499e/0x24ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:48.308087+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 398 ms_handle_reset con 0x560f401c5000 session 0x560f3f478b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41409c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 398 ms_handle_reset con 0x560f4114d800 session 0x560f41cb83c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 398 ms_handle_reset con 0x560f41409c00 session 0x560f41ace780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149143552 unmapped: 27860992 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:49.308258+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149544960 unmapped: 27459584 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:50.308425+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f63c0000/0x0/0x4ffc00000, data 0x2d2658b/0x2f4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2578982 data_alloc: 234881024 data_used: 13320192
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 400 ms_handle_reset con 0x560f3ea81c00 session 0x560f3fd565a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 400 ms_handle_reset con 0x560f4114d800 session 0x560f3f2a8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 27394048 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:51.308624+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150077440 unmapped: 26927104 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43568800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:52.308819+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 401 ms_handle_reset con 0x560f43568800 session 0x560f3ec745a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150134784 unmapped: 26869760 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f624f000/0x0/0x4ffc00000, data 0x2e9565b/0x30bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:53.308949+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.656598091s of 10.098872185s, submitted: 209
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 401 ms_handle_reset con 0x560f42ce8000 session 0x560f3fead860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150142976 unmapped: 26861568 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:54.309153+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150142976 unmapped: 26861568 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:55.309334+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2586084 data_alloc: 234881024 data_used: 13307904
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150142976 unmapped: 26861568 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:56.309497+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150142976 unmapped: 26861568 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:57.309620+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150298624 unmapped: 26705920 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:58.309818+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 26542080 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 402 ms_handle_reset con 0x560f42d1a000 session 0x560f4040f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 402 ms_handle_reset con 0x560f43533000 session 0x560f41e1c780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f6217000/0x0/0x4ffc00000, data 0x2ecccd3/0x30f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:59.310008+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150462464 unmapped: 26542080 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:00.310217+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2591026 data_alloc: 234881024 data_used: 13328384
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 ms_handle_reset con 0x560f42ce9000 session 0x560f41ace960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150470656 unmapped: 26533888 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:01.310408+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 ms_handle_reset con 0x560f41225c00 session 0x560f401270e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150470656 unmapped: 26533888 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:02.310675+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f6215000/0x0/0x4ffc00000, data 0x2ece74b/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150470656 unmapped: 26533888 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f6215000/0x0/0x4ffc00000, data 0x2ece74b/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:03.310848+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 150470656 unmapped: 26533888 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:04.311010+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 151339008 unmapped: 25665536 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:05.311121+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630984 data_alloc: 234881024 data_used: 15577088
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:06.311531+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:07.311744+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f6215000/0x0/0x4ffc00000, data 0x2ece74b/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:08.311900+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:09.312422+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:10.312904+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2630984 data_alloc: 234881024 data_used: 15577088
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:11.313417+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f6215000/0x0/0x4ffc00000, data 0x2ece74b/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:12.313627+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:13.314035+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:14.314351+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 24715264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:15.314561+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2631144 data_alloc: 234881024 data_used: 15581184
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.403467178s of 21.785074234s, submitted: 55
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 164110336 unmapped: 12894208 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:16.314686+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f5dc7000/0x0/0x4ffc00000, data 0x2ece74b/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 165634048 unmapped: 11370496 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:17.314839+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 17883136 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:18.315078+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159383552 unmapped: 17620992 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:19.315234+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159383552 unmapped: 17620992 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:20.315388+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 ms_handle_reset con 0x560f4114dc00 session 0x560f410e2b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2745196 data_alloc: 234881024 data_used: 15552512
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 19726336 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:21.315559+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526e000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 19726336 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:22.315735+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:23.315945+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:24.316157+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 ms_handle_reset con 0x560f41225000 session 0x560f3f4ecb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 ms_handle_reset con 0x560f3ea80800 session 0x560f41ecd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:25.316279+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2748144 data_alloc: 234881024 data_used: 15659008
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 ms_handle_reset con 0x560f4181a800 session 0x560f41462b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:26.316475+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526e000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:27.316754+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:28.317043+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526e000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:29.317260+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526e000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:30.317475+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526e000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2748144 data_alloc: 234881024 data_used: 15659008
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:31.317661+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157286400 unmapped: 19718144 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.047733307s of 16.735830307s, submitted: 103
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:32.317875+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526e000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [2])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157310976 unmapped: 19693568 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:33.318073+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:34.318245+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:35.318414+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2761244 data_alloc: 234881024 data_used: 16240640
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:36.318625+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:37.318793+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:38.319054+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:39.319234+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:40.319373+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2761404 data_alloc: 234881024 data_used: 16244736
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:41.319609+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:42.319797+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:43.319951+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:44.320134+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:45.320310+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2761404 data_alloc: 234881024 data_used: 16244736
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:46.320460+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 19595264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:47.320623+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f526c000/0x0/0x4ffc00000, data 0x3e7774b/0x40a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.859582901s of 15.902038574s, submitted: 6
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157417472 unmapped: 19587072 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:48.320731+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157417472 unmapped: 19587072 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f5266000/0x0/0x4ffc00000, data 0x3e7c74b/0x40a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:49.320881+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 ms_handle_reset con 0x560f401c4c00 session 0x560f3feac3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157720576 unmapped: 19283968 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:50.321082+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41ecb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2764396 data_alloc: 234881024 data_used: 16240640
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157720576 unmapped: 19283968 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:51.321259+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157720576 unmapped: 19283968 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:52.321426+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157720576 unmapped: 19283968 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:53.321607+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f5242000/0x0/0x4ffc00000, data 0x3ea074b/0x40c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 19259392 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:54.321753+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 19259392 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:55.321913+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2765676 data_alloc: 234881024 data_used: 16314368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 19259392 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:56.322074+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 19259392 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:57.322274+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f5242000/0x0/0x4ffc00000, data 0x3ea074b/0x40c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157777920 unmapped: 19226624 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:58.322418+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157777920 unmapped: 19226624 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:59.322619+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.543364525s of 11.602374077s, submitted: 5
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157810688 unmapped: 19193856 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 403 handle_osd_map epochs [404,404], i have 404, src has [1,404]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:00.322757+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f41405000 session 0x560f3fecd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2769946 data_alloc: 234881024 data_used: 16437248
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f5245000/0x0/0x4ffc00000, data 0x3ea074b/0x40c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f41405000 session 0x560f41c1ab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f3ea80800 session 0x560f4040cb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157818880 unmapped: 19185664 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f401c4c00 session 0x560f41cb9680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41225000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f41225000 session 0x560f41df0b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:01.322914+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f4181a800 session 0x560f402eb0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f952000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f3f952000 session 0x560f41cb90e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f401c5c00 session 0x560f420bcb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157884416 unmapped: 19120128 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:02.323026+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157884416 unmapped: 19120128 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:03.323175+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158236672 unmapped: 18767872 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:04.323292+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 18579456 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:05.323358+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f502f000/0x0/0x4ffc00000, data 0x40b52c8/0x42df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808193 data_alloc: 234881024 data_used: 17661952
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f502f000/0x0/0x4ffc00000, data 0x40b52c8/0x42df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41ecb800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f41ecb800 session 0x560f41c1ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 18579456 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:06.323500+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f461fb000 session 0x560f3f4783c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158425088 unmapped: 18579456 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:07.323640+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f40048c00 session 0x560f3f47b2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f40048c00 session 0x560f40152780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158441472 unmapped: 18563072 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:08.323791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f952000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158482432 unmapped: 18522112 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:09.323920+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 17719296 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f502e000/0x0/0x4ffc00000, data 0x40b52d8/0x42e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:10.324039+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2825379 data_alloc: 234881024 data_used: 19820544
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160653312 unmapped: 16351232 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:11.324199+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f618c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f3f618c00 session 0x560f41c1b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160653312 unmapped: 16351232 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:12.324349+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160653312 unmapped: 16351232 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:13.324509+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f502e000/0x0/0x4ffc00000, data 0x40b52d8/0x42e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426ba800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f426ba800 session 0x560f3f2a85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160686080 unmapped: 16318464 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:14.324666+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.640279770s of 14.907845497s, submitted: 38
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160718848 unmapped: 16285696 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:15.325175+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2824851 data_alloc: 234881024 data_used: 19820544
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5edc00
Nov 29 08:16:44 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f3f5edc00 session 0x560f41ecda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bcc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f502e000/0x0/0x4ffc00000, data 0x40b52d8/0x42e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160792576 unmapped: 16211968 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f41ecb400 session 0x560f40153c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:16.325302+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f401ae000 session 0x560f4115e3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f401bcc00 session 0x560f3feb14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160890880 unmapped: 16113664 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:17.325429+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f401bbc00 session 0x560f413d5860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f502e000/0x0/0x4ffc00000, data 0x40b52d8/0x42e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160890880 unmapped: 16113664 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:18.325550+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f5052000/0x0/0x4ffc00000, data 0x40912d8/0x42bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160923648 unmapped: 16080896 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:19.325846+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160948224 unmapped: 16056320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:20.325989+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2823142 data_alloc: 234881024 data_used: 19709952
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 163323904 unmapped: 13680640 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:21.326144+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 164118528 unmapped: 12886016 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:22.326293+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f4adb000/0x0/0x4ffc00000, data 0x46022d8/0x482d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 11902976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:23.326405+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 11902976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:24.326640+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f42ce9000 session 0x560f420bd4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 165101568 unmapped: 11902976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:25.326803+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.744164467s of 10.714839935s, submitted: 70
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f4140d800 session 0x560f41e1c3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2558578 data_alloc: 234881024 data_used: 13946880
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f4abc000/0x0/0x4ffc00000, data 0x46192d8/0x4844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:26.327034+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159457280 unmapped: 17547264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f7250000/0x0/0x4ffc00000, data 0x1e942c8/0x20be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:27.327293+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159457280 unmapped: 17547264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:28.327507+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159457280 unmapped: 17547264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:29.327679+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159457280 unmapped: 17547264 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f7250000/0x0/0x4ffc00000, data 0x1e942c8/0x20be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:30.327945+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159465472 unmapped: 17539072 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f7250000/0x0/0x4ffc00000, data 0x1e942c8/0x20be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 ms_handle_reset con 0x560f401c4c00 session 0x560f3f58c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2562508 data_alloc: 234881024 data_used: 13955072
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:31.328205+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159571968 unmapped: 17432576 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:32.328354+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159596544 unmapped: 17408000 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:33.328548+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159596544 unmapped: 17408000 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f724e000/0x0/0x4ffc00000, data 0x1e9433a/0x20c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:34.328715+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159596544 unmapped: 17408000 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:35.328847+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159596544 unmapped: 17408000 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2567903 data_alloc: 234881024 data_used: 15454208
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:36.329021+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159596544 unmapped: 17408000 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.117895126s of 11.173646927s, submitted: 16
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:37.329361+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160849920 unmapped: 16154624 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f717b000/0x0/0x4ffc00000, data 0x1f6733a/0x2193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:38.329490+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160849920 unmapped: 16154624 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f717b000/0x0/0x4ffc00000, data 0x1f6733a/0x2193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:39.329665+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160849920 unmapped: 16154624 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:40.329858+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160849920 unmapped: 16154624 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2580155 data_alloc: 234881024 data_used: 15601664
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:41.330048+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 16982016 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f717b000/0x0/0x4ffc00000, data 0x1f6733a/0x2193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:42.330266+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 16982016 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x1f6933a/0x2195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:43.330589+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 16982016 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x1f6933a/0x2195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:44.330785+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160030720 unmapped: 16973824 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 405 ms_handle_reset con 0x560f40048000 session 0x560f410e14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:45.331103+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160030720 unmapped: 16973824 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2587394 data_alloc: 234881024 data_used: 15740928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f716a000/0x0/0x4ffc00000, data 0x1f75eb7/0x21a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 405 ms_handle_reset con 0x560f3f61c800 session 0x560f3fecd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 405 ms_handle_reset con 0x560f41405000 session 0x560f3febd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:46.331238+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160047104 unmapped: 16957440 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 405 ms_handle_reset con 0x560f3ea80c00 session 0x560f420bd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.521946907s of 10.176931381s, submitted: 56
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 406 ms_handle_reset con 0x560f40046800 session 0x560f3f4ecb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:47.331400+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160096256 unmapped: 16908288 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:48.331567+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160129024 unmapped: 16875520 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 407 ms_handle_reset con 0x560f41607400 session 0x560f401270e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 407 ms_handle_reset con 0x560f461fb400 session 0x560f41ace960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:49.331708+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160137216 unmapped: 16867328 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 407 ms_handle_reset con 0x560f426bb400 session 0x560f3f2a8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 407 ms_handle_reset con 0x560f401b8800 session 0x560f41c1b680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:50.332066+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160210944 unmapped: 16793600 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 407 ms_handle_reset con 0x560f401b8800 session 0x560f402ead20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f721c000/0x0/0x4ffc00000, data 0x1ebd613/0x20ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2590687 data_alloc: 234881024 data_used: 15659008
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f401c7000 session 0x560f3f6bab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:51.332318+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 15761408 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61e000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f3f61e000 session 0x560f3fecc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f40046800 session 0x560f41c1b2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:52.332475+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161259520 unmapped: 15745024 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:53.332645+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161259520 unmapped: 15745024 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f4114dc00 session 0x560f4014b4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f41225c00 session 0x560f41cb92c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:54.332814+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161275904 unmapped: 15728640 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f721b000/0x0/0x4ffc00000, data 0x1ec0246/0x20f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f3ea7c000 session 0x560f3fd4e780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:55.333027+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161292288 unmapped: 15712256 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2584873 data_alloc: 234881024 data_used: 15548416
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:56.333235+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161292288 unmapped: 15712256 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:57.333388+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161292288 unmapped: 15712256 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61a400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.073731422s of 11.596386909s, submitted: 64
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:58.333509+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161308672 unmapped: 15695872 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:59.333699+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161497088 unmapped: 15507456 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40189c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f40189c00 session 0x560f3fecd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bd000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 ms_handle_reset con 0x560f3f61a400 session 0x560f41381860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:00.333826+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f401bd000 session 0x560f41c1ab40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156811264 unmapped: 20193280 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7233000/0x0/0x4ffc00000, data 0x1ea8246/0x20db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2455216 data_alloc: 218103808 data_used: 7749632
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:01.334048+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156811264 unmapped: 20193280 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f43533000 session 0x560f41ace000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:02.334271+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156811264 unmapped: 20193280 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:03.334513+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156811264 unmapped: 20193280 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:04.334791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:05.335254+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x1445ca9/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2456800 data_alloc: 218103808 data_used: 7733248
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:06.335422+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:07.335565+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x1445ca9/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:08.335750+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x1445ca9/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:09.335913+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x1445ca9/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:10.336140+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x1445ca9/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2456960 data_alloc: 218103808 data_used: 7737344
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:11.336344+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x1445ca9/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f3ea81c00 session 0x560f3f58dc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:12.336497+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:13.336693+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 20152320 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41404000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.153271675s of 15.672075272s, submitted: 62
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f4140c800 session 0x560f41ace780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f41404000 session 0x560f40153a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:14.336859+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156729344 unmapped: 20275200 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:15.337054+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156729344 unmapped: 20275200 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2458426 data_alloc: 218103808 data_used: 7786496
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c92000/0x0/0x4ffc00000, data 0x1445d1b/0x167c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:16.337179+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156729344 unmapped: 20275200 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:17.337353+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156729344 unmapped: 20275200 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:18.337479+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156729344 unmapped: 20275200 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:19.337643+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157827072 unmapped: 19177472 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:20.337782+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157827072 unmapped: 19177472 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f461fb000 session 0x560f41acf2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c08000/0x0/0x4ffc00000, data 0x14ced1b/0x1705000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2467645 data_alloc: 218103808 data_used: 7835648
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:21.337914+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157827072 unmapped: 19177472 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f41607c00 session 0x560f3feac3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 ms_handle_reset con 0x560f3ea7cc00 session 0x560f3ec74780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:22.338128+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157253632 unmapped: 19750912 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:23.338338+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157253632 unmapped: 19750912 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7c93000/0x0/0x4ffc00000, data 0x1445ca9/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c2400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.917365074s of 10.004048347s, submitted: 23
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:24.338512+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157270016 unmapped: 19734528 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 410 ms_handle_reset con 0x560f401c2400 session 0x560f41ecc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:25.338653+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 410 ms_handle_reset con 0x560f41607400 session 0x560f4040e1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 19726336 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2463844 data_alloc: 218103808 data_used: 7839744
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41408c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 410 ms_handle_reset con 0x560f41408c00 session 0x560f3feb1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:26.338792+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157294592 unmapped: 19709952 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f7c9b000/0x0/0x4ffc00000, data 0x143b7e7/0x1671000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 411 ms_handle_reset con 0x560f3ea7cc00 session 0x560f4014b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 411 ms_handle_reset con 0x560f42d1b000 session 0x560f4115f2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 411 ms_handle_reset con 0x560f43533800 session 0x560f3ff21c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 411 ms_handle_reset con 0x560f3ff2dc00 session 0x560f3fd561e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:27.338938+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 411 ms_handle_reset con 0x560f401ae000 session 0x560f41c1be00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157335552 unmapped: 19668992 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:28.339115+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 411 ms_handle_reset con 0x560f3ff2dc00 session 0x560f3feacd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157335552 unmapped: 19668992 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 412 ms_handle_reset con 0x560f3ea7cc00 session 0x560f3fecda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 412 ms_handle_reset con 0x560f42d1b000 session 0x560f413703c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:29.339307+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157302784 unmapped: 19701760 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f952c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 412 ms_handle_reset con 0x560f3f952c00 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:30.339494+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 19677184 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f7ca5000/0x0/0x4ffc00000, data 0x1432f04/0x1669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 412 handle_osd_map epochs [413,413], i have 413, src has [1,413]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 ms_handle_reset con 0x560f401c7000 session 0x560f3febdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 ms_handle_reset con 0x560f43533800 session 0x560f41463e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2467814 data_alloc: 218103808 data_used: 7745536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 ms_handle_reset con 0x560f3ea7cc00 session 0x560f3f58c780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f952c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:31.339668+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 ms_handle_reset con 0x560f3f952c00 session 0x560f3f58d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157319168 unmapped: 19685376 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:32.339930+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157319168 unmapped: 19685376 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 ms_handle_reset con 0x560f3f952000 session 0x560f3fead0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 ms_handle_reset con 0x560f401c5c00 session 0x560f3f5641e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f7cba000/0x0/0x4ffc00000, data 0x141cc09/0x1652000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:33.340087+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 ms_handle_reset con 0x560f3ea7cc00 session 0x560f420bc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:34.340314+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:35.340553+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f8459000/0x0/0x4ffc00000, data 0xc80bf9/0xeb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.364267349s of 11.876122475s, submitted: 151
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2378339 data_alloc: 218103808 data_used: 4628480
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:36.340762+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:37.340911+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:38.341104+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 414 ms_handle_reset con 0x560f4140e800 session 0x560f4014ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181bc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a4000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 415 ms_handle_reset con 0x560f426a4000 session 0x560f4014b4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:39.341314+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 416 ms_handle_reset con 0x560f4181bc00 session 0x560f402ea1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:40.341496+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f844d000/0x0/0x4ffc00000, data 0xc85e60/0xebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2390421 data_alloc: 218103808 data_used: 4628480
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 417 ms_handle_reset con 0x560f426bbc00 session 0x560f3fd4e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:41.341694+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 417 ms_handle_reset con 0x560f3ea7cc00 session 0x560f41cb85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 417 ms_handle_reset con 0x560f4140e800 session 0x560f41e014a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 21127168 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 417 ms_handle_reset con 0x560f426bb400 session 0x560f41ecd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:42.341880+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 ms_handle_reset con 0x560f4181a800 session 0x560f41eccb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:43.342069+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 ms_handle_reset con 0x560f41405000 session 0x560f41c1a960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 ms_handle_reset con 0x560f3ea7cc00 session 0x560f3fd4f680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:44.342264+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:45.342464+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2394919 data_alloc: 218103808 data_used: 4636672
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:46.342598+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f8446000/0x0/0x4ffc00000, data 0xc8948a/0xec6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:47.342738+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f8446000/0x0/0x4ffc00000, data 0xc8948a/0xec6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4384c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 ms_handle_reset con 0x560f4384c800 session 0x560f3feccb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42db8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.276943207s of 12.390696526s, submitted: 58
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:48.342943+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 ms_handle_reset con 0x560f42db8c00 session 0x560f41cb8780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 ms_handle_reset con 0x560f3f61a800 session 0x560f41463a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:49.343149+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 ms_handle_reset con 0x560f40046800 session 0x560f3f58c3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:50.343277+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2391539 data_alloc: 218103808 data_used: 4636672
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 418 handle_osd_map epochs [419,420], i have 418, src has [1,420]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:51.343424+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 420 ms_handle_reset con 0x560f3ea7cc00 session 0x560f401525a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f844a000/0x0/0x4ffc00000, data 0xc8946a/0xec4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:52.343579+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:53.343772+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:54.344036+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:55.344218+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f8443000/0x0/0x4ffc00000, data 0xc8ca9e/0xeca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2399215 data_alloc: 218103808 data_used: 4644864
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:56.344472+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:57.344682+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:58.344885+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:59.345057+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:00.345261+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f8443000/0x0/0x4ffc00000, data 0xc8ca9e/0xeca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.167138100s of 12.212039948s, submitted: 23
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2401661 data_alloc: 218103808 data_used: 4644864
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:01.345488+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:02.345639+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:03.345821+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:04.346013+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f8440000/0x0/0x4ffc00000, data 0xc8e501/0xecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:05.346225+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2401661 data_alloc: 218103808 data_used: 4644864
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:06.346424+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f8440000/0x0/0x4ffc00000, data 0xc8e501/0xecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:07.346654+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 421 ms_handle_reset con 0x560f4140e400 session 0x560f41ecd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:08.346872+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:09.347105+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155885568 unmapped: 21118976 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 421 ms_handle_reset con 0x560f40046800 session 0x560f3febd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:10.347314+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41641000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 422 ms_handle_reset con 0x560f401b8c00 session 0x560f410e14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155893760 unmapped: 21110784 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2406767 data_alloc: 218103808 data_used: 4657152
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 422 ms_handle_reset con 0x560f41641000 session 0x560f4040e3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:11.347500+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 422 ms_handle_reset con 0x560f4114d800 session 0x560f410e1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f843c000/0x0/0x4ffc00000, data 0xc9008e/0xed1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 422 handle_osd_map epochs [423,423], i have 423, src has [1,423]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.073146820s of 11.101778984s, submitted: 15
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155901952 unmapped: 21102592 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:12.347641+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155844608 unmapped: 21159936 heap: 177004544 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41404000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 423 ms_handle_reset con 0x560f4181b000 session 0x560f4014b4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 423 ms_handle_reset con 0x560f4140e800 session 0x560f401530e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 423 ms_handle_reset con 0x560f40048000 session 0x560f41ecc960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:13.347781+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 21159936 heap: 189595648 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:14.347939+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155893760 unmapped: 67297280 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:15.348133+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 164331520 unmapped: 58859520 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3113874 data_alloc: 218103808 data_used: 4661248
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:16.348305+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 164347904 unmapped: 58843136 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f2039000/0x0/0x4ffc00000, data 0x7091c6e/0x72d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f3f61f000 session 0x560f4115fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:17.348499+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155967488 unmapped: 67223552 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:18.348673+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 155983872 unmapped: 67207168 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 heartbeat osd_stat(store_statfs(0x4edc34000/0x0/0x4ffc00000, data 0xb49385e/0xb6da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f40048000 session 0x560f410e2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:19.348810+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 61947904 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:20.349860+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 165486592 unmapped: 57704448 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050558 data_alloc: 218103808 data_used: 4673536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:21.350109+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.657065392s of 10.144426346s, submitted: 94
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 169762816 unmapped: 53428224 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 heartbeat osd_stat(store_statfs(0x4ea034000/0x0/0x4ffc00000, data 0xf09385e/0xf2da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:22.350444+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157294592 unmapped: 65896448 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:23.351489+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157458432 unmapped: 65732608 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f401b7400 session 0x560f420bc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f41404000 session 0x560f420bcb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:24.351641+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157507584 unmapped: 65683456 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:25.351857+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4342c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f45f24c00 session 0x560f410e34a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157515776 unmapped: 65675264 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f3ff2c800 session 0x560f3fd4f4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f3ff2c800 session 0x560f420bd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f40048000 session 0x560f41ecc3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 heartbeat osd_stat(store_statfs(0x4e6034000/0x0/0x4ffc00000, data 0x1309385e/0x132da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556092 data_alloc: 218103808 data_used: 4673536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:26.352021+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f401b7400 session 0x560f3fd56780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41404000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f41404000 session 0x560f3fd565a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 ms_handle_reset con 0x560f45f24c00 session 0x560f410e2b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 425 ms_handle_reset con 0x560f45f24c00 session 0x560f3f6ba000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a4400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 425 ms_handle_reset con 0x560f426a4400 session 0x560f3ff21a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 66658304 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 425 ms_handle_reset con 0x560f4342c800 session 0x560f3fead0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:27.352171+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 66658304 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:28.352461+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 425 ms_handle_reset con 0x560f4114d400 session 0x560f3febdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 66650112 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:29.352687+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bbc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 425 ms_handle_reset con 0x560f401bbc00 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 66650112 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 425 ms_handle_reset con 0x560f4114d400 session 0x560f3fecda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a4400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:30.352816+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4342c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 426 ms_handle_reset con 0x560f4342c800 session 0x560f3fe1c000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 426 ms_handle_reset con 0x560f45f24c00 session 0x560f41e010e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f952000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 426 ms_handle_reset con 0x560f3f952000 session 0x560f410e3c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156590080 unmapped: 66600960 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 427 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559455 data_alloc: 218103808 data_used: 4698112
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea81c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:31.353051+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 427 ms_handle_reset con 0x560f4181a000 session 0x560f4115f2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 427 heartbeat osd_stat(store_statfs(0x4e53dd000/0x0/0x4ffc00000, data 0x138d3f27/0x13b20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 156631040 unmapped: 66560000 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.188851357s of 10.513000488s, submitted: 80
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 428 ms_handle_reset con 0x560f3ea81c00 session 0x560f410e05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:32.353153+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 428 ms_handle_reset con 0x560f426a4400 session 0x560f3feacd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 157327360 unmapped: 65863680 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:33.353474+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 428 heartbeat osd_stat(store_statfs(0x4e53d5000/0x0/0x4ffc00000, data 0x138d7621/0x13b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158359552 unmapped: 64831488 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4342c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 428 ms_handle_reset con 0x560f4342c800 session 0x560f41c1a1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:34.353594+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 429 ms_handle_reset con 0x560f45f24c00 session 0x560f4014a1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158244864 unmapped: 64946176 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 429 ms_handle_reset con 0x560f401b6400 session 0x560f41c1bc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:35.353782+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 ms_handle_reset con 0x560f401b7400 session 0x560f3fd565a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 64937984 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4620658 data_alloc: 234881024 data_used: 12115968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:36.354037+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 64937984 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 ms_handle_reset con 0x560f4140f000 session 0x560f41c1b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:37.354166+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 ms_handle_reset con 0x560f3ea80000 session 0x560f413d4f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bc800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 163823616 unmapped: 59367424 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:38.354302+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159711232 unmapped: 63479808 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:39.354422+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 heartbeat osd_stat(store_statfs(0x4e37d2000/0x0/0x4ffc00000, data 0x154dad1b/0x1572c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,1,3])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 159883264 unmapped: 63307776 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:40.354567+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 160260096 unmapped: 62930944 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:41.354759+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5125737 data_alloc: 234881024 data_used: 12136448
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 164642816 unmapped: 58548224 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.410944462s of 10.017019272s, submitted: 169
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:42.354885+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 161693696 unmapped: 61497344 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:43.355087+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 heartbeat osd_stat(store_statfs(0x4de3d2000/0x0/0x4ffc00000, data 0x1a8dad1b/0x1ab2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,1,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 167616512 unmapped: 55574528 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:44.355325+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 168738816 unmapped: 54452224 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:45.355438+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 167862272 unmapped: 55328768 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:46.355651+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5852025 data_alloc: 234881024 data_used: 12574720
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 heartbeat osd_stat(store_statfs(0x4da5c2000/0x0/0x4ffc00000, data 0x1e6e9d1b/0x1e93b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 173752320 unmapped: 49438720 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 heartbeat osd_stat(store_statfs(0x4d819a000/0x0/0x4ffc00000, data 0x20b0fd1b/0x20d61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,2])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:47.355942+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 169852928 unmapped: 53338112 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:48.356203+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 52961280 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 ms_handle_reset con 0x560f401bc800 session 0x560f41acf860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 ms_handle_reset con 0x560f411b7400 session 0x560f413810e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:49.356435+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea80000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 431 ms_handle_reset con 0x560f3ea80000 session 0x560f41ecd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170254336 unmapped: 52936704 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:50.356692+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170254336 unmapped: 52936704 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 431 heartbeat osd_stat(store_statfs(0x4d33ef000/0x0/0x4ffc00000, data 0x2471b8ec/0x2496e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:51.356944+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6530694 data_alloc: 234881024 data_used: 12410880
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 52928512 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 432 ms_handle_reset con 0x560f401b7400 session 0x560f402ead20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:52.357202+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.501482487s of 10.588182449s, submitted: 308
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 52928512 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:53.357321+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 52928512 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:54.357509+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 52928512 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401afc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 432 ms_handle_reset con 0x560f401afc00 session 0x560f3feadc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:55.357647+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 432 heartbeat osd_stat(store_statfs(0x4d33e8000/0x0/0x4ffc00000, data 0x247204bd/0x24974000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40188c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 432 ms_handle_reset con 0x560f40188c00 session 0x560f3f47ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 49176576 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:56.357791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6536748 data_alloc: 234881024 data_used: 16277504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174014464 unmapped: 49176576 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:57.357996+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 49152000 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:58.358359+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174039040 unmapped: 49152000 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:59.358557+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181b400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 432 ms_handle_reset con 0x560f4181b400 session 0x560f4115f0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175390720 unmapped: 47800320 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:00.358715+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175398912 unmapped: 47792128 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 433 heartbeat osd_stat(store_statfs(0x4d28b4000/0x0/0x4ffc00000, data 0x25253f20/0x254a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:01.358947+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6629546 data_alloc: 234881024 data_used: 16285696
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 433 heartbeat osd_stat(store_statfs(0x4d28b4000/0x0/0x4ffc00000, data 0x25253f20/0x254a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 433 ms_handle_reset con 0x560f41640800 session 0x560f420bc5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175407104 unmapped: 47783936 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 433 heartbeat osd_stat(store_statfs(0x4d28b4000/0x0/0x4ffc00000, data 0x25253f20/0x254a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:02.359248+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ba400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.826986313s of 10.061385155s, submitted: 36
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175276032 unmapped: 47915008 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 434 ms_handle_reset con 0x560f41606800 session 0x560f4115fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:03.359555+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 434 ms_handle_reset con 0x560f401ba400 session 0x560f41ecc960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175333376 unmapped: 47857664 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:04.359716+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175333376 unmapped: 47857664 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:05.359882+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175341568 unmapped: 47849472 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:06.360044+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6707150 data_alloc: 234881024 data_used: 16285696
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175341568 unmapped: 47849472 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:07.360188+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f1f000/0x0/0x4ffc00000, data 0x25be7a9d/0x25e3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175341568 unmapped: 47849472 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:08.361137+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175341568 unmapped: 47849472 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5edc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f1c000/0x0/0x4ffc00000, data 0x25be961a/0x25e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:09.361513+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175276032 unmapped: 47915008 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:10.361669+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f42d1b000 session 0x560f3fd57e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a4c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f426a4c00 session 0x560f3ff212c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ba400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401ba400 session 0x560f4040e1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f1c000/0x0/0x4ffc00000, data 0x25be961a/0x25e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175276032 unmapped: 47915008 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f41606800 session 0x560f420bda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:11.361823+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f41640800 session 0x560f4115f680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6768327 data_alloc: 234881024 data_used: 16285696
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42d1b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f42d1b000 session 0x560f40152b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41404400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f41404400 session 0x560f41e01860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ba400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401ba400 session 0x560f41e01a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401b6400 session 0x560f41e00d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175374336 unmapped: 47816704 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:12.362042+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d179a000/0x0/0x4ffc00000, data 0x2636b62a/0x265c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175374336 unmapped: 47816704 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:13.362273+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.057924271s of 10.764103889s, submitted: 27
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f3f5edc00 session 0x560f41cb85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f3ff2dc00 session 0x560f3fe1cf00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175374336 unmapped: 47816704 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:14.362414+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f4181a800 session 0x560f41e1cd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f3ea80800 session 0x560f3fd4e000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f43533c00 session 0x560f41e1c960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5edc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f3f5edc00 session 0x560f41381860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 175374336 unmapped: 47816704 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f3ff2dc00 session 0x560f3febd0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:15.362975+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ba400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401b6400 session 0x560f41df1c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170991616 unmapped: 52199424 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:16.363121+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6573769 data_alloc: 218103808 data_used: 8617984
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d25cb000/0x0/0x4ffc00000, data 0x250ec607/0x25344000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170983424 unmapped: 52207616 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:17.363280+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 170983424 unmapped: 52207616 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:18.363446+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172007424 unmapped: 51183616 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:19.363613+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ba800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401ba800 session 0x560f3fd4ef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172277760 unmapped: 50913280 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f3ec7b000 session 0x560f4115f680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:20.363791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172285952 unmapped: 50905088 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401bac00 session 0x560f402ead20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41408400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:21.364139+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f41408400 session 0x560f413810e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6629949 data_alloc: 234881024 data_used: 16490496
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d2a1a000/0x0/0x4ffc00000, data 0x250ec607/0x25344000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 50896896 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:22.364263+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d2a1a000/0x0/0x4ffc00000, data 0x250ec607/0x25344000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 50896896 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:23.364438+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 50896896 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:24.364651+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 50896896 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:25.364834+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d2a1a000/0x0/0x4ffc00000, data 0x250ec607/0x25344000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 50896896 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:26.365158+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6629949 data_alloc: 234881024 data_used: 16490496
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 172294144 unmapped: 50896896 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:27.365590+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.139585495s of 13.817829132s, submitted: 72
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f41640400 session 0x560f41c1bc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f41640400 session 0x560f3feacd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 171728896 unmapped: 51462144 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:28.365849+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d2a1a000/0x0/0x4ffc00000, data 0x250ec607/0x25344000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 171728896 unmapped: 51462144 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:29.366013+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 44023808 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:30.366249+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 179585024 unmapped: 43606016 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:31.366623+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6762621 data_alloc: 234881024 data_used: 17973248
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d2042000/0x0/0x4ffc00000, data 0x25e2a607/0x25d1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 44408832 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:32.367305+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f9f000/0x0/0x4ffc00000, data 0x25ece607/0x25dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 44408832 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:33.367613+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 44408832 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:34.367761+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 44408832 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:35.367898+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f9f000/0x0/0x4ffc00000, data 0x25ece607/0x25dbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 44408832 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:36.368016+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6763101 data_alloc: 234881024 data_used: 17985536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178921472 unmapped: 44269568 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:37.368171+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178937856 unmapped: 44253184 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:38.368396+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178937856 unmapped: 44253184 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:39.368645+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178937856 unmapped: 44253184 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:40.368798+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178937856 unmapped: 44253184 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:41.369014+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6763293 data_alloc: 234881024 data_used: 17985536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f7a000/0x0/0x4ffc00000, data 0x25ef3607/0x25de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 44244992 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:42.369135+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 44244992 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:43.369434+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 44244992 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:44.369665+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 44244992 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f7a000/0x0/0x4ffc00000, data 0x25ef3607/0x25de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:45.369860+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4d1f7a000/0x0/0x4ffc00000, data 0x25ef3607/0x25de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178954240 unmapped: 44236800 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:46.370083+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6763293 data_alloc: 234881024 data_used: 17985536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178954240 unmapped: 44236800 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401b6000 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:47.370236+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426bac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.301284790s of 19.904682159s, submitted: 113
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 34308096 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:48.370386+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f426bac00 session 0x560f3febdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43569400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f43569400 session 0x560f3f4ec960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180674560 unmapped: 42516480 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:49.370546+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180674560 unmapped: 42516480 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:50.370719+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4cf776000/0x0/0x4ffc00000, data 0x286f6669/0x285e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180674560 unmapped: 42516480 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:51.370914+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7046935 data_alloc: 234881024 data_used: 17985536
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401ba400 session 0x560f41e014a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f4140f000 session 0x560f41e1da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f3ea7d400 session 0x560f3f4eda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4cf776000/0x0/0x4ffc00000, data 0x286f6669/0x285e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180674560 unmapped: 42516480 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:52.371130+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f401b6000 session 0x560f420bd2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 49053696 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:53.371271+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 49053696 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:54.371433+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 49053696 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:55.371619+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4cfefa000/0x0/0x4ffc00000, data 0x27f74649/0x27e64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 49053696 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:56.371845+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6934598 data_alloc: 234881024 data_used: 10113024
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 heartbeat osd_stat(store_statfs(0x4cfefa000/0x0/0x4ffc00000, data 0x27f74649/0x27e64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 174137344 unmapped: 49053696 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:57.372246+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40189c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 ms_handle_reset con 0x560f40189c00 session 0x560f3fd4f860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.380769730s of 10.480396271s, submitted: 92
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 436 heartbeat osd_stat(store_statfs(0x4cfefa000/0x0/0x4ffc00000, data 0x27f74649/0x27e64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 436 ms_handle_reset con 0x560f40048800 session 0x560f41c1b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178462720 unmapped: 44728320 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:58.372556+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 436 ms_handle_reset con 0x560f40048400 session 0x560f3fd4fa40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 436 heartbeat osd_stat(store_statfs(0x4ce081000/0x0/0x4ffc00000, data 0x291d71c6/0x28b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 436 handle_osd_map epochs [437,437], i have 437, src has [1,437]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178520064 unmapped: 44670976 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:59.373201+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 437 ms_handle_reset con 0x560f3ea7d400 session 0x560f3f58c960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40189c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b6000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178290688 unmapped: 44900352 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:00.373352+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 438 ms_handle_reset con 0x560f401b6000 session 0x560f420bda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178307072 unmapped: 44883968 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:01.373926+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7088648 data_alloc: 234881024 data_used: 10121216
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 438 heartbeat osd_stat(store_statfs(0x4ce07a000/0x0/0x4ffc00000, data 0x291da914/0x28b42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178315264 unmapped: 44875776 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:02.374243+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 178323456 unmapped: 44867584 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:03.374359+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 439 ms_handle_reset con 0x560f42ce9c00 session 0x560f420bc960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 42532864 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:04.374465+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 439 heartbeat osd_stat(store_statfs(0x4ce078000/0x0/0x4ffc00000, data 0x28fd24e5/0x28b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 42532864 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:05.374587+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426ba800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 439 ms_handle_reset con 0x560f426ba800 session 0x560f3f4ed2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 439 heartbeat osd_stat(store_statfs(0x4cf6df000/0x0/0x4ffc00000, data 0x275e94e5/0x274de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 42532864 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:06.374834+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6963652 data_alloc: 234881024 data_used: 18309120
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:07.375099+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 42532864 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:08.375388+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180658176 unmapped: 42532864 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.756187439s of 11.115140915s, submitted: 122
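
[annotation] "_kv_sync_thread utilization" reports how busy BlueStore's RocksDB commit thread was over the sampling window: idle 10.756 s of an 11.115 s window while flushing 122 transactions is only about 3% busy, consistent with the mostly idle heartbeat traffic in this excerpt (the 4.28 s / 10.02 s sample further down corresponds to a busier interval of roughly 57%). The arithmetic:

    idle, window, submitted = 10.756187439, 11.115140915, 122
    print(f"busy {1 - idle / window:.1%}, ~{submitted / window:.0f} txns/s")  # busy 3.2%, ~11 txns/s
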
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:09.375736+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 197263360 unmapped: 25927680 heap: 223191040 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 440 heartbeat osd_stat(store_statfs(0x4cd6dd000/0x0/0x4ffc00000, data 0x295eb07e/0x294e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:10.375898+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180592640 unmapped: 51003392 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
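
[annotation] "_renew_subs" followed by "_send_mon_message to mon.compute-0" is the OSD refreshing its monitor subscriptions, which is what keeps the stream of new osdmap epochs above flowing. The target v2:192.168.122.100:3300/0 is a messenger-v2 endpoint: protocol tag, IP, the msgr2 port 3300, and a trailing nonce (0 for a monitor). Pulling it apart, as a quick sketch:

    addr = "v2:192.168.122.100:3300/0"
    proto, rest = addr.split(":", 1)
    hostport, nonce = rest.rsplit("/", 1)
    host, port = hostport.rsplit(":", 1)
    print(proto, host, int(port), int(nonce))  # v2 192.168.122.100 3300 0
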
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:11.376067+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 180723712 unmapped: 50872320 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7420824 data_alloc: 234881024 data_used: 18321408
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:12.376209+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 46473216 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:13.376393+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 189628416 unmapped: 41967616 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:14.376596+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 202629120 unmapped: 28966912 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 441 heartbeat osd_stat(store_statfs(0x4c527a000/0x0/0x4ffc00000, data 0x315ecae1/0x314e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [1])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:15.376796+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 206880768 unmapped: 24715264 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:16.376950+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 203096064 unmapped: 28499968 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8595388 data_alloc: 234881024 data_used: 18984960
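
[annotation] This _resize_shards sample is the one place in the excerpt where the tuner actually moved memory: kv_alloc dropped from 1207959552 to 1191182336 bytes, exactly one 16 MiB chunk, with the second High Pri Pool Ratio shifting slightly (0.0555556 to 0.056338), presumably recomputed against the smaller pool; later samples return to the original allocation. The delta:

    old_kv, new_kv = 1207959552, 1191182336
    print((old_kv - new_kv) / 2**20)  # 16.0 -> the kv shard shrank by exactly 16 MiB
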
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:17.377108+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 199516160 unmapped: 32079872 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:18.377253+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 199909376 unmapped: 31686656 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.279709339s of 10.023746490s, submitted: 203
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 441 ms_handle_reset con 0x560f401b8000 session 0x560f41cb8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:19.377450+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 441 ms_handle_reset con 0x560f4140f400 session 0x560f41ecd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194682880 unmapped: 36913152 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:20.377622+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 441 ms_handle_reset con 0x560f401bac00 session 0x560f3febc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 37806080 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 441 heartbeat osd_stat(store_statfs(0x4bb2be000/0x0/0x4ffc00000, data 0x3ba08ae1/0x3b900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 441 ms_handle_reset con 0x560f3ea7d800 session 0x560f3feb14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:21.377834+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 37797888 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7169659 data_alloc: 234881024 data_used: 19066880
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:22.378003+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193806336 unmapped: 37789696 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 442 ms_handle_reset con 0x560f40189c00 session 0x560f3fd4f0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 442 ms_handle_reset con 0x560f4140f000 session 0x560f413803c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 442 ms_handle_reset con 0x560f401b8000 session 0x560f3f4ecd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 442 ms_handle_reset con 0x560f3ea7d800 session 0x560f413801e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:23.378153+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193814528 unmapped: 37781504 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:24.378322+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193814528 unmapped: 37781504 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:25.378502+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193847296 unmapped: 37748736 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 444 heartbeat osd_stat(store_statfs(0x4ceab8000/0x0/0x4ffc00000, data 0x2802e293/0x28105000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 444 ms_handle_reset con 0x560f401bac00 session 0x560f3f2a8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:26.378707+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193871872 unmapped: 37724160 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140f400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6973242 data_alloc: 234881024 data_used: 17797120
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:27.378891+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193871872 unmapped: 37724160 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41407000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 445 ms_handle_reset con 0x560f4140f400 session 0x560f4014ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 445 heartbeat osd_stat(store_statfs(0x4d0084000/0x0/0x4ffc00000, data 0x268d3ac1/0x26b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:28.379074+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193912832 unmapped: 37683200 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea7d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 446 ms_handle_reset con 0x560f3ea7d800 session 0x560f41e1da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.035833359s of 10.004002571s, submitted: 203
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:29.379288+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193929216 unmapped: 37666816 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 447 ms_handle_reset con 0x560f41407000 session 0x560f3f478b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 447 ms_handle_reset con 0x560f401b8000 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:30.379471+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194035712 unmapped: 37560320 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:31.379671+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195084288 unmapped: 36511744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 448 ms_handle_reset con 0x560f401bac00 session 0x560f401265a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bd000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4509310 data_alloc: 234881024 data_used: 17793024
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 449 ms_handle_reset con 0x560f401bd000 session 0x560f41cb81e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:32.379989+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 40566784 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:33.380684+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 40566784 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f2879000/0x0/0x4ffc00000, data 0x40da861/0x4343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:34.380948+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 40566784 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:35.381133+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 40566784 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f2879000/0x0/0x4ffc00000, data 0x40da861/0x4343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:36.381297+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 40566784 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3269518 data_alloc: 234881024 data_used: 17793024
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:37.381499+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191037440 unmapped: 40558592 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:38.382077+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191037440 unmapped: 40558592 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f2879000/0x0/0x4ffc00000, data 0x40da861/0x4343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:39.382738+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191037440 unmapped: 40558592 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:40.382990+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f2879000/0x0/0x4ffc00000, data 0x40da861/0x4343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.795000076s of 11.390185356s, submitted: 201
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191037440 unmapped: 40558592 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f2877000/0x0/0x4ffc00000, data 0x40dc2e4/0x4346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:41.383248+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f2877000/0x0/0x4ffc00000, data 0x40dc2e4/0x4346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191037440 unmapped: 40558592 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f2877000/0x0/0x4ffc00000, data 0x40dc2e4/0x4346000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3271820 data_alloc: 234881024 data_used: 17793024
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:42.383371+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 450 ms_handle_reset con 0x560f3ff2cc00 session 0x560f3ff21a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 40550400 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:43.383536+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 40550400 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41ecb000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41224c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 450 ms_handle_reset con 0x560f41224c00 session 0x560f41e001e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 451 handle_osd_map epochs [451,451], i have 451, src has [1,451]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:44.383793+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 451 ms_handle_reset con 0x560f41ecb000 session 0x560f410e12c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 40525824 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4114dc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:45.384003+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 451 ms_handle_reset con 0x560f3ff2c800 session 0x560f3fd56960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191070208 unmapped: 40525824 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 451 ms_handle_reset con 0x560f401ae400 session 0x560f3f4eda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:46.384152+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191078400 unmapped: 40517632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 451 ms_handle_reset con 0x560f3ff2c800 session 0x560f3f6bbe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3278528 data_alloc: 234881024 data_used: 17776640
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:47.384270+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191078400 unmapped: 40517632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f2874000/0x0/0x4ffc00000, data 0x40dde61/0x4349000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f2870000/0x0/0x4ffc00000, data 0x40df9de/0x434c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:48.384385+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 452 ms_handle_reset con 0x560f40048c00 session 0x560f3fead860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191250432 unmapped: 40345600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:49.384588+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191250432 unmapped: 40345600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41406400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:50.384720+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f41406400 session 0x560f3fd57680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191258624 unmapped: 40337408 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.675137520s of 10.800234795s, submitted: 29
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f45f24400 session 0x560f4040da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41224400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f41224400 session 0x560f3fd56780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:51.384910+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191266816 unmapped: 40329216 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3281756 data_alloc: 234881024 data_used: 17829888
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:52.385067+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191266816 unmapped: 40329216 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bc800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f401bc800 session 0x560f41acf680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41406400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:53.385300+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f41406400 session 0x560f3fd4fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191250432 unmapped: 40345600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f286e000/0x0/0x4ffc00000, data 0x40e155b/0x434f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:54.385475+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f45f24400 session 0x560f3fd4e1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401af800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f401af800 session 0x560f3fd56d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191143936 unmapped: 40452096 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f286e000/0x0/0x4ffc00000, data 0x40e155b/0x434f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:55.385713+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191143936 unmapped: 40452096 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140a800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f4140a800 session 0x560f3f4785a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401af800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401bc800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 ms_handle_reset con 0x560f401bc800 session 0x560f3f5641e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41406400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:56.385920+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 453 handle_osd_map epochs [453,454], i have 453, src has [1,454]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 454 ms_handle_reset con 0x560f41406400 session 0x560f401525a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191209472 unmapped: 40386560 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 454 ms_handle_reset con 0x560f401af800 session 0x560f3fd561e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284458 data_alloc: 234881024 data_used: 17838080
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:57.386103+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f245b000/0x0/0x4ffc00000, data 0x40e312c/0x4352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191250432 unmapped: 40345600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f245b000/0x0/0x4ffc00000, data 0x40e312c/0x4352000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61e000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 454 ms_handle_reset con 0x560f3f61e000 session 0x560f413805a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:58.386277+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 191291392 unmapped: 40304640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:59.386450+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192946176 unmapped: 38649856 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:00.386577+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 456 ms_handle_reset con 0x560f40048400 session 0x560f41cb9c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192978944 unmapped: 38617088 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:01.386791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192995328 unmapped: 38600704 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.259676933s of 10.669497490s, submitted: 87
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 456 ms_handle_reset con 0x560f401b7000 session 0x560f41df05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328480 data_alloc: 234881024 data_used: 18903040
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 456 heartbeat osd_stat(store_statfs(0x4f2054000/0x0/0x4ffc00000, data 0x44e677c/0x4758000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:02.387008+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 38551552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 456 heartbeat osd_stat(store_statfs(0x4f2054000/0x0/0x4ffc00000, data 0x44e677c/0x4758000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:03.387154+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 456 handle_osd_map epochs [457,457], i have 457, src has [1,457]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 457 ms_handle_reset con 0x560f40046800 session 0x560f402ea1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193093632 unmapped: 38502400 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:04.387299+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193093632 unmapped: 38502400 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140b400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:05.387667+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 457 ms_handle_reset con 0x560f4140b400 session 0x560f41cb85a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193175552 unmapped: 38420480 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:06.387806+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 ms_handle_reset con 0x560f411b6800 session 0x560f41e00f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193200128 unmapped: 38395904 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3337779 data_alloc: 234881024 data_used: 18923520
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:07.388072+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193200128 unmapped: 38395904 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f204d000/0x0/0x4ffc00000, data 0x44e9f72/0x4760000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 ms_handle_reset con 0x560f401c5000 session 0x560f41acfe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 ms_handle_reset con 0x560f411b6400 session 0x560f4115e780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:08.388302+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f204d000/0x0/0x4ffc00000, data 0x44e9f72/0x4760000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193282048 unmapped: 38313984 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 ms_handle_reset con 0x560f40046800 session 0x560f410e1860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 ms_handle_reset con 0x560f401b7000 session 0x560f4115fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:09.388491+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193347584 unmapped: 38248448 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:10.388675+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 459 ms_handle_reset con 0x560f411b6800 session 0x560f41e00d20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140b400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 459 ms_handle_reset con 0x560f4140b400 session 0x560f41e1c960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193372160 unmapped: 38223872 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:11.388924+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 459 heartbeat osd_stat(store_statfs(0x4f2049000/0x0/0x4ffc00000, data 0x44eb9e5/0x4764000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 459 ms_handle_reset con 0x560f40046800 session 0x560f3ec74780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193486848 unmapped: 38109184 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 459 ms_handle_reset con 0x560f411b6400 session 0x560f3f47a5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.868754387s of 10.013456345s, submitted: 53
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3342026 data_alloc: 234881024 data_used: 18935808
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:12.389130+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 460 handle_osd_map epochs [460,460], i have 460, src has [1,460]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 460 ms_handle_reset con 0x560f401c5800 session 0x560f40153c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193527808 unmapped: 38068224 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 460 ms_handle_reset con 0x560f401b7000 session 0x560f3ff21c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:13.389325+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41224c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 460 ms_handle_reset con 0x560f41224c00 session 0x560f41e01c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:14.389515+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 ms_handle_reset con 0x560f40046800 session 0x560f41cb9860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 ms_handle_reset con 0x560f401b7000 session 0x560f41df0f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:15.389714+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:16.389915+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 ms_handle_reset con 0x560f401c5800 session 0x560f41e1d860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 ms_handle_reset con 0x560f401ae800 session 0x560f41cb90e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3346045 data_alloc: 234881024 data_used: 18944000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f2045000/0x0/0x4ffc00000, data 0x44ef0c3/0x4768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:17.390069+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f2046000/0x0/0x4ffc00000, data 0x44ef0b3/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 ms_handle_reset con 0x560f43533000 session 0x560f41c1b4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:18.390255+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f2046000/0x0/0x4ffc00000, data 0x44ef0c3/0x4768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:19.390427+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 ms_handle_reset con 0x560f40046800 session 0x560f3f47b2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:20.390587+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 461 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f401b7000 session 0x560f3feb05a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f401ae800 session 0x560f41df0000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:21.390876+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f401b8000 session 0x560f41c1b0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f3ec7b000 session 0x560f3f4783c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3356625 data_alloc: 234881024 data_used: 18956288
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:22.391067+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:23.391275+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f40046800 session 0x560f4115f0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f401ae800 session 0x560f3fd565a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.365769386s of 11.950494766s, submitted: 59
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f3f5ed000 session 0x560f3f479a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 heartbeat osd_stat(store_statfs(0x4f203d000/0x0/0x4ffc00000, data 0x44f26cf/0x476f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f4114dc00 session 0x560f41e00780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:24.391443+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 ms_handle_reset con 0x560f401b7000 session 0x560f3fecc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 464 ms_handle_reset con 0x560f3f5ed000 session 0x560f41ace000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 464 ms_handle_reset con 0x560f40046800 session 0x560f3feb14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 38043648 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:25.391636+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 38043648 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:26.391846+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 38043648 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3358675 data_alloc: 234881024 data_used: 19083264
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:27.392026+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 38043648 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:28.392260+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193560576 unmapped: 38035456 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:29.392446+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 465 ms_handle_reset con 0x560f3f61c400 session 0x560f41e00b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193503232 unmapped: 38092800 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 465 ms_handle_reset con 0x560f3f61d800 session 0x560f3f4ed0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f2039000/0x0/0x4ffc00000, data 0x44f5e29/0x4774000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:30.392574+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193503232 unmapped: 38092800 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:31.392750+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193503232 unmapped: 38092800 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 ms_handle_reset con 0x560f411b7800 session 0x560f420bdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 ms_handle_reset con 0x560f3f5ed000 session 0x560f3feb0b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 ms_handle_reset con 0x560f3f61c400 session 0x560f41ecd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332319 data_alloc: 234881024 data_used: 18710528
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 ms_handle_reset con 0x560f3f61d800 session 0x560f4040de00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:32.392915+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2439000/0x0/0x4ffc00000, data 0x40f781a/0x4375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:33.393158+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:34.393475+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:35.393912+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:36.395213+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332319 data_alloc: 234881024 data_used: 18710528
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:37.395650+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2439000/0x0/0x4ffc00000, data 0x40f781a/0x4375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:38.395875+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:39.396210+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:40.396446+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140e800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.044950485s of 17.386676788s, submitted: 61
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 ms_handle_reset con 0x560f4140e800 session 0x560f41e1c3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:41.397143+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334147 data_alloc: 234881024 data_used: 18710528
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:42.397729+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140bc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:43.397944+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 ms_handle_reset con 0x560f3f5ec800 session 0x560f41cb8b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f2438000/0x0/0x4ffc00000, data 0x40f782a/0x4376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:44.398557+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 467 ms_handle_reset con 0x560f3f5ec800 session 0x560f410e2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193519616 unmapped: 38076416 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:45.398776+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 468 ms_handle_reset con 0x560f3f5ed000 session 0x560f4040cb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 468 ms_handle_reset con 0x560f3f61c400 session 0x560f420bd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 468 ms_handle_reset con 0x560f4140bc00 session 0x560f420bcd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193527808 unmapped: 38068224 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:46.399065+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61d800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193536000 unmapped: 38060032 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3344693 data_alloc: 234881024 data_used: 18722816
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:47.399224+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 468 handle_osd_map epochs [468,469], i have 468, src has [1,469]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f346d000/0x0/0x4ffc00000, data 0x40fcfea/0x4380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 469 ms_handle_reset con 0x560f3f61d800 session 0x560f410e34a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 469 ms_handle_reset con 0x560f3f5ec800 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193593344 unmapped: 38002688 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:48.399368+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193593344 unmapped: 38002688 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:49.399681+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 470 ms_handle_reset con 0x560f3f5ed000 session 0x560f410e12c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 470 ms_handle_reset con 0x560f3f61c400 session 0x560f3f5641e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 37986304 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f346b000/0x0/0x4ffc00000, data 0x40fe6e2/0x4382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:50.400069+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 37986304 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:51.400247+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 37986304 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:52.400386+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3349428 data_alloc: 234881024 data_used: 18726912
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 37986304 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:53.400512+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f346c000/0x0/0x4ffc00000, data 0x40fe6d2/0x4381000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 37986304 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:54.400654+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 37986304 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.820096970s of 14.058879852s, submitted: 69
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:55.400791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193634304 unmapped: 37961728 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 471 ms_handle_reset con 0x560f411b6800 session 0x560f3fd4e1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:56.400909+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193634304 unmapped: 37961728 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:57.401028+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354184 data_alloc: 234881024 data_used: 18726912
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193634304 unmapped: 37961728 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 471 ms_handle_reset con 0x560f40048c00 session 0x560f3f4785a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:58.416466+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ed000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61c400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 472 ms_handle_reset con 0x560f3f61c400 session 0x560f3f4eda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193667072 unmapped: 37928960 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 473 ms_handle_reset con 0x560f3f5ed000 session 0x560f410e3e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:59.416666+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f3463000/0x0/0x4ffc00000, data 0x4101dbe/0x438a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40048c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 473 ms_handle_reset con 0x560f40048c00 session 0x560f3fd57c20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 473 ms_handle_reset con 0x560f3f5ec800 session 0x560f420bd4a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193683456 unmapped: 37912576 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:00.417031+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 37838848 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f3459000/0x0/0x4ffc00000, data 0x4105910/0x4392000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 474 handle_osd_map epochs [475,475], i have 475, src has [1,475]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:01.417310+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 475 ms_handle_reset con 0x560f411b6800 session 0x560f40153860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 475 ms_handle_reset con 0x560f3f5ec800 session 0x560f41463680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 38240256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:02.417439+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 476 ms_handle_reset con 0x560f41607400 session 0x560f41371680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373893 data_alloc: 234881024 data_used: 18739200
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b7400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 476 ms_handle_reset con 0x560f411b7400 session 0x560f410e3860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:03.417616+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:04.417832+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:05.418050+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:06.418196+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f3459000/0x0/0x4ffc00000, data 0x4108aa4/0x4393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:07.418325+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373021 data_alloc: 234881024 data_used: 18735104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:08.418478+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f3459000/0x0/0x4ffc00000, data 0x4108aa4/0x4393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:09.418618+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 38207488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:10.418769+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f3459000/0x0/0x4ffc00000, data 0x4108aa4/0x4393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.357872963s of 15.735631943s, submitted: 117
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 38199296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:11.419030+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 38199296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:12.419174+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375323 data_alloc: 234881024 data_used: 18735104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 38199296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:13.419349+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 38199296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:14.419510+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 38199296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:15.419698+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193404928 unmapped: 38191104 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f3453000/0x0/0x4ffc00000, data 0x410c0bc/0x4399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ecc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:16.419799+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 478 ms_handle_reset con 0x560f3f5ecc00 session 0x560f3fd563c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f3453000/0x0/0x4ffc00000, data 0x410c0bc/0x4399000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 478 handle_osd_map epochs [479,479], i have 479, src has [1,479]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193413120 unmapped: 38182912 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:17.419930+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381271 data_alloc: 234881024 data_used: 18735104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:18.420108+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 479 ms_handle_reset con 0x560f3f5ec400 session 0x560f41371a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:19.420248+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f3451000/0x0/0x4ffc00000, data 0x410dc8d/0x439c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:20.420437+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:21.420613+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.458189011s of 10.517991066s, submitted: 43
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 480 ms_handle_reset con 0x560f3f5ec400 session 0x560f3f478f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:22.420778+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384917 data_alloc: 234881024 data_used: 18735104
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 481 ms_handle_reset con 0x560f41640c00 session 0x560f3fd4fe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:23.420932+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f344b000/0x0/0x4ffc00000, data 0x4111407/0x43a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193429504 unmapped: 38166528 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:24.421071+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a5000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 481 ms_handle_reset con 0x560f426a5000 session 0x560f413801e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193454080 unmapped: 38141952 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 481 handle_osd_map epochs [481,482], i have 481, src has [1,482]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:25.421219+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193462272 unmapped: 38133760 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 483 ms_handle_reset con 0x560f40046800 session 0x560f41c1a5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:26.421392+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193495040 unmapped: 38100992 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4181b800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 483 ms_handle_reset con 0x560f4181b800 session 0x560f410e0b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 484 ms_handle_reset con 0x560f3f5ec400 session 0x560f4115ed20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:27.421568+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402008 data_alloc: 234881024 data_used: 18743296
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 38043648 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:28.421719+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 484 ms_handle_reset con 0x560f40046800 session 0x560f3f478f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f343e000/0x0/0x4ffc00000, data 0x4116610/0x43ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 484 ms_handle_reset con 0x560f41640c00 session 0x560f410e3860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 39354368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:29.421887+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f3442000/0x0/0x4ffc00000, data 0x4116610/0x43ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a5000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 484 ms_handle_reset con 0x560f426a5000 session 0x560f3f5641e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192241664 unmapped: 39354368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 484 handle_osd_map epochs [484,485], i have 484, src has [1,485]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:30.422079+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 39329792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:31.422441+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 39329792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f411b6c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.578519821s of 10.852032661s, submitted: 80
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:32.422621+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 486 ms_handle_reset con 0x560f411b6c00 session 0x560f4014a3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406382 data_alloc: 234881024 data_used: 18751488
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 39329792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:33.422794+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 486 ms_handle_reset con 0x560f3f5ec400 session 0x560f410e34a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f343b000/0x0/0x4ffc00000, data 0x4119c7a/0x43b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 39329792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:34.422928+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 487 ms_handle_reset con 0x560f40046800 session 0x560f420bd860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 487 ms_handle_reset con 0x560f41640c00 session 0x560f4040cb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 39329792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:35.423082+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 39329792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a5000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 488 ms_handle_reset con 0x560f426a5000 session 0x560f410e2000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:36.423256+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f3434000/0x0/0x4ffc00000, data 0x411d288/0x43b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 39321600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42db8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:37.423388+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 488 ms_handle_reset con 0x560f42db8000 session 0x560f41e1c3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413174 data_alloc: 234881024 data_used: 18755584
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f3434000/0x0/0x4ffc00000, data 0x411d288/0x43b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 488 handle_osd_map epochs [489,489], i have 489, src has [1,489]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 39321600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f3434000/0x0/0x4ffc00000, data 0x411d288/0x43b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:38.423567+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f5ec400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 489 ms_handle_reset con 0x560f3f5ec400 session 0x560f4040de00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192274432 unmapped: 39321600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:39.423875+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 489 ms_handle_reset con 0x560f40046800 session 0x560f3feb0b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f43533800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 490 ms_handle_reset con 0x560f43533800 session 0x560f420bdc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192290816 unmapped: 39305216 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 490 ms_handle_reset con 0x560f41640c00 session 0x560f41e00b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:40.424154+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f3430000/0x0/0x4ffc00000, data 0x411ee21/0x43bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 490 handle_osd_map epochs [491,491], i have 491, src has [1,491]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192315392 unmapped: 39280640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f426a5000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 491 ms_handle_reset con 0x560f426a5000 session 0x560f3feb14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:41.424319+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192315392 unmapped: 39280640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:42.424568+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421168 data_alloc: 234881024 data_used: 18771968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 192315392 unmapped: 39280640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:43.424740+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.998947144s of 11.126818657s, submitted: 50
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 492 ms_handle_reset con 0x560f40046800 session 0x560f41ace000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 38199296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:44.425095+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3426000/0x0/0x4ffc00000, data 0x4125beb/0x43c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:45.425328+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:46.425567+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.5 total, 600.0 interval
                                           Cumulative writes: 23K writes, 95K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 23K writes, 8328 syncs, 2.86 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7321 writes, 32K keys, 7321 commit groups, 1.0 writes per commit group, ingest: 25.86 MB, 0.04 MB/s
                                           Interval WAL: 7321 writes, 3004 syncs, 2.44 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 493 ms_handle_reset con 0x560f41640c00 session 0x560f3f47a960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3426000/0x0/0x4ffc00000, data 0x4125beb/0x43c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:47.425785+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427788 data_alloc: 234881024 data_used: 18771968
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193421312 unmapped: 38174720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:48.426060+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 494 ms_handle_reset con 0x560f41640000 session 0x560f4040da40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193437696 unmapped: 38158336 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:49.426282+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 494 handle_osd_map epochs [494,495], i have 494, src has [1,495]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193486848 unmapped: 38109184 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f3421000/0x0/0x4ffc00000, data 0x4129385/0x43cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:50.426501+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 495 handle_osd_map epochs [496,496], i have 496, src has [1,496]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193495040 unmapped: 38100992 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:51.426763+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 496 ms_handle_reset con 0x560f3f61b000 session 0x560f3fd4f680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 38084608 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:52.426915+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441954 data_alloc: 234881024 data_used: 18776064
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f461fac00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193527808 unmapped: 38068224 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:53.427120+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.988153458s of 10.108022690s, submitted: 49
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 497 ms_handle_reset con 0x560f461fac00 session 0x560f3fd4ed20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 38043648 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 497 ms_handle_reset con 0x560f3f61b000 session 0x560f401532c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:54.427290+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 497 handle_osd_map epochs [497,498], i have 497, src has [1,498]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 ms_handle_reset con 0x560f40046800 session 0x560f3feb1e00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 heartbeat osd_stat(store_statfs(0x4f3419000/0x0/0x4ffc00000, data 0x412ca2f/0x43d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 37986304 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:55.427672+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 ms_handle_reset con 0x560f41640000 session 0x560f420bda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 ms_handle_reset con 0x560f41640c00 session 0x560f3f565860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194756608 unmapped: 36839424 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:56.427881+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40188400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 ms_handle_reset con 0x560f40188400 session 0x560f41371a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194756608 unmapped: 36839424 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:57.428130+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3446320 data_alloc: 234881024 data_used: 18788352
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 heartbeat osd_stat(store_statfs(0x4f3418000/0x0/0x4ffc00000, data 0x412e5ae/0x43d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194756608 unmapped: 36839424 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:58.428314+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 heartbeat osd_stat(store_statfs(0x4f3418000/0x0/0x4ffc00000, data 0x412e5ae/0x43d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 498 handle_osd_map epochs [498,499], i have 498, src has [1,499]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 499 ms_handle_reset con 0x560f3f61b000 session 0x560f3feb0f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 36823040 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:59.429403+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40046800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 499 ms_handle_reset con 0x560f40046800 session 0x560f3fd4ef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 499 ms_handle_reset con 0x560f41640000 session 0x560f41eccb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194772992 unmapped: 36823040 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:00.429569+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41640c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 499 handle_osd_map epochs [501,501], i have 499, src has [1,501]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 499 handle_osd_map epochs [500,501], i have 499, src has [1,501]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 501 ms_handle_reset con 0x560f41640c00 session 0x560f41e1c780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 501 ms_handle_reset con 0x560f401b8400 session 0x560f41380f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194822144 unmapped: 36773888 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:01.429752+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 36757504 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:02.429937+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f340d000/0x0/0x4ffc00000, data 0x41337d7/0x43de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458344 data_alloc: 234881024 data_used: 18796544
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 36757504 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:03.430120+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 36757504 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:04.430270+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194838528 unmapped: 36757504 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:05.430434+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194846720 unmapped: 36749312 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.504506111s of 12.747334480s, submitted: 93
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 501 ms_handle_reset con 0x560f3ea0d400 session 0x560f4135fc20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:06.430566+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f340d000/0x0/0x4ffc00000, data 0x41337d7/0x43de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f340d000/0x0/0x4ffc00000, data 0x41337d7/0x43de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194854912 unmapped: 36741120 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:07.430728+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456930 data_alloc: 234881024 data_used: 18796544
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 502 ms_handle_reset con 0x560f3ec7a000 session 0x560f41e00960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 36675584 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:08.430899+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41406c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 502 ms_handle_reset con 0x560f41406c00 session 0x560f4040e3c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b8000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 503 ms_handle_reset con 0x560f401b8000 session 0x560f41ecc1e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195067904 unmapped: 36528128 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:09.431055+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f340b000/0x0/0x4ffc00000, data 0x41353d2/0x43e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41ecb400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 503 ms_handle_reset con 0x560f41ecb400 session 0x560f4040e960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195076096 unmapped: 36519936 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 503 handle_osd_map epochs [503,504], i have 503, src has [1,504]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 504 ms_handle_reset con 0x560f3ea0d400 session 0x560f3fecc780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:10.431244+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195158016 unmapped: 36438016 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:11.431490+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 504 ms_handle_reset con 0x560f3ec7a000 session 0x560f410e2f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195190784 unmapped: 36405248 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:12.431645+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478015 data_alloc: 234881024 data_used: 18812928
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 504 heartbeat osd_stat(store_statfs(0x4f3406000/0x0/0x4ffc00000, data 0x41389d0/0x43e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 504 handle_osd_map epochs [505,505], i have 504, src has [1,505]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195223552 unmapped: 36372480 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 505 ms_handle_reset con 0x560f41405400 session 0x560f3f58d0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:13.431812+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41641800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 505 ms_handle_reset con 0x560f41641800 session 0x560f41380b40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195362816 unmapped: 36233216 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 506 ms_handle_reset con 0x560f3ff2cc00 session 0x560f41c1ba40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:14.432007+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195379200 unmapped: 36216832 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 506 ms_handle_reset con 0x560f3ea0d400 session 0x560f41381860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:15.432133+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 506 ms_handle_reset con 0x560f3ec7a000 session 0x560f4115fe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 506 heartbeat osd_stat(store_statfs(0x4f3400000/0x0/0x4ffc00000, data 0x413c14a/0x43ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195362816 unmapped: 36233216 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:16.432329+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f4140a400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.081028938s of 10.472676277s, submitted: 107
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 506 ms_handle_reset con 0x560f4140a400 session 0x560f3f47b860
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195395584 unmapped: 36200448 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:17.432510+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3486604 data_alloc: 234881024 data_used: 18825216
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 506 heartbeat osd_stat(store_statfs(0x4f33ff000/0x0/0x4ffc00000, data 0x413c1bc/0x43ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195403776 unmapped: 36192256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 506 handle_osd_map epochs [506,507], i have 506, src has [1,507]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:18.432631+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 507 ms_handle_reset con 0x560f401c4000 session 0x560f3feccd20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41405800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 507 ms_handle_reset con 0x560f41405800 session 0x560f3ff21a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195420160 unmapped: 36175872 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 507 handle_osd_map epochs [507,508], i have 507, src has [1,508]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:19.432776+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 508 ms_handle_reset con 0x560f3ea0d400 session 0x560f41e1de00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195444736 unmapped: 36151296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 508 handle_osd_map epochs [508,509], i have 508, src has [1,509]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 509 ms_handle_reset con 0x560f3ec7a000 session 0x560f41ecc960
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c4000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:20.432945+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 509 ms_handle_reset con 0x560f401c4000 session 0x560f3fead0e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 509 heartbeat osd_stat(store_statfs(0x4f33f6000/0x0/0x4ffc00000, data 0x4141383/0x43f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195469312 unmapped: 36126720 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:21.433207+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61b000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 509 ms_handle_reset con 0x560f3f61b000 session 0x560f41ecd680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195493888 unmapped: 36102144 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:22.433350+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495789 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195493888 unmapped: 36102144 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:23.433586+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401ae800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 509 handle_osd_map epochs [510,510], i have 509, src has [1,510]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 510 ms_handle_reset con 0x560f401ae800 session 0x560f41acef00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 36077568 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:24.433742+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 510 ms_handle_reset con 0x560f3ea0d400 session 0x560f41eccb40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ec7a000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 510 heartbeat osd_stat(store_statfs(0x4f33f3000/0x0/0x4ffc00000, data 0x4142f1c/0x43f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 36077568 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:25.433908+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 510 ms_handle_reset con 0x560f3ec7a000 session 0x560f3feb0f00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 195518464 unmapped: 36077568 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 510 heartbeat osd_stat(store_statfs(0x4f33f3000/0x0/0x4ffc00000, data 0x4142f1c/0x43f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 510 handle_osd_map epochs [511,511], i have 511, src has [1,511]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:26.434050+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194019328 unmapped: 37576704 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:27.434221+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41606800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.634702682s of 10.821389198s, submitted: 63
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 511 ms_handle_reset con 0x560f41606800 session 0x560f41371a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504675 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194027520 unmapped: 37568512 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:28.434330+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61e000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194035712 unmapped: 37560320 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:29.434457+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 512 ms_handle_reset con 0x560f3f61e000 session 0x560f420bda40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401b7400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 512 ms_handle_reset con 0x560f401b7400 session 0x560f401532c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194035712 unmapped: 37560320 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 nova_compute[256729]: 2025-11-29 08:16:44.815 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:30.434598+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 37552128 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 513 handle_osd_map epochs [513,514], i have 513, src has [1,514]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 ms_handle_reset con 0x560f3ea0d400 session 0x560f3fd4f680
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:31.434788+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x4149d6c/0x4407000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:32.435014+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 ms_handle_reset con 0x560f41607c00 session 0x560f41ace000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511828 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 ms_handle_reset con 0x560f3f61cc00 session 0x560f3feb14a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:33.435186+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:34.435342+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 heartbeat osd_stat(store_statfs(0x4f33e8000/0x0/0x4ffc00000, data 0x4149cfa/0x4405000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:35.435512+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:36.435701+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:37.435883+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511828 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:38.436109+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:39.436277+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 heartbeat osd_stat(store_statfs(0x4f33e8000/0x0/0x4ffc00000, data 0x4149cfa/0x4405000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 heartbeat osd_stat(store_statfs(0x4f33e8000/0x0/0x4ffc00000, data 0x4149cfa/0x4405000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:40.436441+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:41.436661+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194060288 unmapped: 37535744 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:42.436796+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 heartbeat osd_stat(store_statfs(0x4f33e8000/0x0/0x4ffc00000, data 0x4149cfa/0x4405000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 handle_osd_map epochs [515,515], i have 514, src has [1,515]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 514 handle_osd_map epochs [515,515], i have 515, src has [1,515]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.834046364s of 14.968808174s, submitted: 40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514802 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:43.437019+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:44.437195+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:45.437347+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e5000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:46.437601+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:47.437782+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514802 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:48.438062+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:49.438318+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e5000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:50.438419+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:51.438653+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e5000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:52.438858+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514802 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:53.439094+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:54.439240+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194068480 unmapped: 37527552 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:55.439424+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e5000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194076672 unmapped: 37519360 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:56.440023+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194076672 unmapped: 37519360 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:57.440166+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514802 data_alloc: 234881024 data_used: 18837504
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194076672 unmapped: 37519360 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:58.440315+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.364093781s of 16.515644073s, submitted: 10
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194084864 unmapped: 37511168 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:59.440456+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e5000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194084864 unmapped: 37511168 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:00.440613+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:01.440829+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194084864 unmapped: 37511168 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:02.441618+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194142208 unmapped: 37453824 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 ms_handle_reset con 0x560f3ea81000 session 0x560f3fe1c5a0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f401c5000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515362 data_alloc: 234881024 data_used: 18874368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:03.441805+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:04.441940+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:05.442106+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:06.442262+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:07.442444+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515362 data_alloc: 234881024 data_used: 18874368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:08.442618+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:09.442770+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:10.442902+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:11.443108+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:12.443232+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515362 data_alloc: 234881024 data_used: 18874368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:13.443375+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:14.443542+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:15.443665+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:16.443786+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:17.443914+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515362 data_alloc: 234881024 data_used: 18874368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:18.444020+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:19.444147+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:20.444330+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:21.444544+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:22.444692+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515362 data_alloc: 234881024 data_used: 18874368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:23.444933+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:24.445115+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f33e6000/0x0/0x4ffc00000, data 0x414b75d/0x4408000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:25.445300+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194150400 unmapped: 37445632 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:26.445480+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194158592 unmapped: 37437440 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f42ce9c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 27.274656296s of 27.623311996s, submitted: 108
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 ms_handle_reset con 0x560f42ce9c00 session 0x560f410e1a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:27.445677+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 37429248 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3520446 data_alloc: 234881024 data_used: 18874368
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41224000
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 515 handle_osd_map epochs [516,516], i have 515, src has [1,516]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 ms_handle_reset con 0x560f41224000 session 0x560f3fe1d2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:28.445873+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 37429248 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 heartbeat osd_stat(store_statfs(0x4f33e0000/0x0/0x4ffc00000, data 0x414d34c/0x440d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:29.446109+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194166784 unmapped: 37429248 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ea0d400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 ms_handle_reset con 0x560f3ea0d400 session 0x560f414630e0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:30.446269+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194191360 unmapped: 37404672 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 heartbeat osd_stat(store_statfs(0x4f33e0000/0x0/0x4ffc00000, data 0x414d34c/0x440d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3f61cc00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 ms_handle_reset con 0x560f3f61cc00 session 0x560f41acf2c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f41607c00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:31.446440+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 ms_handle_reset con 0x560f41607c00 session 0x560f402ebe00
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194215936 unmapped: 37380096 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f40189400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 ms_handle_reset con 0x560f40189400 session 0x560f41cb8780
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f3ff2c800
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:32.446749+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194215936 unmapped: 37380096 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3523746 data_alloc: 234881024 data_used: 18890752
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 ms_handle_reset con 0x560f45f24400 session 0x560f4014ad20
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: handle_auth_request added challenge on 0x560f45f24400
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:33.446938+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194215936 unmapped: 37380096 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 handle_osd_map epochs [517,517], i have 516, src has [1,517]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 516 handle_osd_map epochs [517,517], i have 517, src has [1,517]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 517 heartbeat osd_stat(store_statfs(0x4f33e1000/0x0/0x4ffc00000, data 0x414d34c/0x440d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 517 ms_handle_reset con 0x560f45f24400 session 0x560f41cb92c0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:34.447186+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194240512 unmapped: 37355520 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 517 ms_handle_reset con 0x560f3ff2c800 session 0x560f3f479a40
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:35.447392+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194248704 unmapped: 37347328 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:36.447591+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194248704 unmapped: 37347328 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:37.447766+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194248704 unmapped: 37347328 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3524696 data_alloc: 234881024 data_used: 18890752
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:38.447920+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 517 heartbeat osd_stat(store_statfs(0x4f33e0000/0x0/0x4ffc00000, data 0x414eeab/0x440e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194248704 unmapped: 37347328 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:39.448131+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194248704 unmapped: 37347328 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:40.448295+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194248704 unmapped: 37347328 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 517 handle_osd_map epochs [518,518], i have 517, src has [1,518]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.557234764s of 14.399567604s, submitted: 86
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _renew_subs
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 517 handle_osd_map epochs [518,518], i have 518, src has [1,518]
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:41.448465+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194256896 unmapped: 37339136 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:42.448648+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:43.448776+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:44.448920+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:45.449024+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:46.449154+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:47.449346+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:48.449554+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:49.449747+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194265088 unmapped: 37330944 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:50.449905+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:51.450087+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:52.450259+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:53.450438+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:54.450599+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:55.450775+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:56.450939+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:57.451148+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194273280 unmapped: 37322752 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:58.451327+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:59.451544+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:00.451759+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:01.452035+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:02.452266+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:03.452444+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:04.452618+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:05.452809+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194289664 unmapped: 37306368 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:06.453046+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 37298176 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:07.453240+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 37298176 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:08.453441+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 37298176 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:09.453589+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 37298176 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:10.453845+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 37298176 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:11.454055+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 37298176 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:12.454264+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194297856 unmapped: 37298176 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:13.454446+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194306048 unmapped: 37289984 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:14.454619+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 37281792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:15.454811+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 37281792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:16.454998+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 37281792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:17.455199+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 37281792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:18.455380+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 37281792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:19.455595+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 37281792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:20.455790+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194314240 unmapped: 37281792 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:21.456457+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194322432 unmapped: 37273600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:22.456589+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194322432 unmapped: 37273600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:23.457197+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194322432 unmapped: 37273600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:24.457330+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194322432 unmapped: 37273600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:25.457465+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194322432 unmapped: 37273600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:26.457614+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194322432 unmapped: 37273600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:27.457949+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194322432 unmapped: 37273600 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:28.458381+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 37265408 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:29.458628+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 37265408 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:30.458837+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:31.459115+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:32.459288+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:33.459484+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:34.459646+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:35.459858+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:36.460112+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:37.460338+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:38.460594+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:39.460866+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:40.461118+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:41.461384+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:42.461618+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:43.461778+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:44.462022+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194363392 unmapped: 37232640 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:45.462222+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:46.462421+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:47.462604+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:48.462758+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:49.463025+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:50.463213+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:51.463437+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:52.463670+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 37216256 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:53.463934+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:54.464177+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:55.464369+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:56.465345+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:57.466035+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:58.466537+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:59.466708+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:00.467044+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 37208064 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:01.467240+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194404352 unmapped: 37191680 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:02.467425+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 37183488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:03.467898+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 37183488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:04.468337+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 37183488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:05.468477+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 37183488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:06.468656+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 37183488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:07.468808+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 37183488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:08.469009+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 37183488 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:09.469170+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 37175296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f33dc000/0x0/0x4ffc00000, data 0x415090e/0x4411000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:10.469277+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 37175296 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:11.469411+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194486272 unmapped: 37109760 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'config diff' '{prefix=config diff}'
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'config show' '{prefix=config show}'
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:12.469546+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 37249024 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:44 compute-0 ceph-osd[91083]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:44 compute-0 ceph-osd[91083]: bluestore.MempoolThread(0x560f3dd27b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3528870 data_alloc: 234881024 data_used: 18898944
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:13.469665+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 37797888 heap: 231596032 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: tick
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_tickets
Nov 29 08:16:44 compute-0 ceph-osd[91083]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:14.469791+0000)
Nov 29 08:16:44 compute-0 ceph-osd[91083]: do_command 'log dump' '{prefix=log dump}'
Nov 29 08:16:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 08:16:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226981403' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: from='client.19293 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: pgmap v2447: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:45 compute-0 ceph-mon[75050]: from='client.19297 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: from='client.19299 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1117533323' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1016917989' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3226981403' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19311 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 08:16:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928370823' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:45 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19315 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:45 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 08:16:45 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456392644' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19319 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75050]: from='client.19303 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75050]: from='client.19307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75050]: from='client.19311 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1928370823' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3456392644' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 08:16:46 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771980454' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 08:16:46 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19323 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mon[75050]: pgmap v2448: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:47 compute-0 ceph-mon[75050]: from='client.19315 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mon[75050]: from='client.19319 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2771980454' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19330 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mgr[75345]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:16:47 compute-0 ceph-14ff1f30-5059-58f1-9a23-69871bb275a1-mgr-compute-0-kzdpag[75341]: 2025-11-29T08:16:47.217+0000 7f4b59335640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 08:16:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 08:16:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2964355547' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 08:16:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192733016' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 08:16:47 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 08:16:47 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210967822' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 08:16:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455345251' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: from='client.19323 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2964355547' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1192733016' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1210967822' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1455345251' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 08:16:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063297335' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 08:16:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841514165' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 08:16:48 compute-0 crontab[313546]: (root) LIST (root)
Nov 29 08:16:48 compute-0 nova_compute[256729]: 2025-11-29 08:16:48.538 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 08:16:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319729763' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 08:16:48 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 08:16:48 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3373669030' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 08:16:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115395291' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: from='client.19330 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: pgmap v2449: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4063297335' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3841514165' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/319729763' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3373669030' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4115395291' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 08:16:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694480650' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:39.240191+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 28901376 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:40.240313+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3157 syncs, 3.66 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4854 writes, 17K keys, 4854 commit groups, 1.0 writes per commit group, ingest: 10.20 MB, 0.02 MB/s
                                           Interval WAL: 4854 writes, 1950 syncs, 2.49 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1650030 data_alloc: 234881024 data_used: 17321984
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 28884992 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f8d3b000/0x0/0x4ffc00000, data 0x23f5518/0x2522000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:41.240514+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 28884992 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:42.240661+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 28884992 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:43.240807+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 28884992 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f8d3b000/0x0/0x4ffc00000, data 0x23f5518/0x2522000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:44.240978+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 28884992 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:45.241230+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1650030 data_alloc: 234881024 data_used: 17321984
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 28884992 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dc7f400 session 0x55f94ca3f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:46.241357+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dcda000 session 0x55f94ca3e780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 28868608 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:47.241479+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f8d3b000/0x0/0x4ffc00000, data 0x23f5518/0x2522000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 28868608 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.812785149s of 13.835424423s, submitted: 11
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dc7e400 session 0x55f94ca3eb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:48.241599+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 28860416 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:49.241715+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f8d3b000/0x0/0x4ffc00000, data 0x23f557a/0x2523000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 26345472 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:50.241821+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dc83c00 session 0x55f94ca3fc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d7ca000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1757441 data_alloc: 234881024 data_used: 19226624
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 126312448 unmapped: 19496960 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94d7ca000 session 0x55f94bd2f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:51.241992+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dc7e400 session 0x55f94cf4d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dc7f400 session 0x55f94caf23c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 128294912 unmapped: 17514496 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:52.242174+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 128294912 unmapped: 17514496 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:53.242352+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dc83c00 session 0x55f94d579c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 128294912 unmapped: 17514496 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:54.242511+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 128294912 unmapped: 17514496 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:55.242753+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f7c77000/0x0/0x4ffc00000, data 0x34b1518/0x35de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1815403 data_alloc: 234881024 data_used: 19451904
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 128311296 unmapped: 17498112 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:56.242909+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 20127744 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dcda000 session 0x55f94d726000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82bc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:57.248394+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f7c7e000/0x0/0x4ffc00000, data 0x34b3518/0x35e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 125681664 unmapped: 20127744 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:58.248556+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.493490219s of 10.365571976s, submitted: 135
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94d82bc00 session 0x55f94d727c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:59.248728+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:00.248876+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1764819 data_alloc: 234881024 data_used: 19451904
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 ms_handle_reset con 0x55f94dc7e400 session 0x55f94cac4000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:01.249137+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f8172000/0x0/0x4ffc00000, data 0x2fbe57a/0x30ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:02.249297+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f8171000/0x0/0x4ffc00000, data 0x2fbf57a/0x30ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:03.249470+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 heartbeat osd_stat(store_statfs(0x4f8171000/0x0/0x4ffc00000, data 0x2fbf57a/0x30ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:04.249655+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 207 ms_handle_reset con 0x55f94dc7f400 session 0x55f94d727860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:05.249825+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f816d000/0x0/0x4ffc00000, data 0x2fc10f7/0x30f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1770442 data_alloc: 234881024 data_used: 19464192
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 21585920 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 207 ms_handle_reset con 0x55f94dc83c00 session 0x55f94be3b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:06.250028+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 21577728 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f816e000/0x0/0x4ffc00000, data 0x2fc1137/0x30f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:07.250204+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 207 ms_handle_reset con 0x55f94dcda000 session 0x55f94be3b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf1b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 ms_handle_reset con 0x55f94cf1b000 session 0x55f94be3b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 21577728 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:08.250373+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f816a000/0x0/0x4ffc00000, data 0x2fc2d08/0x30f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 ms_handle_reset con 0x55f94dc7e400 session 0x55f94cd1b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.209820747s of 10.456501007s, submitted: 33
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 21577728 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 ms_handle_reset con 0x55f94dc7f400 session 0x55f94cd1b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:09.250561+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:10.250716+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1771808 data_alloc: 234881024 data_used: 19468288
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:11.250888+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 ms_handle_reset con 0x55f94dc83c00 session 0x55f94ca3cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:12.251091+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:13.251315+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f816a000/0x0/0x4ffc00000, data 0x2fc2cd8/0x30f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:14.251479+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f816a000/0x0/0x4ffc00000, data 0x2fc2cd8/0x30f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124248064 unmapped: 21561344 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:15.251877+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1779560 data_alloc: 234881024 data_used: 19484672
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124248064 unmapped: 21561344 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:16.252074+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 heartbeat osd_stat(store_statfs(0x4f8166000/0x0/0x4ffc00000, data 0x2fc473b/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:17.252226+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:18.252380+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 heartbeat osd_stat(store_statfs(0x4f8167000/0x0/0x4ffc00000, data 0x2fc473b/0x30f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 21569536 heap: 145809408 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:19.252574+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94d397c00 session 0x55f94d710000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94d396400 session 0x55f94b8632c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94d397c00 session 0x55f94be3ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94dc7e400 session 0x55f94ca35680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.771372795s of 10.845426559s, submitted: 25
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138797056 unmapped: 11214848 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:20.252721+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94dc7f400 session 0x55f94d595e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94bd7d800 session 0x55f94d5f63c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94ca2f000 session 0x55f94b248d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94bd7d800 session 0x55f94d6732c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94ca2f000 session 0x55f94d7261e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 ms_handle_reset con 0x55f94d397c00 session 0x55f94be3da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1884496 data_alloc: 234881024 data_used: 19488768
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 25632768 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:21.252883+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 209 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 210 ms_handle_reset con 0x55f94dc7e400 session 0x55f94b2483c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 25624576 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:22.253031+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 210 ms_handle_reset con 0x55f94dc7f400 session 0x55f94ca35a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 ms_handle_reset con 0x55f94bd7d800 session 0x55f94d579e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 25624576 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 ms_handle_reset con 0x55f94dc83c00 session 0x55f94cac5c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f7318000/0x0/0x4ffc00000, data 0x3e1031a/0x3f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:23.253155+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 ms_handle_reset con 0x55f94ca2f000 session 0x55f94dc5b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 25600000 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:24.253289+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 ms_handle_reset con 0x55f94d397c00 session 0x55f94b88b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 ms_handle_reset con 0x55f94dc7e400 session 0x55f94cf4dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 25600000 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 ms_handle_reset con 0x55f94bd7d800 session 0x55f94d7265a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:25.253484+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 ms_handle_reset con 0x55f94ca2f000 session 0x55f94b249860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1895615 data_alloc: 234881024 data_used: 19496960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 25600000 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 heartbeat osd_stat(store_statfs(0x4f7314000/0x0/0x4ffc00000, data 0x3e11f5b/0x3f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:26.253693+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 212 ms_handle_reset con 0x55f94b118800 session 0x55f94b942000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 25600000 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:27.253855+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 213 ms_handle_reset con 0x55f94b8d2400 session 0x55f94caf3a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 213 ms_handle_reset con 0x55f94d397c00 session 0x55f94adc6f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138436608 unmapped: 11575296 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:28.254047+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 214 ms_handle_reset con 0x55f94b118800 session 0x55f94d594d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138469376 unmapped: 11542528 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:29.254273+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 214 ms_handle_reset con 0x55f94b8d2400 session 0x55f94d710780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.062978745s of 10.126620293s, submitted: 75
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138526720 unmapped: 11485184 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 214 ms_handle_reset con 0x55f94bd7d800 session 0x55f94bd332c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:30.254503+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2010679 data_alloc: 251658240 data_used: 34492416
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 11444224 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:31.254666+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f730c000/0x0/0x4ffc00000, data 0x3e171c4/0x3f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d833c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 215 ms_handle_reset con 0x55f94d833c00 session 0x55f94caf2780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 215 ms_handle_reset con 0x55f94dccb000 session 0x55f94bd7ed20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138592256 unmapped: 11419648 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 215 ms_handle_reset con 0x55f94dccac00 session 0x55f94caf2960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:32.254936+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 216 ms_handle_reset con 0x55f94ca2f000 session 0x55f94bd7ef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138592256 unmapped: 11419648 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:33.255137+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 216 ms_handle_reset con 0x55f94b118800 session 0x55f94cf4d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 217 ms_handle_reset con 0x55f94b8d2000 session 0x55f94b862d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138592256 unmapped: 11419648 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:34.255267+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138592256 unmapped: 11419648 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:35.255420+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2022766 data_alloc: 251658240 data_used: 34500608
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 11411456 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 218 ms_handle_reset con 0x55f94b8d2400 session 0x55f94be7b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:36.255634+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 11395072 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:37.255788+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 218 heartbeat osd_stat(store_statfs(0x4f7300000/0x0/0x4ffc00000, data 0x3e1e056/0x3f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 11395072 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:38.255923+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 218 ms_handle_reset con 0x55f94dcda000 session 0x55f94cf4d2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 11395072 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:39.256032+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 218 ms_handle_reset con 0x55f94b118000 session 0x55f94be3d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 218 ms_handle_reset con 0x55f94ca2d400 session 0x55f94b23c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.758443832s of 10.032121658s, submitted: 48
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 14606336 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:40.256172+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 219 ms_handle_reset con 0x55f94dccac00 session 0x55f94b124960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1843650 data_alloc: 251658240 data_used: 27762688
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136871936 unmapped: 13139968 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:41.256303+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138248192 unmapped: 11763712 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:42.256405+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f7ba2000/0x0/0x4ffc00000, data 0x357bad9/0x36bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137084928 unmapped: 12926976 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:43.256544+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f7ba2000/0x0/0x4ffc00000, data 0x357bad9/0x36bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137084928 unmapped: 12926976 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:44.256735+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 219 heartbeat osd_stat(store_statfs(0x4f7ba2000/0x0/0x4ffc00000, data 0x357bad9/0x36bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137084928 unmapped: 12926976 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:45.256941+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1919540 data_alloc: 251658240 data_used: 28057600
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 12992512 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:46.257116+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 220 ms_handle_reset con 0x55f94dc9f800 session 0x55f94be3ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 13713408 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:47.257287+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 13697024 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 221 ms_handle_reset con 0x55f94b118000 session 0x55f94be3b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:48.258049+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 221 heartbeat osd_stat(store_statfs(0x4f7b98000/0x0/0x4ffc00000, data 0x3582229/0x36c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136970240 unmapped: 13041664 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:49.258201+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 222 ms_handle_reset con 0x55f94dc9e800 session 0x55f94d578960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 222 ms_handle_reset con 0x55f94dccac00 session 0x55f94da1ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 222 ms_handle_reset con 0x55f94ca2d400 session 0x55f94d726000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 222 ms_handle_reset con 0x55f94dcda000 session 0x55f94ca3c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137011200 unmapped: 13000704 heap: 150011904 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:50.258342+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 222 heartbeat osd_stat(store_statfs(0x4f7b94000/0x0/0x4ffc00000, data 0x3583da8/0x36c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.608369827s of 10.931840897s, submitted: 194
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2149689 data_alloc: 251658240 data_used: 28078080
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174858240 unmapped: 12967936 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:51.258475+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145547264 unmapped: 42278912 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:52.258612+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145580032 unmapped: 42246144 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:53.258751+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137240576 unmapped: 50585600 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:54.258924+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 42156032 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:55.259156+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 223 heartbeat osd_stat(store_statfs(0x4ebb96000/0x0/0x4ffc00000, data 0xf583da8/0xf6c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3673479 data_alloc: 251658240 data_used: 28086272
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150978560 unmapped: 36847616 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:56.259290+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 142639104 unmapped: 45187072 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:57.259418+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 142704640 unmapped: 45121536 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:58.259639+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 223 heartbeat osd_stat(store_statfs(0x4e5393000/0x0/0x4ffc00000, data 0x15d85827/0x15ecb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 45096960 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:59.259806+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 142794752 unmapped: 45031424 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:00.259916+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4103687 data_alloc: 251658240 data_used: 28086272
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 142802944 unmapped: 45023232 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:01.260119+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.698339701s of 10.696339607s, submitted: 89
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 142860288 unmapped: 44965888 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:02.260245+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 224 heartbeat osd_stat(store_statfs(0x4e3f90000/0x0/0x4ffc00000, data 0x1718728a/0x172ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 142893056 unmapped: 44933120 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:03.260381+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138706944 unmapped: 49119232 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:04.260519+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 224 ms_handle_reset con 0x55f94dc9e800 session 0x55f94bd2f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147161088 unmapped: 40665088 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:05.260741+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4222516 data_alloc: 251658240 data_used: 28094464
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138772480 unmapped: 49053696 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:06.260941+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 224 heartbeat osd_stat(store_statfs(0x4e378f000/0x0/0x4ffc00000, data 0x179872ec/0x17acf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 143007744 unmapped: 44818432 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:07.261837+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 143081472 unmapped: 44744704 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:08.262006+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:09.262120+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 143163392 unmapped: 44662784 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 224 ms_handle_reset con 0x55f94bde1800 session 0x55f94ca40f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:10.262260+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 44359680 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 225 ms_handle_reset con 0x55f94ca2d400 session 0x55f94be3dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 225 ms_handle_reset con 0x55f94e190c00 session 0x55f94bd09a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4547388 data_alloc: 251658240 data_used: 28110848
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 225 ms_handle_reset con 0x55f94b118000 session 0x55f94b124d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:11.262393+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 47349760 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1.122196555s of 10.016182899s, submitted: 100
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 ms_handle_reset con 0x55f94e190800 session 0x55f94cd1af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 heartbeat osd_stat(store_statfs(0x4e0987000/0x0/0x4ffc00000, data 0x1a78b0db/0x1a8d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 ms_handle_reset con 0x55f94b118000 session 0x55f94cf4c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 ms_handle_reset con 0x55f94dcda000 session 0x55f94cd1b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 ms_handle_reset con 0x55f94dccac00 session 0x55f94ca3e5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 ms_handle_reset con 0x55f94bde1800 session 0x55f94d593860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:12.262545+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 47259648 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 ms_handle_reset con 0x55f94b118000 session 0x55f94cf4da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:13.262712+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 139902976 unmapped: 47923200 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 227 ms_handle_reset con 0x55f94dcda000 session 0x55f94b249c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:14.263087+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 227 ms_handle_reset con 0x55f94e190800 session 0x55f94adc7a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 139960320 unmapped: 47865856 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 227 ms_handle_reset con 0x55f94dccac00 session 0x55f94caf3680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:15.263289+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 139968512 unmapped: 47857664 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 228 ms_handle_reset con 0x55f94dc9e800 session 0x55f94b862d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f7980000/0x0/0x4ffc00000, data 0x37904e6/0x38dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2072252 data_alloc: 251658240 data_used: 28127232
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 228 ms_handle_reset con 0x55f94ca2d400 session 0x55f94ca3f2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:16.263428+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 139993088 unmapped: 47833088 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f7980000/0x0/0x4ffc00000, data 0x37904e6/0x38dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:17.263546+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 140050432 unmapped: 47775744 heap: 187826176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 ms_handle_reset con 0x55f94b118000 session 0x55f94bd7e3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 ms_handle_reset con 0x55f94dccac00 session 0x55f94d5f6b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 ms_handle_reset con 0x55f94e190800 session 0x55f94b025680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 ms_handle_reset con 0x55f94e190c00 session 0x55f94dc5bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 ms_handle_reset con 0x55f94b118000 session 0x55f94dc95680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 ms_handle_reset con 0x55f94dcda000 session 0x55f94dc95a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:18.263711+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 46563328 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:19.263887+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152567808 unmapped: 46522368 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 229 ms_handle_reset con 0x55f94dccac00 session 0x55f94b24d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 230 ms_handle_reset con 0x55f94ca2d400 session 0x55f94dc5af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:20.264056+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 46489600 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 230 ms_handle_reset con 0x55f94e190800 session 0x55f94bd092c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2388957 data_alloc: 251658240 data_used: 37761024
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 230 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:21.264215+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152248320 unmapped: 46841856 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.473032475s of 10.084156036s, submitted: 344
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 231 ms_handle_reset con 0x55f94e190400 session 0x55f94b249a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 231 heartbeat osd_stat(store_statfs(0x4f5177000/0x0/0x4ffc00000, data 0x5b859cd/0x5cd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:22.264356+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148119552 unmapped: 50970624 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 231 ms_handle_reset con 0x55f94b118000 session 0x55f94d726000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:23.264498+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148119552 unmapped: 50970624 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:24.264678+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148119552 unmapped: 50970624 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:25.264876+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148119552 unmapped: 50970624 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 231 ms_handle_reset con 0x55f94ca2d400 session 0x55f94b025680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2386607 data_alloc: 251658240 data_used: 37769216
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:26.265026+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148119552 unmapped: 50970624 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94dcda000 session 0x55f94b124000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94dccac00 session 0x55f94bd7e3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:27.265177+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147800064 unmapped: 51290112 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94ca2d400 session 0x55f94da7de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 heartbeat osd_stat(store_statfs(0x4f5175000/0x0/0x4ffc00000, data 0x5b874a4/0x5cd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:28.265390+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147832832 unmapped: 51257344 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 heartbeat osd_stat(store_statfs(0x4f5176000/0x0/0x4ffc00000, data 0x5b874a4/0x5cd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94dccac00 session 0x55f94be7b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:29.265556+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147865600 unmapped: 51224576 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94e190400 session 0x55f94d7110e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94dcda000 session 0x55f94cf4c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94e190000 session 0x55f94ca403c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:30.265698+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148332544 unmapped: 50757632 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 ms_handle_reset con 0x55f94b118000 session 0x55f94be3d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2388560 data_alloc: 251658240 data_used: 37777408
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:31.265845+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148348928 unmapped: 50741248 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94dcda000 session 0x55f94ca3e5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94e190400 session 0x55f94b249680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.822149992s of 10.006813049s, submitted: 73
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94f96f000 session 0x55f94b249c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f5172000/0x0/0x4ffc00000, data 0x5b88f07/0x5cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4,2])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96e800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94f96e800 session 0x55f94bd2ed20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:32.266048+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159809536 unmapped: 39280640 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94dccac00 session 0x55f94d5f7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94ca2d400 session 0x55f94b124d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94dcda000 session 0x55f94dc5ba40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94b118000 session 0x55f94be7a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:33.266185+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96e800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94f96e800 session 0x55f94ca3d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94e190400 session 0x55f94d5f63c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94b118000 session 0x55f94a3dfa40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 50200576 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94ca2d400 session 0x55f94dbcf860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:34.266382+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 50184192 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:35.266609+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 50184192 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94dcda000 session 0x55f94b23c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94dccac00 session 0x55f94dbce5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2484265 data_alloc: 251658240 data_used: 37777408
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:36.266742+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 50176000 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94ca2d400 session 0x55f94b23c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f45a3000/0x0/0x4ffc00000, data 0x6754feb/0x68ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:37.266902+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f45a4000/0x0/0x4ffc00000, data 0x6754fdb/0x68aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 50176000 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94dcda000 session 0x55f94b23d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:38.267039+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94b118000 session 0x55f94dbce960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94e190400 session 0x55f94b23bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148938752 unmapped: 50151424 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:39.267212+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149774336 unmapped: 49315840 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:40.267373+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149774336 unmapped: 49315840 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94f96f000 session 0x55f94caa25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 ms_handle_reset con 0x55f94b118000 session 0x55f94d6b50e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2554594 data_alloc: 251658240 data_used: 37785600
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:41.267492+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149962752 unmapped: 49127424 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f3e1f000/0x0/0x4ffc00000, data 0x6ed6fbc/0x702e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:42.267638+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165773312 unmapped: 33316864 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:43.267826+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 33308672 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f3e1f000/0x0/0x4ffc00000, data 0x6ed6fbc/0x702e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:44.268006+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165814272 unmapped: 33275904 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 heartbeat osd_stat(store_statfs(0x4f3e1f000/0x0/0x4ffc00000, data 0x6ed6fbc/0x702e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.394297600s of 13.305886269s, submitted: 69
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:45.268190+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165814272 unmapped: 33275904 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2684401 data_alloc: 268435456 data_used: 55496704
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:46.268316+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165814272 unmapped: 33275904 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:47.268474+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165830656 unmapped: 33259520 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:48.268631+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 33251328 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:49.268799+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 33251328 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:50.268977+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 33251328 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f3e1c000/0x0/0x4ffc00000, data 0x6ed8b39/0x7031000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2744901 data_alloc: 268435456 data_used: 55504896
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:51.269116+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 171671552 unmapped: 27418624 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:52.269443+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 171671552 unmapped: 27418624 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 ms_handle_reset con 0x55f94dcda000 session 0x55f94dc5ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:53.269582+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165847040 unmapped: 33243136 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 ms_handle_reset con 0x55f94ca2d400 session 0x55f94d5f7a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f388d000/0x0/0x4ffc00000, data 0x7468b39/0x75c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:54.269763+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170573824 unmapped: 28516352 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 ms_handle_reset con 0x55f94e190400 session 0x55f94da71c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:55.269990+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 171081728 unmapped: 28008448 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.842226028s of 10.801381111s, submitted: 26
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 ms_handle_reset con 0x55f94fccc000 session 0x55f94b24dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:56.270306+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2788096 data_alloc: 268435456 data_used: 57835520
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 168738816 unmapped: 30351360 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:57.279044+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 168738816 unmapped: 30351360 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 ms_handle_reset con 0x55f94b118000 session 0x55f94cf4cd20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:58.279177+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 172310528 unmapped: 26779648 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:59.279318+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f33ab000/0x0/0x4ffc00000, data 0x7949b49/0x7aa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174546944 unmapped: 24543232 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f33ab000/0x0/0x4ffc00000, data 0x7949b49/0x7aa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:00.279479+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174555136 unmapped: 24535040 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:01.279612+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2830792 data_alloc: 285212672 data_used: 63668224
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174555136 unmapped: 24535040 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:02.279733+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174563328 unmapped: 24526848 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 ms_handle_reset con 0x55f94ca2d400 session 0x55f94cf4c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:03.279854+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174899200 unmapped: 24190976 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:04.280035+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 24076288 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f333e000/0x0/0x4ffc00000, data 0x79b7b39/0x7b10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 ms_handle_reset con 0x55f94e190400 session 0x55f94ca3cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:05.280385+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 24076288 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:06.280551+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2839302 data_alloc: 285212672 data_used: 63668224
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 24076288 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:07.280688+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175013888 unmapped: 24076288 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:08.280832+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175046656 unmapped: 24043520 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:09.281003+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175046656 unmapped: 24043520 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:10.281198+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f333e000/0x0/0x4ffc00000, data 0x79b7b39/0x7b10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175046656 unmapped: 24043520 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:11.281360+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2839302 data_alloc: 285212672 data_used: 63668224
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175046656 unmapped: 24043520 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.438794136s of 16.102273941s, submitted: 18
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:12.281507+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 23805952 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:13.281791+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175284224 unmapped: 23805952 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:14.281943+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177750016 unmapped: 21340160 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f3209000/0x0/0x4ffc00000, data 0x7aea6b6/0x7c44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94fccc400 session 0x55f94be3a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94fccc000 session 0x55f94d579680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:15.282159+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178413568 unmapped: 20676608 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94b118000 session 0x55f94cac4780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:16.282297+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94ca2d400 session 0x55f94bd33860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2912995 data_alloc: 285212672 data_used: 65155072
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177741824 unmapped: 21348352 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:17.282470+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f3b80000/0x0/0x4ffc00000, data 0x81946b6/0x82ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177774592 unmapped: 21315584 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:18.282607+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177774592 unmapped: 21315584 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94e190400 session 0x55f94ca354a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94fccc000 session 0x55f94da734a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:19.282773+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177774592 unmapped: 21315584 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:20.282926+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f3b7f000/0x0/0x4ffc00000, data 0x81946c6/0x82ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 21282816 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:21.283066+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2914663 data_alloc: 285212672 data_used: 65155072
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 21282816 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:22.283219+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 21282816 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:23.283404+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 21282816 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:24.283651+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f3b7f000/0x0/0x4ffc00000, data 0x81946c6/0x82ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 21250048 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:25.283908+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 21250048 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:26.284107+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2914823 data_alloc: 285212672 data_used: 65159168
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 21250048 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:27.284260+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 21250048 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:28.284431+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 21250048 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:29.284587+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177840128 unmapped: 21250048 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:30.284789+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f3b7f000/0x0/0x4ffc00000, data 0x81946c6/0x82ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 20652032 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:31.285036+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 2926023 data_alloc: 285212672 data_used: 67108864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 20652032 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:32.285237+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180477952 unmapped: 18612224 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.329763412s of 21.018806458s, submitted: 63
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:33.285391+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 17596416 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:34.285604+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f397d000/0x0/0x4ffc00000, data 0x83958c6/0x84f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 17596416 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:35.285799+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f397b000/0x0/0x4ffc00000, data 0x83978c6/0x84f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 17596416 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:36.285944+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 2947147 data_alloc: 285212672 data_used: 69210112
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 17596416 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:37.286107+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 17596416 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94fccc400 session 0x55f94ca34f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:38.286274+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 17596416 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f397b000/0x0/0x4ffc00000, data 0x83978c6/0x84f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:39.286499+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181493760 unmapped: 17596416 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94f96e400 session 0x55f94b23b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 ms_handle_reset con 0x55f94f96e000 session 0x55f94d6b4f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:40.286646+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181526528 unmapped: 17563648 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
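[annotation] handle_osd_map shows the OSD consuming newly published map epochs one at a time: it holds epoch 235, receives [236,236], and the source (the monitor) has the full [1,236] range. The same pattern repeats through epoch 247 over the rest of this excerpt, each delivery typically bracketed by a monclient _renew_subs / _send_mon_message subscription renewal and by ms_handle_reset lines as peer sessions are rebuilt against the new map. A small sketch for tracking epoch lag from such lines; the regex and helper are assumptions made for illustration:

  import re

  pattern = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], "
                       r"i have (\d+), src has \[(\d+),(\d+)\]")

  def epoch_lag(line):
      """Return (newest_published, locally_held, lag) for a handle_osd_map line."""
      m = pattern.search(line)
      if not m:
          return None
      first, last, have, src_lo, src_hi = map(int, m.groups())
      return src_hi, have, src_hi - have

  print(epoch_lag("osd.1 235 handle_osd_map epochs [236,236], "
                  "i have 235, src has [1,236]"))
  # -> (236, 235, 1): one epoch behind, about to catch up.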
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 236 ms_handle_reset con 0x55f94ca2d400 session 0x55f94be3cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 236 heartbeat osd_stat(store_statfs(0x4f397b000/0x0/0x4ffc00000, data 0x83978c6/0x84f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 236 ms_handle_reset con 0x55f94e190400 session 0x55f94d6b4960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:41.286795+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 2902308 data_alloc: 285212672 data_used: 68767744
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181641216 unmapped: 17448960 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:42.286999+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181641216 unmapped: 17448960 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 237 ms_handle_reset con 0x55f94fccc000 session 0x55f94ca3fc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:43.287135+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 17375232 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:44.287286+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 237 ms_handle_reset con 0x55f94e190400 session 0x55f94ccd8960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 17375232 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:45.287512+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.740693092s of 12.612131119s, submitted: 75
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 238 ms_handle_reset con 0x55f94ca2d400 session 0x55f94bd2ef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 238 ms_handle_reset con 0x55f94f96e000 session 0x55f94caf2960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 17309696 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94f96e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 238 ms_handle_reset con 0x55f94f96e400 session 0x55f94d6e01e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:46.287707+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 2893988 data_alloc: 285212672 data_used: 68763648
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181813248 unmapped: 17276928 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 238 heartbeat osd_stat(store_statfs(0x4f40c7000/0x0/0x4ffc00000, data 0x7c498fa/0x7da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d827400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:47.287925+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 239 ms_handle_reset con 0x55f94d828000 session 0x55f94b942d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181829632 unmapped: 17260544 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 239 ms_handle_reset con 0x55f94d827400 session 0x55f94ca3e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:48.288098+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82ec00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 239 ms_handle_reset con 0x55f94d82ec00 session 0x55f94be3d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f5000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 239 ms_handle_reset con 0x55f94d5f5000 session 0x55f94bd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc82400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 33161216 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 240 ms_handle_reset con 0x55f94d5f5800 session 0x55f94d6b43c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:49.288311+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 33161216 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 240 heartbeat osd_stat(store_statfs(0x4f64b4000/0x0/0x4ffc00000, data 0x585d20c/0x59ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:50.288488+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 33161216 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 240 ms_handle_reset con 0x55f94dc82400 session 0x55f94cf4d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:51.288637+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2478450 data_alloc: 251658240 data_used: 44716032
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165920768 unmapped: 33169408 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f5000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:52.288790+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 241 ms_handle_reset con 0x55f94dc83c00 session 0x55f94be3a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 241 ms_handle_reset con 0x55f94d5f5000 session 0x55f94d7d8f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f64b0000/0x0/0x4ffc00000, data 0x585ee19/0x59bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 33120256 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d827400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:53.289030+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f64b0000/0x0/0x4ffc00000, data 0x585ee19/0x59bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165969920 unmapped: 33120256 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:54.289345+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94d5f5800 session 0x55f94dbcfc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 144908288 unmapped: 54181888 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f73bf000/0x0/0x4ffc00000, data 0x4125e19/0x4284000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:55.289550+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f73bf000/0x0/0x4ffc00000, data 0x4125e19/0x4284000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.260531425s of 10.017105103s, submitted: 140
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94d827400 session 0x55f94bd32780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82ec00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94d82ec00 session 0x55f94b0245a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 54140928 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:56.289751+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94b118000 session 0x55f94dc5ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2184438 data_alloc: 234881024 data_used: 20357120
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 144949248 unmapped: 54140928 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94dcda000 session 0x55f94dc5b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:57.289921+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f5000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94d5f5000 session 0x55f94dbcf0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 139075584 unmapped: 60014592 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94d5f5800 session 0x55f94bd1bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d827400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 heartbeat osd_stat(store_statfs(0x4f7bed000/0x0/0x4ffc00000, data 0x4121a16/0x4281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:58.290104+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 ms_handle_reset con 0x55f94d827400 session 0x55f94bd7e960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 62398464 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:59.290375+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 62398464 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:00.290548+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 62398464 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:01.290776+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2024136 data_alloc: 234881024 data_used: 11497472
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 62398464 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:02.290938+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8b56000/0x0/0x4ffc00000, data 0x31b825f/0x3316000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
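[annotation] Comparing this heartbeat with one earlier in the excerpt, data stored has fallen from 0x83978c6 to 0x31b825f while the available figure has risen correspondingly; together with the tuner's "mapped" heap shrinking from ~181 MiB to ~136 MiB over the same stretch, this is consistent with objects being deleted and space reclaimed while the map epochs churn (an inference, not something the log states). The delta, with hex values copied from the log:

  before, after = 0x83978c6, 0x31b825f
  MiB = 1 << 20
  print(f"data stored: {before/MiB:.1f} MiB -> {after/MiB:.1f} MiB "
        f"(freed {(before - after)/MiB:.1f} MiB)")

which works out to roughly 131.6 MiB -> 49.7 MiB, about 82 MiB freed between the two samples.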
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136691712 unmapped: 62398464 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8b56000/0x0/0x4ffc00000, data 0x31b825f/0x3316000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94d82cc00 session 0x55f94d711e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94d82c400 session 0x55f94d710960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f950407000 session 0x55f94d711c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:03.291144+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94dc7e400 session 0x55f94d711860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94dc7e000 session 0x55f94d710000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 136724480 unmapped: 62365696 heap: 199090176 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:04.291303+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94dc7f400 session 0x55f94ca3ed20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94d82cc00 session 0x55f94dc5a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94dc7e000 session 0x55f94b24c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94dc7e400 session 0x55f94b24c3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f94d82c400 session 0x55f94b2494a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 ms_handle_reset con 0x55f950407000 session 0x55f94d6b3860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 135413760 unmapped: 67878912 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:05.291513+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.273336411s of 10.169609070s, submitted: 77
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8e2b000/0x0/0x4ffc00000, data 0x2ee426f/0x3043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 135413760 unmapped: 67878912 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 heartbeat osd_stat(store_statfs(0x4f8e2b000/0x0/0x4ffc00000, data 0x2ee426f/0x3043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:06.291742+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2017855 data_alloc: 234881024 data_used: 11497472
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 244 ms_handle_reset con 0x55f94d82cc00 session 0x55f94b2483c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 67862528 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:07.291940+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 67862528 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 244 ms_handle_reset con 0x55f94dc7e000 session 0x55f94cf4dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:08.292135+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 244 ms_handle_reset con 0x55f94dc7f400 session 0x55f94d727c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 135307264 unmapped: 67985408 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 245 ms_handle_reset con 0x55f94dc7e400 session 0x55f94d6b4960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:09.292277+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 135315456 unmapped: 67977216 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:10.292410+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8e23000/0x0/0x4ffc00000, data 0x2ee79e0/0x304a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 65781760 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:11.292539+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2092542 data_alloc: 234881024 data_used: 20738048
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 65781760 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8e24000/0x0/0x4ffc00000, data 0x2ee79e0/0x304a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:12.292717+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 246 ms_handle_reset con 0x55f94dc7e000 session 0x55f94cd1ba40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:13.292883+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:14.293040+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:15.293270+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:16.293425+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2096493 data_alloc: 234881024 data_used: 20746240
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:17.293525+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 246 heartbeat osd_stat(store_statfs(0x4f8e20000/0x0/0x4ffc00000, data 0x2ee9579/0x304d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:18.293706+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 246 heartbeat osd_stat(store_statfs(0x4f8e20000/0x0/0x4ffc00000, data 0x2ee9579/0x304d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:19.294013+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:20.294165+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.772642136s of 15.022816658s, submitted: 41
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 137519104 unmapped: 65773568 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94dc7f400 session 0x55f94caa2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f950407000 session 0x55f94b024b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb5800 session 0x55f94b862b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:21.294318+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f8e20000/0x0/0x4ffc00000, data 0x2ee9579/0x304d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2134347 data_alloc: 234881024 data_used: 20762624
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 138272768 unmapped: 65019904 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:22.294487+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb4800 session 0x55f94b23b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb4800 session 0x55f94b23dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 54067200 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:23.294649+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 54067200 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:24.294793+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 53846016 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:25.294940+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 53846016 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f7f1f000/0x0/0x4ffc00000, data 0x3de103e/0x3f47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:26.295122+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2234005 data_alloc: 234881024 data_used: 21946368
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 53846016 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb5800 session 0x55f94b23af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:27.295241+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 53846016 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94dc7e000 session 0x55f94dc93680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:28.295366+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 53846016 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94dc7f400 session 0x55f94bd33860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:29.295531+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f950407000 session 0x55f94cac4780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 55771136 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f7efc000/0x0/0x4ffc00000, data 0x3e0b04e/0x3f72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:30.296110+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 55771136 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:31.297098+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2229337 data_alloc: 234881024 data_used: 21946368
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 55771136 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:32.297232+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 55771136 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f7ef9000/0x0/0x4ffc00000, data 0x3e0e04e/0x3f75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:33.298051+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 55771136 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:34.298257+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 55771136 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:35.299293+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f7ef9000/0x0/0x4ffc00000, data 0x3e0e04e/0x3f75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 55771136 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:36.299940+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb5800 session 0x55f94d6b5860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2227417 data_alloc: 234881024 data_used: 23388160
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147832832 unmapped: 55459840 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94dc7e000 session 0x55f94b249860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:37.300508+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 54403072 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f7ef9000/0x0/0x4ffc00000, data 0x3e0e04e/0x3f75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:38.300696+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.686836243s of 18.126831055s, submitted: 167
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 54206464 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:39.300877+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94dc7f400 session 0x55f94bd1b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 54206464 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:40.301301+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f7ef8000/0x0/0x4ffc00000, data 0x3e0e05e/0x3f76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 54206464 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:41.301583+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2255583 data_alloc: 234881024 data_used: 26865664
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149094400 unmapped: 54198272 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:42.301926+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94ca3c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149094400 unmapped: 54198272 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94be7a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:43.302107+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f7ef8000/0x0/0x4ffc00000, data 0x3e0e05e/0x3f76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94b124d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb5800 session 0x55f94d7112c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150167552 unmapped: 53125120 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:44.302250+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150167552 unmapped: 53125120 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:45.302424+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150175744 unmapped: 53116928 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:46.302646+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 ms_handle_reset con 0x55f94ceb4400 session 0x55f94be3a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2344120 data_alloc: 234881024 data_used: 26865664
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f73d8000/0x0/0x4ffc00000, data 0x492e05e/0x4a96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 52060160 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:47.302909+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 heartbeat osd_stat(store_statfs(0x4f73d8000/0x0/0x4ffc00000, data 0x492e05e/0x4a96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d743c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151552000 unmapped: 51740672 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 248 ms_handle_reset con 0x55f94d742400 session 0x55f94ca3e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:48.303090+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.186973095s of 10.014674187s, submitted: 82
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155901952 unmapped: 47390720 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:49.303240+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 248 ms_handle_reset con 0x55f94d742800 session 0x55f94d710f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 50946048 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:50.303448+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 50946048 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:51.303577+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2492669 data_alloc: 234881024 data_used: 26906624
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94d743c00 session 0x55f94d578960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153346048 unmapped: 49946624 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f5142000/0x0/0x4ffc00000, data 0x5a20c4d/0x5b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94ceb4400 session 0x55f94d579a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:52.303890+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94dc7f400 session 0x55f94b24c3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 50798592 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94dc7e000 session 0x55f94ca3f2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:53.304141+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 50798592 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:54.304417+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f50c0000/0x0/0x4ffc00000, data 0x5a9fe72/0x5c0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 50798592 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:55.304668+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94d82cc00 session 0x55f94bd1b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94dbce1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152559616 unmapped: 50733056 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:56.304800+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94ceb4400 session 0x55f94dbce5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d743c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f5c91000/0x0/0x4ffc00000, data 0x4621e3f/0x478e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94dc7e000 session 0x55f94d5f6000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2280584 data_alloc: 234881024 data_used: 16994304
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145948672 unmapped: 57344000 heap: 203292672 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 ms_handle_reset con 0x55f94dc7f400 session 0x55f94bd334a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:57.304941+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94ceb5800 session 0x55f94cd1a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94ceb5800 session 0x55f94be3a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94ceb4400 session 0x55f94dbcef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94d743c00 session 0x55f94a3dfa40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:58.305103+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:59.305281+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:00.305430+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:01.305604+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d6b23c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402954 data_alloc: 234881024 data_used: 17010688
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.867624283s of 13.145202637s, submitted: 146
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x5589a1e/0x56f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94dc7e000 session 0x55f94be3b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:02.305772+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:03.305931+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:04.306145+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:05.306356+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:06.306568+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402954 data_alloc: 234881024 data_used: 17010688
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:07.306731+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x5589a1e/0x56f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:08.306892+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:09.307052+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:10.307237+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:11.307409+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402954 data_alloc: 234881024 data_used: 17010688
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:12.307596+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.770396233s of 10.775433540s, submitted: 1
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94ceb4400 session 0x55f94be3b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145457152 unmapped: 65716224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:13.308263+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 heartbeat osd_stat(store_statfs(0x4f55d5000/0x0/0x4ffc00000, data 0x5589a41/0x56f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 145465344 unmapped: 65708032 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:14.308378+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d743c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 ms_handle_reset con 0x55f94d743c00 session 0x55f94caf3680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152633344 unmapped: 58540032 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:15.308547+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 156327936 unmapped: 54845440 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:16.308664+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 251 ms_handle_reset con 0x55f94dc7f400 session 0x55f94d6b5e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2523751 data_alloc: 251658240 data_used: 33173504
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155525120 unmapped: 55648256 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:17.308787+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f55d3000/0x0/0x4ffc00000, data 0x5589ab2/0x56fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 251 ms_handle_reset con 0x55f94d828c00 session 0x55f94dc5af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 251 ms_handle_reset con 0x55f94d828000 session 0x55f94dc5b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 251 ms_handle_reset con 0x55f94ceb4400 session 0x55f94d6b4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d743c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 251 ms_handle_reset con 0x55f94d743c00 session 0x55f94be7bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155525120 unmapped: 55648256 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:18.308924+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 ms_handle_reset con 0x55f94d742400 session 0x55f94d595c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 ms_handle_reset con 0x55f94d828c00 session 0x55f94ca3d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155533312 unmapped: 55640064 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 ms_handle_reset con 0x55f94dc7f400 session 0x55f94b024b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 ms_handle_reset con 0x55f94dc7f400 session 0x55f94b862780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:19.309126+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 heartbeat osd_stat(store_statfs(0x4f55ce000/0x0/0x4ffc00000, data 0x558b63f/0x56ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 ms_handle_reset con 0x55f94ceb4400 session 0x55f94d5781e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 ms_handle_reset con 0x55f94d742400 session 0x55f94cac5c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d743c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 ms_handle_reset con 0x55f94d743c00 session 0x55f94cac4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 55582720 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:20.309245+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 253 ms_handle_reset con 0x55f94d828c00 session 0x55f94be7b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155607040 unmapped: 55566336 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:21.309354+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 253 ms_handle_reset con 0x55f94d828c00 session 0x55f94b025680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2556832 data_alloc: 251658240 data_used: 33181696
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 254 ms_handle_reset con 0x55f94ceb4400 session 0x55f94b23ba40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 156655616 unmapped: 54517760 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:22.309477+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 254 ms_handle_reset con 0x55f94ca2c800 session 0x55f94d6b23c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.640677452s of 10.277044296s, submitted: 84
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 ms_handle_reset con 0x55f94ca2c400 session 0x55f94bd334a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157622272 unmapped: 53551104 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:23.309595+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157974528 unmapped: 53198848 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:24.309734+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 ms_handle_reset con 0x55f94ca2d400 session 0x55f94be3c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 ms_handle_reset con 0x55f94ceb5800 session 0x55f94be7af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f52c2000/0x0/0x4ffc00000, data 0x589254f/0x5a0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 ms_handle_reset con 0x55f950407000 session 0x55f94be3a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 ms_handle_reset con 0x55f94ceb4800 session 0x55f94d727680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 156663808 unmapped: 54509568 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:25.310464+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 ms_handle_reset con 0x55f94ca2c400 session 0x55f94cf4c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159301632 unmapped: 51871744 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:26.310614+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2588429 data_alloc: 251658240 data_used: 30535680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159301632 unmapped: 51871744 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:27.310808+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f54bd000/0x0/0x4ffc00000, data 0x5e354dd/0x5810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159301632 unmapped: 51871744 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:28.311066+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 ms_handle_reset con 0x55f94ca2c800 session 0x55f94ca3c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159039488 unmapped: 52133888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:29.311205+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159039488 unmapped: 52133888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:30.311376+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159039488 unmapped: 52133888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:31.311562+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f54d9000/0x0/0x4ffc00000, data 0x5e18f6c/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2592601 data_alloc: 251658240 data_used: 30617600
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159039488 unmapped: 52133888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:32.311707+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.206484795s of 10.033633232s, submitted: 178
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 257 ms_handle_reset con 0x55f94ca2d400 session 0x55f94be3b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 52117504 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:33.311844+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 52117504 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:34.311995+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 52117504 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:35.312151+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 52117504 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:36.312275+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94be3be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 ms_handle_reset con 0x55f94ca2c400 session 0x55f94cd1ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f54d2000/0x0/0x4ffc00000, data 0x5e1c568/0x57fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 ms_handle_reset con 0x55f94ceb4800 session 0x55f94ca3dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2599028 data_alloc: 251658240 data_used: 30621696
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f54d2000/0x0/0x4ffc00000, data 0x5e1c5ca/0x57fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 52084736 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:37.312409+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f54d2000/0x0/0x4ffc00000, data 0x5e1c5a7/0x57fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 52084736 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:38.312554+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 52076544 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:39.312734+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f54ce000/0x0/0x4ffc00000, data 0x5e1e178/0x57fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 258 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 260 ms_handle_reset con 0x55f94ceb5800 session 0x55f94a3de960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 51003392 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:40.312895+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 260 heartbeat osd_stat(store_statfs(0x4f54cc000/0x0/0x4ffc00000, data 0x5e1fd11/0x5800000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 260 ms_handle_reset con 0x55f94ca2c800 session 0x55f94bd32b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 160202752 unmapped: 50970624 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets getting new tickets!
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:41.313098+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _finish_auth 0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:41.314004+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 260 ms_handle_reset con 0x55f94ca2d400 session 0x55f94dbce5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 260 ms_handle_reset con 0x55f94ca2c400 session 0x55f94dc5a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 260 ms_handle_reset con 0x55f94ceb4400 session 0x55f94dc5a960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2607667 data_alloc: 251658240 data_used: 30629888
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 160210944 unmapped: 50962432 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:42.313294+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.793386459s of 10.020104408s, submitted: 100
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 261 ms_handle_reset con 0x55f94ca2d800 session 0x55f94bd7e960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 261 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d6b4960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 160235520 unmapped: 50937856 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:43.313442+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f94d828c00 session 0x55f94d726960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f94ceb4800 session 0x55f94be7ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f7611000/0x0/0x4ffc00000, data 0x312dbc3/0x32ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f94ca2c400 session 0x55f94ca3fc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f94ca2c800 session 0x55f94be3da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f950407000 session 0x55f94dbcf680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f94ca2d400 session 0x55f94d5f6f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d6b5860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149151744 unmapped: 62021632 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:44.313601+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:45.313745+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149151744 unmapped: 62021632 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 ms_handle_reset con 0x55f94ca2c800 session 0x55f94dbce1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f8133000/0x0/0x4ffc00000, data 0x260d496/0x2788000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:46.313906+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149168128 unmapped: 62005248 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2078968 data_alloc: 234881024 data_used: 13037568
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:47.314030+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 149168128 unmapped: 62005248 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 263 ms_handle_reset con 0x55f94ceb4800 session 0x55f94dc5be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 263 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94bd094a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: mgrc ms_handle_reset ms_handle_reset con 0x55f94d7cb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/878361048
Nov 29 08:16:49 compute-0 ceph-osd[89840]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/878361048,v1:192.168.122.100:6801/878361048]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: get_auth_request con 0x55f94ceb4800 auth_method 0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: mgrc handle_mgr_configure stats_period=5
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:48.314157+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148545536 unmapped: 62627840 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 263 ms_handle_reset con 0x55f94ca2c400 session 0x55f94caa2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:49.314287+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 263 ms_handle_reset con 0x55f94ca2c800 session 0x55f94d5f7860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148561920 unmapped: 62611456 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:50.314436+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148561920 unmapped: 62611456 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 263 heartbeat osd_stat(store_statfs(0x4f8134000/0x0/0x4ffc00000, data 0x260edc4/0x2788000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 264 ms_handle_reset con 0x55f94cf1a000 session 0x55f94d5954a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 264 ms_handle_reset con 0x55f94b960400 session 0x55f94da7cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 264 ms_handle_reset con 0x55f94bde0c00 session 0x55f94af9af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b960400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:51.314569+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148561920 unmapped: 62611456 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 264 ms_handle_reset con 0x55f94d828c00 session 0x55f94d5f7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2082852 data_alloc: 234881024 data_used: 13041664
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:52.314730+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148561920 unmapped: 62611456 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 264 ms_handle_reset con 0x55f94ceb4400 session 0x55f94b24dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.100901604s of 10.025535583s, submitted: 195
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 264 ms_handle_reset con 0x55f94ca2c400 session 0x55f94b025c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:53.315596+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148578304 unmapped: 62595072 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 265 ms_handle_reset con 0x55f94ca2c800 session 0x55f94a3de960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:54.315778+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 62545920 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 265 heartbeat osd_stat(store_statfs(0x4f8130000/0x0/0x4ffc00000, data 0x2612475/0x278d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:55.316026+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 62545920 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:56.316215+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148627456 unmapped: 62545920 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94be3a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 ms_handle_reset con 0x55f94ceb4400 session 0x55f94ca40f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2091001 data_alloc: 234881024 data_used: 13053952
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:57.316383+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 heartbeat osd_stat(store_statfs(0x4f812d000/0x0/0x4ffc00000, data 0x2613f10/0x2790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:58.316573+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:59.316717+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 heartbeat osd_stat(store_statfs(0x4f812d000/0x0/0x4ffc00000, data 0x2613f10/0x2790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 ms_handle_reset con 0x55f94d828c00 session 0x55f94dbcf0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:00.316890+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:01.317062+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2090367 data_alloc: 234881024 data_used: 13053952
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:02.317269+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:03.317468+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.210656166s of 10.295023918s, submitted: 44
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 heartbeat osd_stat(store_statfs(0x4f812e000/0x0/0x4ffc00000, data 0x2613f10/0x2790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:04.317665+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:05.317851+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:06.318632+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2091618 data_alloc: 234881024 data_used: 13053952
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 heartbeat osd_stat(store_statfs(0x4f812e000/0x0/0x4ffc00000, data 0x2613f10/0x2790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:07.319732+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148619264 unmapped: 62554112 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 ms_handle_reset con 0x55f94ca2c800 session 0x55f94bd334a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:08.320094+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148635648 unmapped: 62537728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 ms_handle_reset con 0x55f94ceb4400 session 0x55f94b23ba40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 ms_handle_reset con 0x55f94ca2c400 session 0x55f94b88a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:09.321090+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 62504960 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 ms_handle_reset con 0x55f94d828c00 session 0x55f94cd1b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 267 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94be7b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 267 heartbeat osd_stat(store_statfs(0x4f812c000/0x0/0x4ffc00000, data 0x2613fc2/0x2792000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:10.321342+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148684800 unmapped: 62488576 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 267 heartbeat osd_stat(store_statfs(0x4f8129000/0x0/0x4ffc00000, data 0x2615add/0x2794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:11.321582+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 62480384 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 268 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d6b4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 268 ms_handle_reset con 0x55f94ef8b000 session 0x55f94b23b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2101531 data_alloc: 234881024 data_used: 13074432
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:12.322375+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 268 ms_handle_reset con 0x55f94ceb4400 session 0x55f94bd7e3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148701184 unmapped: 62472192 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 269 ms_handle_reset con 0x55f94ca2c400 session 0x55f94dc5ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 269 ms_handle_reset con 0x55f94d828c00 session 0x55f94b863a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:13.322871+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148742144 unmapped: 62431232 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.586554527s of 10.101684570s, submitted: 92
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 269 ms_handle_reset con 0x55f94ca2c400 session 0x55f94dbcf2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 ms_handle_reset con 0x55f94b118000 session 0x55f94b24da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:14.323623+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148733952 unmapped: 62439424 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94b24de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 ms_handle_reset con 0x55f94d828c00 session 0x55f94d727860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 ms_handle_reset con 0x55f94ca2c800 session 0x55f94bd1bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 ms_handle_reset con 0x55f94ceb4400 session 0x55f94d726780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 ms_handle_reset con 0x55f94b118000 session 0x55f94bd1b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:15.324146+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f811d000/0x0/0x4ffc00000, data 0x261b37b/0x27a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148742144 unmapped: 62431232 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d6b4000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94ca3e5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d828c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 ms_handle_reset con 0x55f94d828c00 session 0x55f94dbcf4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:16.324303+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 ms_handle_reset con 0x55f94b118000 session 0x55f94b862d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148742144 unmapped: 62431232 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 heartbeat osd_stat(store_statfs(0x4f811d000/0x0/0x4ffc00000, data 0x261c953/0x279e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2110725 data_alloc: 234881024 data_used: 13078528
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:17.324445+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148742144 unmapped: 62431232 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d6b4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:18.324675+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148742144 unmapped: 62431232 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:19.325028+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94be3a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148742144 unmapped: 62431232 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 heartbeat osd_stat(store_statfs(0x4f811d000/0x0/0x4ffc00000, data 0x261c9d5/0x27a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 272 ms_handle_reset con 0x55f94d5fb000 session 0x55f94dbce000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:20.325189+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148750336 unmapped: 62423040 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 273 ms_handle_reset con 0x55f94ef8b000 session 0x55f94be3ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:21.325497+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2121428 data_alloc: 234881024 data_used: 13099008
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:22.325702+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 273 ms_handle_reset con 0x55f94ceb4400 session 0x55f94cd1b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:23.326044+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:24.326362+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 273 heartbeat osd_stat(store_statfs(0x4f8115000/0x0/0x4ffc00000, data 0x262015b/0x27a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:25.326674+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 273 heartbeat osd_stat(store_statfs(0x4f8115000/0x0/0x4ffc00000, data 0x262015b/0x27a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:26.327145+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.704334259s of 13.222218513s, submitted: 127
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2125074 data_alloc: 234881024 data_used: 13099008
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:27.327344+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:28.327524+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:29.327674+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:30.328049+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:31.328306+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f8114000/0x0/0x4ffc00000, data 0x2621bbe/0x27aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f8114000/0x0/0x4ffc00000, data 0x2621bbe/0x27aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:32.328535+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2124396 data_alloc: 234881024 data_used: 13099008
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 274 ms_handle_reset con 0x55f94ca2c400 session 0x55f94be3b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:33.328767+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148766720 unmapped: 62406656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:34.329030+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:35.329304+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 274 heartbeat osd_stat(store_statfs(0x4f8114000/0x0/0x4ffc00000, data 0x2621bbe/0x27aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:36.329528+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1.573469639s of 10.171764374s, submitted: 17
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:37.329683+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2124642 data_alloc: 234881024 data_used: 13099008
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:38.329861+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f8110000/0x0/0x4ffc00000, data 0x262378f/0x27ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:39.332900+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:40.333108+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:41.333827+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 62398464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f810c000/0x0/0x4ffc00000, data 0x2625328/0x27b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:42.335042+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2130838 data_alloc: 234881024 data_used: 13107200
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f810c000/0x0/0x4ffc00000, data 0x2625328/0x27b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148783104 unmapped: 62390272 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:43.335275+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 276 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d593860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f810e000/0x0/0x4ffc00000, data 0x2625328/0x27b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148791296 unmapped: 62382080 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:44.335736+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 62373888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:45.335939+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f810f000/0x0/0x4ffc00000, data 0x2625318/0x27af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 62373888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:46.336098+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 276 ms_handle_reset con 0x55f94b118000 session 0x55f94b88a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 62373888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.589335203s of 10.295221329s, submitted: 30
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:47.336420+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2136826 data_alloc: 234881024 data_used: 13107200
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 277 ms_handle_reset con 0x55f94d824800 session 0x55f94a3de960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 62373888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:48.336611+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148807680 unmapped: 62365696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 277 ms_handle_reset con 0x55f94d5fb000 session 0x55f94da703c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:49.336789+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148807680 unmapped: 62365696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f810a000/0x0/0x4ffc00000, data 0x2626ec1/0x27b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:50.336935+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148799488 unmapped: 62373888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 279 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94a3dfe00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:51.337071+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 279 ms_handle_reset con 0x55f94b118000 session 0x55f94b24dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147931136 unmapped: 63242240 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:52.337225+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 ms_handle_reset con 0x55f94d824800 session 0x55f94caa2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fe000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 ms_handle_reset con 0x55f94d5fe000 session 0x55f94bd094a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 ms_handle_reset con 0x55f94b118000 session 0x55f94dc5be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2152435 data_alloc: 234881024 data_used: 13115392
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94dbce1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 ms_handle_reset con 0x55f94d5fb000 session 0x55f94d6b5860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147947520 unmapped: 63225856 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 ms_handle_reset con 0x55f94ceb4400 session 0x55f94d6b2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:53.337403+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 147947520 unmapped: 63225856 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f9c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:54.337720+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0x262c103/0x27c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,15])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 158244864 unmapped: 52928512 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 heartbeat osd_stat(store_statfs(0x4f7a37000/0x0/0x4ffc00000, data 0x2cf3103/0x2e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,15])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:55.337990+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 158244864 unmapped: 52928512 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:56.338215+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148299776 unmapped: 62873600 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.543677092s of 10.007971764s, submitted: 59
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94d5f9c00 session 0x55f94d6b2960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94d824800 session 0x55f94dbcf680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94b118000 session 0x55f94d5794a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d579680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f9c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:57.338354+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2236824 data_alloc: 234881024 data_used: 13135872
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 heartbeat osd_stat(store_statfs(0x4f7710000/0x0/0x4ffc00000, data 0x3017c80/0x31ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94e191400 session 0x55f94ca412c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148307968 unmapped: 62865408 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94d5f9c00 session 0x55f94d578960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94b118000 session 0x55f94d579a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 ms_handle_reset con 0x55f94ca2c400 session 0x55f94adc7a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:58.338502+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148307968 unmapped: 62865408 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 282 ms_handle_reset con 0x55f94ceb4400 session 0x55f94b24d2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 282 heartbeat osd_stat(store_statfs(0x4f770d000/0x0/0x4ffc00000, data 0x3019851/0x31b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:59.338653+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148307968 unmapped: 62865408 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:00.338862+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148307968 unmapped: 62865408 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:01.339215+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148307968 unmapped: 62865408 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:02.339510+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2239198 data_alloc: 234881024 data_used: 13135872
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 282 ms_handle_reset con 0x55f94d5f4c00 session 0x55f94b88a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148332544 unmapped: 62840832 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 282 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d578000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:03.339630+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148348928 unmapped: 62824448 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 282 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d710960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 283 ms_handle_reset con 0x55f94e191400 session 0x55f94b23c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:04.339742+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 283 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d710000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 148381696 unmapped: 62791680 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 283 heartbeat osd_stat(store_statfs(0x4f7708000/0x0/0x4ffc00000, data 0x301b4e6/0x31b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:05.339900+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152821760 unmapped: 58351616 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:06.340056+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152821760 unmapped: 58351616 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 283 heartbeat osd_stat(store_statfs(0x4f7708000/0x0/0x4ffc00000, data 0x301b4e6/0x31b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:07.340218+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.843582630s of 10.222355843s, submitted: 53
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2321505 data_alloc: 234881024 data_used: 23441408
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152854528 unmapped: 58318848 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:08.340540+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152854528 unmapped: 58318848 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:09.340835+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152854528 unmapped: 58318848 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:10.341016+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 283 heartbeat osd_stat(store_statfs(0x4f7708000/0x0/0x4ffc00000, data 0x301b4e6/0x31b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 283 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 284 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94dbce3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 58253312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:11.341199+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152936448 unmapped: 58236928 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:12.341388+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2324698 data_alloc: 234881024 data_used: 23453696
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152952832 unmapped: 58220544 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 284 ms_handle_reset con 0x55f94ceb4400 session 0x55f94d6b2000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82ec00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 284 ms_handle_reset con 0x55f94d82ec00 session 0x55f94bd1af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:13.341581+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152977408 unmapped: 58195968 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:14.341746+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 284 heartbeat osd_stat(store_statfs(0x4f7705000/0x0/0x4ffc00000, data 0x301d571/0x31b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 284 ms_handle_reset con 0x55f94d5fb000 session 0x55f94adc6f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152977408 unmapped: 58195968 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:15.341946+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152977408 unmapped: 58195968 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:16.342101+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152977408 unmapped: 58195968 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:17.342263+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.880927801s of 10.024450302s, submitted: 16
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2330057 data_alloc: 234881024 data_used: 23474176
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153010176 unmapped: 58163200 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:18.342434+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155402240 unmapped: 55771136 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f7432000/0x0/0x4ffc00000, data 0x32ee142/0x348b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,10])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 285 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94b23b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:19.342600+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 286 ms_handle_reset con 0x55f94ceb4400 session 0x55f94da69860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 286 ms_handle_reset con 0x55f94e191400 session 0x55f94dbcf860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 158384128 unmapped: 52789248 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 286 heartbeat osd_stat(store_statfs(0x4f6f7e000/0x0/0x4ffc00000, data 0x379fcdb/0x393e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:20.342761+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 158449664 unmapped: 52723712 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:21.343051+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 53403648 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 288 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94bd09680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 288 ms_handle_reset con 0x55f94dc83800 session 0x55f94b23dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:22.343215+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2397120 data_alloc: 234881024 data_used: 24141824
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 53403648 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:23.343360+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f6f73000/0x0/0x4ffc00000, data 0x37a82f3/0x3949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 53403648 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:24.343514+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 53403648 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:25.343723+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 288 ms_handle_reset con 0x55f94ca2c400 session 0x55f94adc7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 53403648 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:26.343939+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f6f73000/0x0/0x4ffc00000, data 0x37a82f3/0x3949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,3])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 53403648 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:27.344174+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2406094 data_alloc: 234881024 data_used: 24494080
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157810688 unmapped: 53362688 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:28.344356+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157810688 unmapped: 53362688 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 289 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d5f74a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.713335514s of 11.830338478s, submitted: 147
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:29.344551+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 289 heartbeat osd_stat(store_statfs(0x4f6f67000/0x0/0x4ffc00000, data 0x37b3d56/0x3956000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 289 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d7274a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157810688 unmapped: 53362688 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 290 ms_handle_reset con 0x55f94d5fb000 session 0x55f94d726000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:30.344662+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157810688 unmapped: 53362688 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 291 ms_handle_reset con 0x55f94e191400 session 0x55f94be3ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:31.344782+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 291 ms_handle_reset con 0x55f94ceb4400 session 0x55f94da7dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 291 heartbeat osd_stat(store_statfs(0x4f6f63000/0x0/0x4ffc00000, data 0x37b5935/0x395a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157827072 unmapped: 53346304 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 292 ms_handle_reset con 0x55f94ca2c400 session 0x55f94bd32f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:32.344997+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2415725 data_alloc: 234881024 data_used: 24506368
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157843456 unmapped: 53329920 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 293 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94da7da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:33.345162+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 293 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94bd32f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157851648 unmapped: 53321728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 293 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d726000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 293 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d7274a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:34.345337+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 293 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d711860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 293 ms_handle_reset con 0x55f94e191400 session 0x55f94b942000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157851648 unmapped: 53321728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 294 ms_handle_reset con 0x55f94ceb4400 session 0x55f94adc7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 294 ms_handle_reset con 0x55f94ca2c400 session 0x55f94bd09680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:35.345592+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 294 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94dbcf860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157843456 unmapped: 53329920 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:36.345743+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 295 ms_handle_reset con 0x55f94e191400 session 0x55f94cac50e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 295 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94b23b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 295 ms_handle_reset con 0x55f94bd7c400 session 0x55f94d6b2960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 295 heartbeat osd_stat(store_statfs(0x4f6f54000/0x0/0x4ffc00000, data 0x37bc853/0x3969000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157908992 unmapped: 53264384 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 295 ms_handle_reset con 0x55f94dc83800 session 0x55f94da7d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:37.345918+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2428439 data_alloc: 234881024 data_used: 24510464
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 295 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94dc5a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157908992 unmapped: 53264384 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:38.346117+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 296 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94ca345a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 296 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d6b2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 296 ms_handle_reset con 0x55f94e191400 session 0x55f94d6b2000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 53231616 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 296 ms_handle_reset con 0x55f94e191400 session 0x55f94af71c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.437003136s of 10.036405563s, submitted: 152
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:39.346251+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f6f51000/0x0/0x4ffc00000, data 0x37bfa3f/0x396c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f6f52000/0x0/0x4ffc00000, data 0x37bfa2f/0x396b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 157966336 unmapped: 53207040 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 297 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94da7cd20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:40.346371+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 297 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94b24d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 52125696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:41.346581+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 298 ms_handle_reset con 0x55f94dc83800 session 0x55f94d727680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 299 ms_handle_reset con 0x55f94d826800 session 0x55f94ca343c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159080448 unmapped: 52092928 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:42.346779+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2437092 data_alloc: 234881024 data_used: 24518656
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 299 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d6b41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 299 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94caf21e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 300 ms_handle_reset con 0x55f94ca2c400 session 0x55f94cf4c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 52019200 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 300 ms_handle_reset con 0x55f94d826800 session 0x55f94b1250e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:43.347050+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 52011008 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:44.347252+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f6f4a000/0x0/0x4ffc00000, data 0x37c68f3/0x3971000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 52011008 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:45.347457+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 301 ms_handle_reset con 0x55f94e191400 session 0x55f94d578f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 51978240 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 301 ms_handle_reset con 0x55f94ca2c400 session 0x55f94ca403c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:46.347702+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 160251904 unmapped: 50921472 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:47.347851+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2442598 data_alloc: 234881024 data_used: 24506368
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94cf4cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159244288 unmapped: 51929088 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:48.348040+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d5f65a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f6f46000/0x0/0x4ffc00000, data 0x37c9fe7/0x3976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94d826800 session 0x55f94bd094a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 51912704 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94dc83800 session 0x55f94ca3cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:49.348193+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f6f44000/0x0/0x4ffc00000, data 0x37cbbc7/0x3979000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 51912704 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:50.348363+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.257724762s of 11.176609039s, submitted: 166
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94dc83800 session 0x55f94cf4cd20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94b118000 session 0x55f94bd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d6b4000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 51904512 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94cac5860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94d826800 session 0x55f94be7b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94d726780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:51.348541+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 ms_handle_reset con 0x55f94d826800 session 0x55f94dc5ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 60489728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:52.348701+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246600 data_alloc: 234881024 data_used: 13180928
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f80ba000/0x0/0x4ffc00000, data 0x2653ced/0x2804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94b118000 session 0x55f94bd7e3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 60489728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:53.348843+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 60489728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ca2c400 session 0x55f94ca3cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94be3d2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:54.349073+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d5f65a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150683648 unmapped: 60489728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94b118000 session 0x55f94be3a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94be3ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ca2c400 session 0x55f94cf4cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:55.349265+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94d826800 session 0x55f94cf4c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94b118000 session 0x55f94d6b41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ca2c400 session 0x55f94d727680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150749184 unmapped: 60424192 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f80b2000/0x0/0x4ffc00000, data 0x26558d2/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:56.349446+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d578d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94d826800 session 0x55f94cac52c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d607800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94ceb4c00 session 0x55f94da7cd20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94dc83800 session 0x55f94dc5bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 60399616 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 ms_handle_reset con 0x55f94d607800 session 0x55f94b24d2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:57.349600+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2259080 data_alloc: 234881024 data_used: 13180928
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 60399616 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94b118000 session 0x55f94ca345a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:58.349795+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94cac4780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94ca2c400 session 0x55f94da7d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94d826800 session 0x55f94d579680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 60366848 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:59.349954+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94b118000 session 0x55f94bd2f0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d607800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94d607800 session 0x55f94af70f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f80b3000/0x0/0x4ffc00000, data 0x265718c/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d6b5a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94dc83800 session 0x55f94bd7e960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 60342272 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:00.350171+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.152478218s of 10.435193062s, submitted: 184
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 60342272 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:01.350354+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 60342272 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f80b9000/0x0/0x4ffc00000, data 0x26570a8/0x2805000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:02.350489+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94b118000 session 0x55f94da70b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d607800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94b024d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2255859 data_alloc: 234881024 data_used: 13193216
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94d607800 session 0x55f94da714a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94d826800 session 0x55f94caf2d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:03.350706+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94d0afc00 session 0x55f94cac4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 ms_handle_reset con 0x55f94b118000 session 0x55f94d5f6780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f80ba000/0x0/0x4ffc00000, data 0x2657099/0x2804000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:04.350849+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:05.351033+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:06.351178+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:07.351336+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2254439 data_alloc: 234881024 data_used: 13193216
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:08.351580+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 heartbeat osd_stat(store_statfs(0x4f80bb000/0x0/0x4ffc00000, data 0x2657037/0x2803000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d6b45a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:09.351786+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:10.352012+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x2658bb4/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:11.352262+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 ms_handle_reset con 0x55f94d0afc00 session 0x55f94dbda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d607800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.153721809s of 10.871993065s, submitted: 53
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 ms_handle_reset con 0x55f94d826800 session 0x55f94d7112c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 ms_handle_reset con 0x55f94d607800 session 0x55f94ca3e5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x2658bb4/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 ms_handle_reset con 0x55f94b118000 session 0x55f94bd2eb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:12.352437+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262540 data_alloc: 234881024 data_used: 13201408
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:13.352649+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:14.352877+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94b24c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:15.353085+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x2658bf6/0x2807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:16.353244+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:17.353386+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262672 data_alloc: 234881024 data_used: 13201408
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:18.358808+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:19.359016+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150855680 unmapped: 60317696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:20.359156+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150863872 unmapped: 60309504 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x2658bf6/0x2807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:21.359314+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.049233198s of 10.171382904s, submitted: 24
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:22.359486+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262731 data_alloc: 234881024 data_used: 13205504
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:23.359651+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f80b7000/0x0/0x4ffc00000, data 0x2658bf6/0x2807000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150872064 unmapped: 60301312 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:24.359793+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150888448 unmapped: 60284928 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:25.359996+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 ms_handle_reset con 0x55f94d0afc00 session 0x55f94dbcf0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150888448 unmapped: 60284928 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:26.360203+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d607800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f80b8000/0x0/0x4ffc00000, data 0x2658bb4/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150896640 unmapped: 60276736 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:27.360322+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2261176 data_alloc: 234881024 data_used: 13201408
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150896640 unmapped: 60276736 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:28.360456+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150896640 unmapped: 60276736 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:29.360579+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f80b8000/0x0/0x4ffc00000, data 0x2658bb4/0x2806000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150896640 unmapped: 60276736 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:30.360731+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150896640 unmapped: 60276736 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:31.360909+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150913024 unmapped: 60260352 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:32.361057+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.163516521s of 10.688584328s, submitted: 42
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2264470 data_alloc: 234881024 data_used: 13209600
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150913024 unmapped: 60260352 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:33.361167+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150921216 unmapped: 60252160 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:34.361302+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 307 ms_handle_reset con 0x55f94d826800 session 0x55f94bd7e960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 307 ms_handle_reset con 0x55f94d607800 session 0x55f94bd2f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150921216 unmapped: 60252160 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 307 heartbeat osd_stat(store_statfs(0x4f80b5000/0x0/0x4ffc00000, data 0x265a785/0x2809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:35.361525+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 307 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94d6b5a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150921216 unmapped: 60252160 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:36.361671+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d607800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150937600 unmapped: 60235776 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94b118000 session 0x55f94d592b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:37.361810+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94d0afc00 session 0x55f94af70f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2269417 data_alloc: 234881024 data_used: 13217792
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94d826800 session 0x55f94d6b2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150937600 unmapped: 60235776 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:38.362010+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94d607800 session 0x55f94b124960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150937600 unmapped: 60235776 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:39.362198+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94b118000 session 0x55f94be7b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94b23d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150945792 unmapped: 60227584 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94d0afc00 session 0x55f94adc6f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9ec00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:40.362342+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94dc9ec00 session 0x55f94bd2f0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94d826800 session 0x55f94adc6000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 ms_handle_reset con 0x55f94b118000 session 0x55f94d5f7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 heartbeat osd_stat(store_statfs(0x4f80b2000/0x0/0x4ffc00000, data 0x265c31e/0x280c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150945792 unmapped: 60227584 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:41.362490+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150945792 unmapped: 60227584 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:42.362637+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2272094 data_alloc: 234881024 data_used: 13217792
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.022008896s of 10.275527000s, submitted: 82
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 handle_osd_map epochs [309,309], i have 309, src has [1,309]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94ca34d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150953984 unmapped: 60219392 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:43.362765+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150962176 unmapped: 60211200 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:44.362937+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 ms_handle_reset con 0x55f94d0afc00 session 0x55f94d711e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150970368 unmapped: 60203008 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:45.363171+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 ms_handle_reset con 0x55f94d826800 session 0x55f94b1250e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150970368 unmapped: 60203008 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:46.363297+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f80ae000/0x0/0x4ffc00000, data 0x265df15/0x2810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150970368 unmapped: 60203008 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9ec00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:47.363430+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 ms_handle_reset con 0x55f94dc9ec00 session 0x55f94ca3c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f80ae000/0x0/0x4ffc00000, data 0x265df15/0x2810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2276805 data_alloc: 234881024 data_used: 13225984
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 heartbeat osd_stat(store_statfs(0x4f7c9e000/0x0/0x4ffc00000, data 0x265df15/0x2810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150986752 unmapped: 60186624 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:48.363572+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94cac4780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 150994944 unmapped: 60178432 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:49.363744+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94b118000 session 0x55f94ca41680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151011328 unmapped: 60162048 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:50.363882+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d0afc00 session 0x55f94af9af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f7c9a000/0x0/0x4ffc00000, data 0x265fa92/0x2813000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d826800 session 0x55f94b24d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9ec00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151011328 unmapped: 60162048 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:51.364112+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94dc9ec00 session 0x55f94dc5ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ceb5c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151322624 unmapped: 59850752 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:52.364259+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d0afc00 session 0x55f94d6b52c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d826800 session 0x55f94be3da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2295063 data_alloc: 234881024 data_used: 13234176
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94bd7d800 session 0x55f94ca41680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d609800 session 0x55f94d7265a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.706150055s of 10.092123985s, submitted: 72
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d82cc00 session 0x55f94dc5a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151371776 unmapped: 59801600 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:53.364376+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94bd7d800 session 0x55f94caf2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d0afc00 session 0x55f94cac41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151363584 unmapped: 59809792 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:54.364502+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151363584 unmapped: 59809792 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:55.364742+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151363584 unmapped: 59809792 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:56.364936+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f7c74000/0x0/0x4ffc00000, data 0x2683b07/0x283a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94b118000 session 0x55f94d711a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94d826800 session 0x55f94d711e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 ms_handle_reset con 0x55f94ceb5c00 session 0x55f94ca345a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151371776 unmapped: 59801600 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d609800 session 0x55f94dc5b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:57.365114+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94b118000 session 0x55f94ca34d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2297654 data_alloc: 234881024 data_used: 13246464
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94bd7d800 session 0x55f94b23d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f7c93000/0x0/0x4ffc00000, data 0x26616c3/0x2819000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d0afc00 session 0x55f94b124960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151404544 unmapped: 59768832 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:58.365217+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d82cc00 session 0x55f94dc5bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d826800 session 0x55f94d6b5a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151420928 unmapped: 59752448 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:59.365355+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d82cc00 session 0x55f94d6b45a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94b118000 session 0x55f94d5f6780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94bd7d800 session 0x55f94b23b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151453696 unmapped: 59719680 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:00.365521+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d0afc00 session 0x55f94dbcf680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94b118000 session 0x55f94ca34000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94bd7d800 session 0x55f94cac52c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d826800 session 0x55f94d579c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151453696 unmapped: 59719680 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94d82cc00 session 0x55f94d578d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:01.365659+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f7c97000/0x0/0x4ffc00000, data 0x2661651/0x2817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151453696 unmapped: 59719680 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:02.365807+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2294816 data_alloc: 234881024 data_used: 13246464
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.276391983s of 10.043848991s, submitted: 88
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 ms_handle_reset con 0x55f94b8d2000 session 0x55f94be3ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 heartbeat osd_stat(store_statfs(0x4f7c96000/0x0/0x4ffc00000, data 0x2661661/0x2818000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151461888 unmapped: 59711488 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:03.366047+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151478272 unmapped: 59695104 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:04.366204+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94d609800 session 0x55f94b23ba40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94b118000 session 0x55f94d5f65a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151486464 unmapped: 59686912 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:05.366451+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94d826800 session 0x55f94bd7e3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151371776 unmapped: 59801600 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:06.366648+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151838720 unmapped: 59334656 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:07.366847+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94d5fb400 session 0x55f94dc5ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2298711 data_alloc: 234881024 data_used: 13254656
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94bd7d800 session 0x55f94cf4c3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94d82cc00 session 0x55f94be3a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151846912 unmapped: 59326464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:08.367063+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x2663222/0x281a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94b118000 session 0x55f94d672960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151846912 unmapped: 59326464 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:09.367213+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f7c94000/0x0/0x4ffc00000, data 0x2663222/0x281a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151855104 unmapped: 59318272 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:10.367346+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151855104 unmapped: 59318272 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:11.367480+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 ms_handle_reset con 0x55f94d609800 session 0x55f94bd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151871488 unmapped: 59301888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:12.367628+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306198 data_alloc: 234881024 data_used: 13262848
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 313 ms_handle_reset con 0x55f94d5fb400 session 0x55f94d673680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.048898697s of 10.005864143s, submitted: 120
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151871488 unmapped: 59301888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:13.367820+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f7c8e000/0x0/0x4ffc00000, data 0x2664e81/0x281f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 heartbeat osd_stat(store_statfs(0x4f7c8b000/0x0/0x4ffc00000, data 0x26668be/0x2821000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 151871488 unmapped: 59301888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:14.368007+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 ms_handle_reset con 0x55f94d826800 session 0x55f94be3a960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 ms_handle_reset con 0x55f94d5fb400 session 0x55f94caf3a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 ms_handle_reset con 0x55f94b118000 session 0x55f94be3be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 ms_handle_reset con 0x55f94d609800 session 0x55f94cac4000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 56025088 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:15.368192+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 ms_handle_reset con 0x55f94d826800 session 0x55f94d5f65a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 ms_handle_reset con 0x55f94d82cc00 session 0x55f94d5f6780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152043520 unmapped: 59129856 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:16.368359+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152043520 unmapped: 59129856 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:17.368548+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2363881 data_alloc: 234881024 data_used: 13271040
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 315 ms_handle_reset con 0x55f94b118000 session 0x55f94ca34d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152068096 unmapped: 59105280 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:18.368695+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:19.369789+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 58056704 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x2d39eba/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:20.370673+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 58056704 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:21.371386+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 58056704 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 316 ms_handle_reset con 0x55f94d5fb400 session 0x55f94ca41680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x2d39eba/0x2ef7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:22.371705+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152092672 unmapped: 59080704 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2368938 data_alloc: 234881024 data_used: 13271040
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.619301319s of 10.295153618s, submitted: 93
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:23.371894+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152117248 unmapped: 59056128 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:24.372344+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152117248 unmapped: 59056128 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d826800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 ms_handle_reset con 0x55f94d609800 session 0x55f94d6b52c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 ms_handle_reset con 0x55f94d82f400 session 0x55f94ca3eb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 ms_handle_reset con 0x55f94d826800 session 0x55f94d6b41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f75b2000/0x0/0x4ffc00000, data 0x2d3ba47/0x2efb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:25.372563+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152248320 unmapped: 58925056 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 ms_handle_reset con 0x55f94b118000 session 0x55f94d672d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:26.372696+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f75b1000/0x0/0x4ffc00000, data 0x2d3baa9/0x2efc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152272896 unmapped: 58900480 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:27.372873+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 152272896 unmapped: 58900480 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 ms_handle_reset con 0x55f94d5fb400 session 0x55f94d726780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2377723 data_alloc: 234881024 data_used: 13287424
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:28.373100+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154402816 unmapped: 56770560 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 heartbeat osd_stat(store_statfs(0x4f6411000/0x0/0x4ffc00000, data 0x2d3bab9/0x2efd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:29.373581+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 56762368 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 ms_handle_reset con 0x55f94d609800 session 0x55f94d6b3860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:30.373724+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154443776 unmapped: 56729600 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 ms_handle_reset con 0x55f94d82f400 session 0x55f94cf4dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:31.373882+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 ms_handle_reset con 0x55f94ca2b800 session 0x55f94cf4c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154443776 unmapped: 56729600 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 ms_handle_reset con 0x55f94b118000 session 0x55f94d593680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 ms_handle_reset con 0x55f94d5fb400 session 0x55f94be3d2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:32.376416+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154460160 unmapped: 56713216 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2383712 data_alloc: 234881024 data_used: 13299712
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:33.377344+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154484736 unmapped: 56688640 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 ms_handle_reset con 0x55f94d82f400 session 0x55f94d6b2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.337884903s of 10.434386253s, submitted: 24
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 ms_handle_reset con 0x55f94d609800 session 0x55f94b23b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:34.377502+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcca800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 56483840 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f640b000/0x0/0x4ffc00000, data 0x2d3d73a/0x2f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 319 ms_handle_reset con 0x55f94dccb000 session 0x55f94bd1b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:35.377668+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154705920 unmapped: 56467456 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 319 ms_handle_reset con 0x55f94dccb000 session 0x55f94dc5b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 319 ms_handle_reset con 0x55f94b118000 session 0x55f94da701e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:36.377876+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154648576 unmapped: 56524800 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 319 ms_handle_reset con 0x55f94d5fb400 session 0x55f94b88bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:37.378034+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154714112 unmapped: 56459264 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2437487 data_alloc: 234881024 data_used: 20451328
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f63e8000/0x0/0x4ffc00000, data 0x2d631f7/0x2f26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:38.378229+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154714112 unmapped: 56459264 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:39.378375+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154730496 unmapped: 56442880 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 ms_handle_reset con 0x55f94dcca800 session 0x55f94b248000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 ms_handle_reset con 0x55f94d5f2400 session 0x55f94be3c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 ms_handle_reset con 0x55f94d609800 session 0x55f94bd092c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:40.378514+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154738688 unmapped: 56434688 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 ms_handle_reset con 0x55f94d5f2400 session 0x55f94dbce960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 ms_handle_reset con 0x55f94b118000 session 0x55f94b0250e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:41.378695+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154779648 unmapped: 56393728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 ms_handle_reset con 0x55f94d5fb400 session 0x55f94ca3f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:42.378862+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154779648 unmapped: 56393728 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcca800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 ms_handle_reset con 0x55f94dcca800 session 0x55f94ca3c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 heartbeat osd_stat(store_statfs(0x4f6409000/0x0/0x4ffc00000, data 0x2d40db8/0x2f04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2437105 data_alloc: 234881024 data_used: 20455424
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:43.379221+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154804224 unmapped: 56369152 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.671638966s of 10.106349945s, submitted: 75
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:44.379414+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154853376 unmapped: 56320000 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 321 ms_handle_reset con 0x55f94b118000 session 0x55f94b24c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:45.379596+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154869760 unmapped: 56303616 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 321 heartbeat osd_stat(store_statfs(0x4f6407000/0x0/0x4ffc00000, data 0x2d427d5/0x2f06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:46.379783+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 321 ms_handle_reset con 0x55f94d5f2400 session 0x55f94dbce960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5fb400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 56295424 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 321 ms_handle_reset con 0x55f94d5fb400 session 0x55f94be3c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 321 handle_osd_map epochs [322,322], i have 322, src has [1,322]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 322 ms_handle_reset con 0x55f94d609800 session 0x55f94bd1b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:47.379935+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155975680 unmapped: 55197696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccb000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2447133 data_alloc: 234881024 data_used: 20467712
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 322 ms_handle_reset con 0x55f94dccb000 session 0x55f94cf4dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:48.380149+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155975680 unmapped: 55197696 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 322 heartbeat osd_stat(store_statfs(0x4f6404000/0x0/0x4ffc00000, data 0x2d44408/0x2f0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 322 ms_handle_reset con 0x55f94b118000 session 0x55f94d672d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:49.380321+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 155967488 unmapped: 55205888 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:50.380523+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 57679872 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 ms_handle_reset con 0x55f94d82f400 session 0x55f94bd2f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 ms_handle_reset con 0x55f94d609800 session 0x55f94d6b41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 ms_handle_reset con 0x55f94d5f2400 session 0x55f94da71860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:51.380716+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 ms_handle_reset con 0x55f94d830000 session 0x55f94d710780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 57532416 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 heartbeat osd_stat(store_statfs(0x4f6acd000/0x0/0x4ffc00000, data 0x2677a10/0x283f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:52.380866+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 57532416 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 ms_handle_reset con 0x55f94b118000 session 0x55f94d6b52c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2355853 data_alloc: 234881024 data_used: 13336576
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:53.381954+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153649152 unmapped: 57524224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 ms_handle_reset con 0x55f94d5f2400 session 0x55f94ca34d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:54.383553+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153649152 unmapped: 57524224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 handle_osd_map epochs [326,326], i have 324, src has [1,326]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 324 handle_osd_map epochs [325,326], i have 324, src has [1,326]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.434559345s of 11.044068336s, submitted: 94
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d609800 session 0x55f94d5f65a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d830000 session 0x55f94be7af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d82f400 session 0x55f94cac4000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:55.383730+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94cf19400 session 0x55f94cac45a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153649152 unmapped: 57524224 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:56.384058+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94b118000 session 0x55f94caf3a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 57507840 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d5f2400 session 0x55f94bd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d609800 session 0x55f94d6723c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f6ac9000/0x0/0x4ffc00000, data 0x267b00e/0x2844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:57.384166+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 57507840 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2359291 data_alloc: 234881024 data_used: 13336576
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:58.384549+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 57507840 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:59.385191+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 57507840 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d604800 session 0x55f94bd2ef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94b118000 session 0x55f94d672000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:00.385305+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d830000 session 0x55f94be3ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94cf19400 session 0x55f94dbcfe00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 56811520 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f63b9000/0x0/0x4ffc00000, data 0x2d8cfac/0x2f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 ms_handle_reset con 0x55f94d5f2400 session 0x55f94dbcf0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f63b9000/0x0/0x4ffc00000, data 0x2d8cfac/0x2f55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:01.385499+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 56811520 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:02.385866+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 56811520 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2413625 data_alloc: 234881024 data_used: 13336576
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:03.386245+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 56811520 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 327 ms_handle_reset con 0x55f94d609800 session 0x55f94dbce1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:04.386645+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 56811520 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:05.387012+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154361856 unmapped: 56811520 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.240956306s of 10.945042610s, submitted: 103
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 327 ms_handle_reset con 0x55f94cf19400 session 0x55f94d673860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:06.387182+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 328 ms_handle_reset con 0x55f94d5f2400 session 0x55f94d710000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 56786944 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f63b0000/0x0/0x4ffc00000, data 0x2d905fe/0x2f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 329 ms_handle_reset con 0x55f94d609800 session 0x55f94dbcfa40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:07.387400+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 329 ms_handle_reset con 0x55f94b118000 session 0x55f94be3d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 56786944 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2430761 data_alloc: 234881024 data_used: 13352960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:08.387647+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 56786944 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f63ac000/0x0/0x4ffc00000, data 0x2d9217b/0x2f60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:09.387878+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 56786944 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:10.388036+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 56786944 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:11.388249+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154386432 unmapped: 56786944 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:12.388416+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 330 heartbeat osd_stat(store_statfs(0x4f63ac000/0x0/0x4ffc00000, data 0x2d9217b/0x2f60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f3400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 330 ms_handle_reset con 0x55f94d5f3400 session 0x55f94bd1b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154411008 unmapped: 56762368 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2435070 data_alloc: 234881024 data_used: 13361152
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:13.388546+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 331 ms_handle_reset con 0x55f94b118000 session 0x55f94b88bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 331 ms_handle_reset con 0x55f94d830000 session 0x55f94d592d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154435584 unmapped: 56737792 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 331 ms_handle_reset con 0x55f94cf19400 session 0x55f94d579a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:14.388769+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 331 ms_handle_reset con 0x55f94d5f2400 session 0x55f94caf23c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 158629888 unmapped: 52543488 heap: 211173376 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:15.388992+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 331 ms_handle_reset con 0x55f94d609800 session 0x55f94cd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154476544 unmapped: 60899328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 ms_handle_reset con 0x55f94b118000 session 0x55f94be3b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.113396645s of 10.011259079s, submitted: 53
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 ms_handle_reset con 0x55f94cf19400 session 0x55f94cd1ba40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 heartbeat osd_stat(store_statfs(0x4f37a6000/0x0/0x4ffc00000, data 0x5995dc9/0x5b68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 ms_handle_reset con 0x55f94d5f2400 session 0x55f94adc6000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:16.389279+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154476544 unmapped: 60899328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:17.389505+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154476544 unmapped: 60899328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2751871 data_alloc: 234881024 data_used: 13369344
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:18.389757+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154476544 unmapped: 60899328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 ms_handle_reset con 0x55f94d830000 session 0x55f94bd1b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:19.389944+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154492928 unmapped: 60882944 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 heartbeat osd_stat(store_statfs(0x4f37a2000/0x0/0x4ffc00000, data 0x599751a/0x5b6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:20.390150+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154492928 unmapped: 60882944 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:21.390322+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d833800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 ms_handle_reset con 0x55f94d833800 session 0x55f94dbcfa40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154501120 unmapped: 60874752 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 333 ms_handle_reset con 0x55f94b118000 session 0x55f94d673860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:22.390609+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 334 ms_handle_reset con 0x55f94cf19400 session 0x55f94bd1b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153763840 unmapped: 61612032 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 334 ms_handle_reset con 0x55f94d5f2400 session 0x55f94d6b41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 334 ms_handle_reset con 0x55f94d609800 session 0x55f94d5f6780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2765828 data_alloc: 234881024 data_used: 13385728
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:23.390855+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 334 ms_handle_reset con 0x55f94d5f4c00 session 0x55f94bd2f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153772032 unmapped: 61603840 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 335 ms_handle_reset con 0x55f94d5f4c00 session 0x55f94d672d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f3799000/0x0/0x4ffc00000, data 0x599acae/0x5b73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:24.391033+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 61227008 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 335 handle_osd_map epochs [336,336], i have 336, src has [1,336]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 336 ms_handle_reset con 0x55f94b118000 session 0x55f94b23af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:25.391216+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 336 ms_handle_reset con 0x55f94d609800 session 0x55f94d592b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 336 ms_handle_reset con 0x55f94d830000 session 0x55f94dc5b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154189824 unmapped: 61186048 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:26.391340+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.824214935s of 10.796704292s, submitted: 123
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 336 ms_handle_reset con 0x55f94fccc800 session 0x55f94caa2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153919488 unmapped: 61456384 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:27.391438+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153919488 unmapped: 61456384 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 336 ms_handle_reset con 0x55f94b118000 session 0x55f94be3b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828533 data_alloc: 234881024 data_used: 20602880
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:28.391574+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153919488 unmapped: 61456384 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 336 ms_handle_reset con 0x55f94d609800 session 0x55f94bd33e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:29.391775+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d602c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 337 heartbeat osd_stat(store_statfs(0x4f376a000/0x0/0x4ffc00000, data 0x59c83e7/0x5ba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 337 ms_handle_reset con 0x55f94d830000 session 0x55f94ca41c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 337 ms_handle_reset con 0x55f94cf18800 session 0x55f94d710000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 153944064 unmapped: 61431808 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d603800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 337 ms_handle_reset con 0x55f94d603800 session 0x55f94be3be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 338 ms_handle_reset con 0x55f94d602c00 session 0x55f94b24c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 338 ms_handle_reset con 0x55f94b118000 session 0x55f94caf3680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 338 ms_handle_reset con 0x55f94d5f4c00 session 0x55f94da7de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:30.398076+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 61063168 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:31.398233+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 338 ms_handle_reset con 0x55f94d609800 session 0x55f94d711680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 338 heartbeat osd_stat(store_statfs(0x4f373e000/0x0/0x4ffc00000, data 0x59efbe6/0x5bcf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 61063168 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:32.398401+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 61063168 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2843682 data_alloc: 234881024 data_used: 20619264
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:33.398553+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 61063168 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 339 ms_handle_reset con 0x55f94ca2d000 session 0x55f94d6b50e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 339 ms_handle_reset con 0x55f94b118000 session 0x55f94da71e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:34.398661+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 60678144 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:35.398822+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159883264 unmapped: 55492608 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f373c000/0x0/0x4ffc00000, data 0x59f17c5/0x5bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 339 ms_handle_reset con 0x55f94d5f4c00 session 0x55f94d5794a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:36.399043+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.056007385s of 10.098265648s, submitted: 95
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 159883264 unmapped: 55492608 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 339 heartbeat osd_stat(store_statfs(0x4f373c000/0x0/0x4ffc00000, data 0x59f17c5/0x5bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:37.399212+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 161251328 unmapped: 54124544 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2919783 data_alloc: 251658240 data_used: 29638656
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:38.399396+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 50339840 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d602c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 339 ms_handle_reset con 0x55f94d609800 session 0x55f94cf4cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:39.399549+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf1d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 340 ms_handle_reset con 0x55f94d397c00 session 0x55f94dbcf680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 165183488 unmapped: 50192384 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.3 total, 600.0 interval
                                           Cumulative writes: 19K writes, 75K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 6587 syncs, 3.02 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8309 writes, 29K keys, 8309 commit groups, 1.0 writes per commit group, ingest: 19.27 MB, 0.03 MB/s
                                           Interval WAL: 8309 writes, 3430 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:40.399735+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 341 heartbeat osd_stat(store_statfs(0x4f2dcb000/0x0/0x4ffc00000, data 0x635f3d0/0x6542000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 341 ms_handle_reset con 0x55f94cf1d400 session 0x55f94cf4c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 341 ms_handle_reset con 0x55f94b118000 session 0x55f94d6b50e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 341 ms_handle_reset con 0x55f94d602c00 session 0x55f94bd09a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166756352 unmapped: 48619520 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:41.399925+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf1d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166756352 unmapped: 48619520 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:42.400058+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 342 ms_handle_reset con 0x55f94cf1d400 session 0x55f94d593e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166789120 unmapped: 48586752 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 342 ms_handle_reset con 0x55f94d397c00 session 0x55f94d6b5e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3011866 data_alloc: 251658240 data_used: 29786112
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f2d27000/0x0/0x4ffc00000, data 0x63ff4c0/0x65e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:43.400166+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166789120 unmapped: 48586752 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:44.400303+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166797312 unmapped: 48578560 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 343 ms_handle_reset con 0x55f94d5f4c00 session 0x55f94b249680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:45.400464+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176816128 unmapped: 38559744 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:46.400648+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 40427520 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f4c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.177947044s of 10.234701157s, submitted: 173
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 343 ms_handle_reset con 0x55f94b118000 session 0x55f94bd2f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf1d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 343 ms_handle_reset con 0x55f94cf1d400 session 0x55f94d6b41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:47.400770+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f2c6e000/0x0/0x4ffc00000, data 0x6427b20/0x6610000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175177728 unmapped: 40198144 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 344 ms_handle_reset con 0x55f94d397c00 session 0x55f94be7bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d602c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 344 ms_handle_reset con 0x55f94d602c00 session 0x55f94be3b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3200038 data_alloc: 251658240 data_used: 31047680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:48.400936+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 344 ms_handle_reset con 0x55f94d5f4c00 session 0x55f94b24c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175661056 unmapped: 39714816 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:49.401372+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 39616512 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:50.401590+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174333952 unmapped: 41041920 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 346 ms_handle_reset con 0x55f94b118000 session 0x55f94caf2d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf1d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 346 ms_handle_reset con 0x55f94cf1d400 session 0x55f94be3bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:51.401728+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174350336 unmapped: 41025536 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:52.401890+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 346 heartbeat osd_stat(store_statfs(0x4f1835000/0x0/0x4ffc00000, data 0x78ecc8b/0x7ad8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174350336 unmapped: 41025536 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3196611 data_alloc: 251658240 data_used: 31055872
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:53.402075+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 347 ms_handle_reset con 0x55f94d397c00 session 0x55f94be3d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d602c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 347 ms_handle_reset con 0x55f94d602c00 session 0x55f94b24cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174366720 unmapped: 41009152 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:54.402244+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d609800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174366720 unmapped: 41009152 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 347 handle_osd_map epochs [348,348], i have 348, src has [1,348]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:55.402516+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 348 ms_handle_reset con 0x55f94d609800 session 0x55f94caf30e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 40984576 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f1831000/0x0/0x4ffc00000, data 0x78efefe/0x7adb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 348 ms_handle_reset con 0x55f94cf18800 session 0x55f94da71a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:56.402720+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 348 ms_handle_reset con 0x55f94d830000 session 0x55f94d6b54a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174456832 unmapped: 40919040 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.996846199s of 10.001896858s, submitted: 189
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 348 ms_handle_reset con 0x55f94b118000 session 0x55f94dc5ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:57.402896+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf1d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174522368 unmapped: 40853504 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:58.403288+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3195007 data_alloc: 251658240 data_used: 30986240
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174563328 unmapped: 40812544 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:59.403460+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 350 ms_handle_reset con 0x55f94cf1d400 session 0x55f94d6b2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174604288 unmapped: 40771584 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:00.404042+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174645248 unmapped: 40730624 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 350 heartbeat osd_stat(store_statfs(0x4f1850000/0x0/0x4ffc00000, data 0x78d257f/0x7abd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d602c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:01.404240+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 351 handle_osd_map epochs [351,351], i have 351, src has [1,351]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b030c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 351 ms_handle_reset con 0x55f94d602c00 session 0x55f94d5f70e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 40697856 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:02.404673+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f143d000/0x0/0x4ffc00000, data 0x78d40d6/0x7abf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 351 handle_osd_map epochs [352,352], i have 352, src has [1,352]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 352 ms_handle_reset con 0x55f94d397c00 session 0x55f94be7a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 352 ms_handle_reset con 0x55f94b030c00 session 0x55f94cac45a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174718976 unmapped: 40656896 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 352 ms_handle_reset con 0x55f94b118000 session 0x55f94b23a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f143d000/0x0/0x4ffc00000, data 0x78d40d6/0x7abf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:03.404951+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3203719 data_alloc: 251658240 data_used: 30998528
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f143a000/0x0/0x4ffc00000, data 0x78d5e43/0x7ac3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174718976 unmapped: 40656896 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f143a000/0x0/0x4ffc00000, data 0x78d5e43/0x7ac3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:04.405256+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174776320 unmapped: 40599552 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 353 heartbeat osd_stat(store_statfs(0x4f1436000/0x0/0x4ffc00000, data 0x78d78de/0x7ac6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:05.405583+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174825472 unmapped: 40550400 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:06.405807+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174825472 unmapped: 40550400 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:07.406061+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174825472 unmapped: 40550400 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:08.406289+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3219527 data_alloc: 251658240 data_used: 30998528
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174825472 unmapped: 40550400 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.330086708s of 12.160899162s, submitted: 152
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 354 ms_handle_reset con 0x55f94cf18800 session 0x55f94dbce1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f1433000/0x0/0x4ffc00000, data 0x7a948de/0x7acb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:09.406453+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 40534016 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:10.407017+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf1d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175906816 unmapped: 39469056 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:11.407187+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175906816 unmapped: 39469056 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:12.407339+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 355 ms_handle_reset con 0x55f94d830000 session 0x55f94d592000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b030c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 355 ms_handle_reset con 0x55f94b118000 session 0x55f94cf4de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 355 ms_handle_reset con 0x55f94b030c00 session 0x55f94b24c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 355 heartbeat osd_stat(store_statfs(0x4f1428000/0x0/0x4ffc00000, data 0x7cc2f06/0x7ad5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175964160 unmapped: 39411712 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 356 ms_handle_reset con 0x55f94cf18800 session 0x55f94caf3e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 356 ms_handle_reset con 0x55f94d397c00 session 0x55f94d579680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 356 ms_handle_reset con 0x55f94cf1d400 session 0x55f94bd1b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:13.407496+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3256297 data_alloc: 251658240 data_used: 31006720
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175980544 unmapped: 39395328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:14.407731+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175980544 unmapped: 39395328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b030c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:15.408017+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175980544 unmapped: 39395328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:16.408155+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 357 ms_handle_reset con 0x55f94b030c00 session 0x55f94d6b4b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 357 ms_handle_reset con 0x55f94b118000 session 0x55f94b1250e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 357 heartbeat osd_stat(store_statfs(0x4f1422000/0x0/0x4ffc00000, data 0x7cc6654/0x7adb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176005120 unmapped: 39370752 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:17.408325+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 357 heartbeat osd_stat(store_statfs(0x4f1422000/0x0/0x4ffc00000, data 0x7cc6654/0x7adb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176021504 unmapped: 39354368 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 358 ms_handle_reset con 0x55f94cf18800 session 0x55f94caf3c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:18.408538+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 358 ms_handle_reset con 0x55f94d397c00 session 0x55f94b942d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3258143 data_alloc: 251658240 data_used: 31006720
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 39321600 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:19.408679+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 39321600 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:20.408856+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 39305216 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:21.409054+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 39305216 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f1420000/0x0/0x4ffc00000, data 0x7cc8231/0x7add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:22.409242+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d830000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.540192604s of 13.710654259s, submitted: 98
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 358 ms_handle_reset con 0x55f94d830000 session 0x55f94d711680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 39305216 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:23.409396+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3260749 data_alloc: 251658240 data_used: 31010816
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f1420000/0x0/0x4ffc00000, data 0x7cc8254/0x7ade000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b030c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176078848 unmapped: 39297024 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:24.409532+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176078848 unmapped: 39297024 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:25.409677+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 39272448 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:26.409835+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176160768 unmapped: 39215104 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f141c000/0x0/0x4ffc00000, data 0x7cc9cd3/0x7ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:27.410005+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176160768 unmapped: 39215104 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:28.410348+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3265619 data_alloc: 251658240 data_used: 31043584
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176177152 unmapped: 39198720 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:29.410498+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176201728 unmapped: 39174144 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:30.410623+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f1419000/0x0/0x4ffc00000, data 0x7ccb736/0x7ae4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 360 ms_handle_reset con 0x55f94cf18800 session 0x55f94d6b2960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 39141376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:31.413479+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 39141376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:32.413604+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 39141376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:33.413707+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.582867622s of 10.683995247s, submitted: 40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3271322 data_alloc: 251658240 data_used: 31047680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 360 ms_handle_reset con 0x55f94dc7f000 session 0x55f94dc5be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 39141376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:34.413997+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f1418000/0x0/0x4ffc00000, data 0x7ccb769/0x7ae6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176267264 unmapped: 39108608 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:35.414201+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 360 ms_handle_reset con 0x55f94d742c00 session 0x55f94cac4b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176365568 unmapped: 39010304 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82b400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 361 ms_handle_reset con 0x55f94e190c00 session 0x55f94b248960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:36.414351+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176758784 unmapped: 38617088 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 362 ms_handle_reset con 0x55f94d82b400 session 0x55f94caa2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 362 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b23ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:37.414539+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181583872 unmapped: 33792000 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:38.414718+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3331775 data_alloc: 251658240 data_used: 33816576
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 362 heartbeat osd_stat(store_statfs(0x4f10bd000/0x0/0x4ffc00000, data 0x8020ec5/0x7e3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181665792 unmapped: 33710080 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 362 ms_handle_reset con 0x55f94d742c00 session 0x55f94d6b52c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:39.414866+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181682176 unmapped: 33693696 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 363 ms_handle_reset con 0x55f94dc7f000 session 0x55f94d7270e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:40.415030+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177364992 unmapped: 38010880 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 363 heartbeat osd_stat(store_statfs(0x4f10ba000/0x0/0x4ffc00000, data 0x8022aa4/0x7e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:41.415216+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 364 ms_handle_reset con 0x55f94e190c00 session 0x55f94d6b2780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 364 ms_handle_reset con 0x55f94cf18800 session 0x55f94be3a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177381376 unmapped: 37994496 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:42.415404+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 177381376 unmapped: 37994496 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 364 ms_handle_reset con 0x55f94a3c9400 session 0x55f94bd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:43.415566+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.406921387s of 10.004858971s, submitted: 34
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3336749 data_alloc: 251658240 data_used: 33820672
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f10b1000/0x0/0x4ffc00000, data 0x8026202/0x7e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 365 ms_handle_reset con 0x55f94d742c00 session 0x55f94cd1b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 36896768 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:44.415738+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178511872 unmapped: 36864000 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:45.416058+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 365 ms_handle_reset con 0x55f94e190c00 session 0x55f94b88af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8bc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 366 ms_handle_reset con 0x55f94ef8bc00 session 0x55f94dc5a960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178724864 unmapped: 36651008 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:46.416184+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 367 ms_handle_reset con 0x55f94ca2d800 session 0x55f94adc6b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 36429824 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 367 ms_handle_reset con 0x55f94dc7f000 session 0x55f94cd1bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:47.416306+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179011584 unmapped: 36364288 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:48.416482+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3356078 data_alloc: 251658240 data_used: 34713600
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f10ad000/0x0/0x4ffc00000, data 0x8029944/0x7e51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179011584 unmapped: 36364288 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 368 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca352c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:49.416631+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179068928 unmapped: 36306944 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:50.424920+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 368 ms_handle_reset con 0x55f94d742c00 session 0x55f94ca3f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 368 ms_handle_reset con 0x55f94ca2d800 session 0x55f94d63c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179085312 unmapped: 36290560 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:51.425045+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e190c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179142656 unmapped: 36233216 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 368 ms_handle_reset con 0x55f94e190c00 session 0x55f94d711860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:52.425194+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 368 heartbeat osd_stat(store_statfs(0x4f10ac000/0x0/0x4ffc00000, data 0x802b6ab/0x7e52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180076544 unmapped: 35299328 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:53.425356+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3366454 data_alloc: 251658240 data_used: 36454400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.648896217s of 10.556773186s, submitted: 80
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 368 ms_handle_reset con 0x55f94ca2d800 session 0x55f94da7d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 35274752 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 369 ms_handle_reset con 0x55f94dc7f000 session 0x55f94bd08960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:54.425517+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 34160640 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f10a7000/0x0/0x4ffc00000, data 0x802d2fa/0x7e56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:55.426023+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 370 ms_handle_reset con 0x55f94d742c00 session 0x55f94d579860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 370 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da7d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181313536 unmapped: 34062336 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:56.426281+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f10a2000/0x0/0x4ffc00000, data 0x802ef11/0x7e5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8bc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181329920 unmapped: 34045952 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:57.426450+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 371 ms_handle_reset con 0x55f94ef8bc00 session 0x55f94be3b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 371 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b9430e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181354496 unmapped: 34021376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:58.426630+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3378669 data_alloc: 251658240 data_used: 36470784
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181354496 unmapped: 34021376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:59.426779+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 371 ms_handle_reset con 0x55f94ca2d800 session 0x55f94d7272c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181354496 unmapped: 34021376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:00.427015+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 371 heartbeat osd_stat(store_statfs(0x4f10a1000/0x0/0x4ffc00000, data 0x8030aac/0x7e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181469184 unmapped: 33906688 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:01.427178+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 372 ms_handle_reset con 0x55f94dc7f000 session 0x55f94bd09c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 372 heartbeat osd_stat(store_statfs(0x4f109e000/0x0/0x4ffc00000, data 0x8032563/0x7e5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181624832 unmapped: 33751040 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:02.427338+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d603000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 373 ms_handle_reset con 0x55f94bde1000 session 0x55f94adc7a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 373 ms_handle_reset con 0x55f94b030c00 session 0x55f94da712c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 373 ms_handle_reset con 0x55f94b118000 session 0x55f94d578f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180092928 unmapped: 35282944 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:03.427461+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 373 ms_handle_reset con 0x55f94a3c9400 session 0x55f94bd094a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 374 ms_handle_reset con 0x55f94d603000 session 0x55f94d673a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3391908 data_alloc: 251658240 data_used: 36462592
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 374 ms_handle_reset con 0x55f94d742c00 session 0x55f94cf4c3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 35274752 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:04.428062+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.132298470s of 10.364068031s, submitted: 213
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180101120 unmapped: 35274752 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:05.428258+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 ms_handle_reset con 0x55f94ca2d800 session 0x55f94caa25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 ms_handle_reset con 0x55f94bde1000 session 0x55f94ca412c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 ms_handle_reset con 0x55f94a3c9400 session 0x55f94af712c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 ms_handle_reset con 0x55f94b118000 session 0x55f94b125860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181231616 unmapped: 34144256 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:06.428496+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 ms_handle_reset con 0x55f94ca2d800 session 0x55f94b23b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180183040 unmapped: 35192832 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d603000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 ms_handle_reset con 0x55f94d603000 session 0x55f94cac50e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:07.428665+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 heartbeat osd_stat(store_statfs(0x4f13e2000/0x0/0x4ffc00000, data 0x7ce737a/0x7b1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 377 ms_handle_reset con 0x55f94a3c9400 session 0x55f94caf2780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180232192 unmapped: 35143680 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:08.429380+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 377 ms_handle_reset con 0x55f94cf19400 session 0x55f94bd1b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 377 ms_handle_reset con 0x55f94d5f2400 session 0x55f94bd32780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 377 ms_handle_reset con 0x55f94bde1000 session 0x55f94bd2ed20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3354624 data_alloc: 251658240 data_used: 36352000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 377 ms_handle_reset con 0x55f94ca2d800 session 0x55f94da703c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 377 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d710000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181280768 unmapped: 34095104 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:09.430045+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 378 ms_handle_reset con 0x55f94d742c00 session 0x55f94da68960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 378 ms_handle_reset con 0x55f94bde1000 session 0x55f94d63d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 378 ms_handle_reset con 0x55f94b118000 session 0x55f94b862b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 34078720 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:10.430371+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 378 ms_handle_reset con 0x55f94ca2d800 session 0x55f94cac5c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 378 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da71c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175849472 unmapped: 39526400 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:11.430535+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 378 heartbeat osd_stat(store_statfs(0x4f2553000/0x0/0x4ffc00000, data 0x6794ad1/0x69a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 379 ms_handle_reset con 0x55f94b118000 session 0x55f94da69c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 379 ms_handle_reset con 0x55f94bde1000 session 0x55f94da690e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f2553000/0x0/0x4ffc00000, data 0x6794ad1/0x69a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175898624 unmapped: 39477248 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:12.430866+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d742c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175915008 unmapped: 39460864 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:13.431134+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 380 ms_handle_reset con 0x55f94d397c00 session 0x55f94cf4c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150620 data_alloc: 234881024 data_used: 25313280
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 380 ms_handle_reset con 0x55f94d742c00 session 0x55f94d63cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 380 ms_handle_reset con 0x55f94cf19400 session 0x55f94d6b4960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 380 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d578960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175964160 unmapped: 39411712 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:14.431283+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f2552000/0x0/0x4ffc00000, data 0x6798342/0x69a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.123127937s of 10.611358643s, submitted: 217
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175972352 unmapped: 39403520 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:15.431584+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 382 ms_handle_reset con 0x55f94b118000 session 0x55f94da7d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 176029696 unmapped: 39346176 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:16.431760+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 383 ms_handle_reset con 0x55f94bde1000 session 0x55f94caa2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175144960 unmapped: 40230912 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:17.432129+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175144960 unmapped: 40230912 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:18.432554+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 383 heartbeat osd_stat(store_statfs(0x4f254c000/0x0/0x4ffc00000, data 0x679d515/0x69af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 383 ms_handle_reset con 0x55f94d5f2400 session 0x55f94d592f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3153512 data_alloc: 234881024 data_used: 25300992
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 384 ms_handle_reset con 0x55f94d5f2400 session 0x55f94b124000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 167321600 unmapped: 48054272 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:19.432851+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 ms_handle_reset con 0x55f94d397c00 session 0x55f94b8625a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 ms_handle_reset con 0x55f94a3c9400 session 0x55f94cf4d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 167337984 unmapped: 48037888 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:20.433105+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 167337984 unmapped: 48037888 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:21.433378+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 ms_handle_reset con 0x55f94b118000 session 0x55f94d593680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f6608000/0x0/0x4ffc00000, data 0x26e0d7f/0x28f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 ms_handle_reset con 0x55f94bde1000 session 0x55f94cac5680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166297600 unmapped: 49078272 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:22.433656+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f6608000/0x0/0x4ffc00000, data 0x26e0d7f/0x28f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166297600 unmapped: 49078272 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:23.433889+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 ms_handle_reset con 0x55f94a3c9400 session 0x55f94cac41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2625642 data_alloc: 234881024 data_used: 13545472
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166297600 unmapped: 49078272 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:24.434215+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94b118000 session 0x55f94ca3dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166322176 unmapped: 49053696 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:25.434491+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.675374985s of 11.171332359s, submitted: 194
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94d397c00 session 0x55f94da7c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166322176 unmapped: 49053696 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:26.434775+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94d5f2400 session 0x55f94bd490e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf19400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 heartbeat osd_stat(store_statfs(0x4f6604000/0x0/0x4ffc00000, data 0x26e28c8/0x28f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94cf19400 session 0x55f94da68960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94a3c9400 session 0x55f94cac50e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94d397c00 session 0x55f94b125860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94b118000 session 0x55f94d5934a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166445056 unmapped: 48930816 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 ms_handle_reset con 0x55f94d5f2400 session 0x55f94af712c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:27.434942+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d832c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 387 ms_handle_reset con 0x55f94d832c00 session 0x55f94caa25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166445056 unmapped: 48930816 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:28.435227+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 388 ms_handle_reset con 0x55f94dc9f000 session 0x55f94be3c3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 388 ms_handle_reset con 0x55f94dc7f000 session 0x55f94b862d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2712910 data_alloc: 234881024 data_used: 13565952
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166445056 unmapped: 48930816 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:29.435474+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 389 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da710e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 48979968 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:30.435715+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 48979968 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:31.435893+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 389 ms_handle_reset con 0x55f94d397c00 session 0x55f94bd09680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166395904 unmapped: 48979968 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:32.436022+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 389 heartbeat osd_stat(store_statfs(0x4f5de2000/0x0/0x4ffc00000, data 0x2efcb93/0x311b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 390 ms_handle_reset con 0x55f94d5f2400 session 0x55f94bd2ef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166445056 unmapped: 48930816 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:33.436170+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 391 ms_handle_reset con 0x55f94d5f2400 session 0x55f94d578780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 391 ms_handle_reset con 0x55f94b118000 session 0x55f94bd32000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2727674 data_alloc: 234881024 data_used: 13578240
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166477824 unmapped: 48898048 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:34.436360+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 391 ms_handle_reset con 0x55f94a3c9400 session 0x55f94be3cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 391 ms_handle_reset con 0x55f94dc7f000 session 0x55f94da71860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166477824 unmapped: 48898048 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:35.436617+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 ms_handle_reset con 0x55f94d397c00 session 0x55f94cf4da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 ms_handle_reset con 0x55f94b118000 session 0x55f94cac45a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca34000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166477824 unmapped: 48898048 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:36.436832+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 ms_handle_reset con 0x55f94d5f2400 session 0x55f94cd1a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.886300087s of 10.370611191s, submitted: 113
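[annotation] The _kv_sync_thread utilization line summarises one reporting interval of BlueStore's key/value commit thread: here it was idle 9.886 s of a 10.371 s window while flushing 113 transactions. The arithmetic, using only the numbers from the line above:

    idle, total, submitted = 9.886300087, 10.370611191, 113

    busy = total - idle
    print(f"busy {busy:.3f}s ({100 * busy / total:.1f}% of the interval), "
          f"{submitted / total:.1f} txns/s, "
          f"{1000 * busy / submitted:.2f} ms of commit work per txn")

which works out to roughly 4.7% busy and about 4.3 ms of commit work per transaction, i.e. a lightly loaded OSD.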
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 ms_handle_reset con 0x55f94dc7f000 session 0x55f94adc7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d60bc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f5db3000/0x0/0x4ffc00000, data 0x2f25f23/0x314a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
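[annotation] The heartbeat lines embed a store_statfs tuple, all in hex: the first triple is available / internally reserved / total bytes on the OSD's device, data is stored/allocated bytes, then compression (stored/allocated/original), omap, and internal metadata byte counts. Decoding the one above; the field order follows the printed layout, and only the decoding script is hypothetical:

    import re

    LINE = ("store_statfs(0x4f5db3000/0x0/0x4ffc00000, data 0x2f25f23/0x314a000,"
            " compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6)")

    nums = [int(x, 16) for x in re.findall(r"0x([0-9a-f]+)", LINE)]
    avail, reserved, total = nums[0:3]
    stored, allocated = nums[3:5]

    GiB, MiB = 2**30, 2**20
    print(f"{avail / GiB:.2f} GiB available of {total / GiB:.2f} GiB"
          f" ({100 * (total - avail) / total:.2f}% consumed);"
          f" data: {stored / MiB:.1f} MiB stored in {allocated / MiB:.1f} MiB allocated")

For this 20 GiB OSD that is roughly 47 MiB of object data, consistent with the small writes trickling in through the rest of the section.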
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166780928 unmapped: 48594944 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:37.437048+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d7ca000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 ms_handle_reset con 0x55f94d7ca000 session 0x55f94da70d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 166780928 unmapped: 48594944 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:38.437326+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783318 data_alloc: 234881024 data_used: 20054016
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 heartbeat osd_stat(store_statfs(0x4f5db2000/0x0/0x4ffc00000, data 0x2f25f33/0x314b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 46317568 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:39.437509+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 ms_handle_reset con 0x55f94b118000 session 0x55f94d593860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 169066496 unmapped: 46309376 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:40.437687+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
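[annotation] The _renew_subs / _send_mon_message pair shows the MonClient refreshing its map subscription with mon.compute-0 over the msgr v2 endpoint (192.168.122.100:3300), which is consistent with the handle_osd_map delivery that follows immediately. A trivial consistency count over the whole journal, hypothetical like the other sketches:

    import sys

    renewals = deliveries = 0
    for line in sys.stdin:
        if "monclient: _renew_subs" in line:
            renewals += 1
        elif "handle_osd_map epochs" in line:
            deliveries += 1
    print(f"{renewals} subscription renewals, {deliveries} map deliveries")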
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d7ca000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 393 ms_handle_reset con 0x55f94d5f2400 session 0x55f94dbcf2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 169082880 unmapped: 46292992 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:41.437921+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 394 ms_handle_reset con 0x55f94d7ca000 session 0x55f94be7bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 394 ms_handle_reset con 0x55f94dc7f000 session 0x55f94da71860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 394 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b0241e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 45219840 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:42.438093+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 45219840 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:43.438262+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 395 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b1252c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2810186 data_alloc: 234881024 data_used: 21897216
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 395 ms_handle_reset con 0x55f94b118000 session 0x55f94af70960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 45219840 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:44.438430+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 396 heartbeat osd_stat(store_statfs(0x4f5da7000/0x0/0x4ffc00000, data 0x2f2b73a/0x3155000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170156032 unmapped: 45219840 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:45.438606+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 397 ms_handle_reset con 0x55f94d5f2400 session 0x55f94cac5c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d7ca000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 397 ms_handle_reset con 0x55f94d7ca000 session 0x55f94d711860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170172416 unmapped: 45203456 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:46.438767+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.061577797s of 10.239310265s, submitted: 76
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 397 ms_handle_reset con 0x55f94dc7f000 session 0x55f94d6b2960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 397 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d6b2780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:47.438917+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170434560 unmapped: 44941312 heap: 215375872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 397 handle_osd_map epochs [398,398], i have 398, src has [1,398]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:48.439041+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 170459136 unmapped: 49119232 heap: 219578368 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 398 ms_handle_reset con 0x55f94b118000 session 0x55f94adc70e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d7ca000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 398 ms_handle_reset con 0x55f94d5f2400 session 0x55f94bd32000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 398 ms_handle_reset con 0x55f94d7ca000 session 0x55f94d673c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3040872 data_alloc: 234881024 data_used: 21893120
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:49.439201+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 175857664 unmapped: 47923200 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcda000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f3b00000/0x0/0x4ffc00000, data 0x51d4368/0x53fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:50.439334+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 180772864 unmapped: 43008000 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 400 ms_handle_reset con 0x55f94dcda000 session 0x55f94d673860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:51.439525+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 178167808 unmapped: 45613056 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:52.439706+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179249152 unmapped: 44531712 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 401 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d672000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:53.439957+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179257344 unmapped: 44523520 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f2e78000/0x0/0x4ffc00000, data 0x5e5156b/0x607c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 401 ms_handle_reset con 0x55f94b118000 session 0x55f94da71680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180124 data_alloc: 234881024 data_used: 24080384
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:54.440282+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179257344 unmapped: 44523520 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:55.440561+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179257344 unmapped: 44523520 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:56.440734+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179257344 unmapped: 44523520 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f2e7f000/0x0/0x4ffc00000, data 0x5e52fa4/0x607e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 402 ms_handle_reset con 0x55f94d5f2400 session 0x55f94be3d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.942288399s of 10.140292168s, submitted: 293
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:57.440926+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 44515328 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:58.441088+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179290112 unmapped: 44490752 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187926 data_alloc: 234881024 data_used: 24100864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 402 ms_handle_reset con 0x55f94dc9f000 session 0x55f94ca35680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d7ca000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 402 ms_handle_reset con 0x55f94d7ca000 session 0x55f94b124000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 402 ms_handle_reset con 0x55f94d60bc00 session 0x55f94cac5860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:59.441247+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179290112 unmapped: 44490752 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 402 ms_handle_reset con 0x55f94b118000 session 0x55f94cac4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:00.441389+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 44474368 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94d5f2400 session 0x55f94b88be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:01.441586+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179453952 unmapped: 44326912 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccc400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d592f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:02.441764+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179453952 unmapped: 44326912 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f2e7a000/0x0/0x4ffc00000, data 0x5e54a72/0x6083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:03.441906+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179453952 unmapped: 44326912 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188983 data_alloc: 234881024 data_used: 24260608
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:04.442105+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 179462144 unmapped: 44318720 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f2e7a000/0x0/0x4ffc00000, data 0x5e54a72/0x6083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:05.442272+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:06.443562+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f2e7a000/0x0/0x4ffc00000, data 0x5e54a72/0x6083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:07.443789+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:08.443939+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3229463 data_alloc: 251658240 data_used: 29171712
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f2e7a000/0x0/0x4ffc00000, data 0x5e54a72/0x6083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:09.444164+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:10.444629+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f2e7a000/0x0/0x4ffc00000, data 0x5e54a72/0x6083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:11.444814+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:12.445013+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f2e7a000/0x0/0x4ffc00000, data 0x5e54a72/0x6083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:13.445192+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3229463 data_alloc: 251658240 data_used: 29171712
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:14.445476+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 181198848 unmapped: 42582016 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:15.445678+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.395915985s of 18.495174408s, submitted: 71
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 185319424 unmapped: 38461440 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:16.445837+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192380928 unmapped: 31399936 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:17.446022+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193028096 unmapped: 30752768 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:18.446200+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188612608 unmapped: 35168256 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94e191400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94e191400 session 0x55f94d7d9e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3353383 data_alloc: 251658240 data_used: 30334976
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b61000/0x0/0x4ffc00000, data 0x6d5ea72/0x6f8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:19.446364+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188612608 unmapped: 35168256 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d7d9a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94b118000 session 0x55f94bd09c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:20.446487+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94d5f2400 session 0x55f94bd481e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 35160064 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d60bc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:21.446610+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 35160064 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:22.446730+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188678144 unmapped: 35102720 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:23.446874+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188710912 unmapped: 35069952 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b60000/0x0/0x4ffc00000, data 0x6d5ea95/0x6f8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3357100 data_alloc: 251658240 data_used: 30515200
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:24.447007+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b60000/0x0/0x4ffc00000, data 0x6d5ea95/0x6f8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 35061760 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94dc9f000 session 0x55f94d711a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94fccc400 session 0x55f94b24cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:25.447178+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.234384537s of 10.064209938s, submitted: 102
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d6b41e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:26.447342+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:27.447513+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b85000/0x0/0x4ffc00000, data 0x6d3aa62/0x6f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:28.447717+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3347731 data_alloc: 251658240 data_used: 30404608
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:29.447912+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:30.448092+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:31.448236+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:32.448421+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 33996800 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b86000/0x0/0x4ffc00000, data 0x6d3aa62/0x6f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:33.448574+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191102976 unmapped: 32677888 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3363435 data_alloc: 251658240 data_used: 31477760
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:34.448745+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190824448 unmapped: 32956416 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b86000/0x0/0x4ffc00000, data 0x6d3aa62/0x6f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:35.449037+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190824448 unmapped: 32956416 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:36.449178+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190824448 unmapped: 32956416 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:37.449336+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190824448 unmapped: 32956416 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:38.449588+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190824448 unmapped: 32956416 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.269784927s of 13.333827019s, submitted: 21
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3361115 data_alloc: 251658240 data_used: 31477760
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:39.449759+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190914560 unmapped: 32866304 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:40.450120+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190914560 unmapped: 32866304 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b86000/0x0/0x4ffc00000, data 0x6d3aa62/0x6f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:41.450626+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190914560 unmapped: 32866304 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:42.450771+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190914560 unmapped: 32866304 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:43.450921+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190914560 unmapped: 32866304 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3361115 data_alloc: 251658240 data_used: 31477760
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b86000/0x0/0x4ffc00000, data 0x6d3aa62/0x6f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:44.451075+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190996480 unmapped: 32784384 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:45.451251+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190865408 unmapped: 32915456 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:46.451402+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94b118000 session 0x55f94bd08960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190865408 unmapped: 32915456 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:47.451534+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190865408 unmapped: 32915456 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94d5f2400 session 0x55f94b88b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:48.451661+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190889984 unmapped: 32890880 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3361515 data_alloc: 251658240 data_used: 31465472
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94dc9f000 session 0x55f94da7d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:49.451805+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.089774132s of 10.552040100s, submitted: 16
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 ms_handle_reset con 0x55f94d08b000 session 0x55f94d7265a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b86000/0x0/0x4ffc00000, data 0x6d3aa62/0x6f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190898176 unmapped: 32882688 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:50.451946+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190922752 unmapped: 32858112 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:51.452207+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190922752 unmapped: 32858112 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:52.452337+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190922752 unmapped: 32858112 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:53.452480+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 32825344 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b85000/0x0/0x4ffc00000, data 0x6d3aa72/0x6f69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365974 data_alloc: 251658240 data_used: 31535104
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:54.452658+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 32825344 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:55.452881+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 32825344 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:56.453033+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 32825344 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 heartbeat osd_stat(store_statfs(0x4f1b85000/0x0/0x4ffc00000, data 0x6d3aa72/0x6f69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:57.453180+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 32825344 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:58.453327+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 32825344 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365974 data_alloc: 251658240 data_used: 31535104
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:59.453434+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191094784 unmapped: 32686080 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.773355484s of 10.804097176s, submitted: 9
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94b118000 session 0x55f94ca35e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:00.453567+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191102976 unmapped: 32677888 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94d5f2400 session 0x55f94bd094a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94dc9f000 session 0x55f94d63d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:01.453735+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0af800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc80400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191545344 unmapped: 32235520 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94dc80400 session 0x55f94b24cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94d0af800 session 0x55f94cac50e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f115e000/0x0/0x4ffc00000, data 0x775e618/0x798f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:02.453897+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191545344 unmapped: 32235520 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:03.454068+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191545344 unmapped: 32235520 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3462315 data_alloc: 251658240 data_used: 33755136
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f115d000/0x0/0x4ffc00000, data 0x775e661/0x7990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:04.454243+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191987712 unmapped: 31793152 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:05.454424+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192233472 unmapped: 31547392 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:06.454585+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192233472 unmapped: 31547392 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:07.454742+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192233472 unmapped: 31547392 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b118000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94b118000 session 0x55f94b124960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:08.454941+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 31195136 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f1130000/0x0/0x4ffc00000, data 0x783e661/0x79be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3476681 data_alloc: 251658240 data_used: 34136064
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:09.455115+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc80400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 31195136 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:10.455225+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f1130000/0x0/0x4ffc00000, data 0x783e661/0x79be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 21659648 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:11.455646+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 21659648 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:12.455808+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 21659648 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:13.456450+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202121216 unmapped: 21659648 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f1130000/0x0/0x4ffc00000, data 0x783e661/0x79be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3551241 data_alloc: 251658240 data_used: 44564480
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:14.457457+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202153984 unmapped: 21626880 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:15.457765+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 21110784 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.535037041s of 15.832836151s, submitted: 37
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94d08b000 session 0x55f94b88b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d6b25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:16.457893+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 21110784 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94dc9f000 session 0x55f94d5934a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:17.458084+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 21110784 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:18.458610+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202670080 unmapped: 21110784 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3556992 data_alloc: 268435456 data_used: 45842432
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:19.458897+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f1131000/0x0/0x4ffc00000, data 0x783e651/0x79bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202924032 unmapped: 20856832 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:20.459054+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 203022336 unmapped: 20758528 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:21.459166+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 205832192 unmapped: 17948672 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:22.459483+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 17211392 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:23.459647+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 17211392 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f07c4000/0x0/0x4ffc00000, data 0x7dcc651/0x7f0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x752f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3624546 data_alloc: 268435456 data_used: 46182400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:24.460054+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206602240 unmapped: 17178624 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d827400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94d827400 session 0x55f94da703c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:25.460273+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94a3c9400 session 0x55f94dbce000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200695808 unmapped: 23085056 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:26.460580+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200695808 unmapped: 23085056 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:27.460751+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f4b6e000/0x0/0x4ffc00000, data 0x4e82651/0x4fc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200695808 unmapped: 23085056 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.481411934s of 12.319342613s, submitted: 118
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:28.460940+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200712192 unmapped: 23068672 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3230914 data_alloc: 251658240 data_used: 36630528
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:29.461103+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200712192 unmapped: 23068672 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:30.461236+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 ms_handle_reset con 0x55f94d396800 session 0x55f94adc7a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 16498688 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:31.461358+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206364672 unmapped: 17416192 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:32.461595+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206364672 unmapped: 17416192 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:33.461823+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f47f8000/0x0/0x4ffc00000, data 0x51f8651/0x5336000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206364672 unmapped: 17416192 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f47f8000/0x0/0x4ffc00000, data 0x51f8651/0x5336000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3281104 data_alloc: 251658240 data_used: 38658048
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:34.462087+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 21970944 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:35.462353+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 21970944 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:36.462561+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 21970944 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:37.462773+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 21970944 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:38.462923+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f47f3000/0x0/0x4ffc00000, data 0x51fd651/0x533b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 21970944 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3273384 data_alloc: 251658240 data_used: 38662144
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:39.463651+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201809920 unmapped: 21970944 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.937317848s of 12.025940895s, submitted: 14
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:40.463815+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201826304 unmapped: 21954560 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:41.464082+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201826304 unmapped: 21954560 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:42.464283+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201826304 unmapped: 21954560 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:43.464411+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201826304 unmapped: 21954560 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3284461 data_alloc: 251658240 data_used: 38694912
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:44.464614+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f4766000/0x0/0x4ffc00000, data 0x5289651/0x53c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 405 ms_handle_reset con 0x55f94dc83400 session 0x55f94dc5af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201875456 unmapped: 21905408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:45.464828+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202162176 unmapped: 21618688 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 405 ms_handle_reset con 0x55f94fccd400 session 0x55f94da69e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82ac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:46.465009+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 405 ms_handle_reset con 0x55f94d82ac00 session 0x55f94da7d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 405 handle_osd_map epochs [406,406], i have 406, src has [1,406]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 406 ms_handle_reset con 0x55f94d0afc00 session 0x55f94af71c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202211328 unmapped: 21569536 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:47.465259+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201760768 unmapped: 22020096 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:48.465393+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 ms_handle_reset con 0x55f94a3c9400 session 0x55f94adc7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199868416 unmapped: 23912448 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 ms_handle_reset con 0x55f94d396800 session 0x55f94dc5b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f4729000/0x0/0x4ffc00000, data 0x53ca92a/0x5404000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:49.465584+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3316999 data_alloc: 251658240 data_used: 38731776
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 ms_handle_reset con 0x55f94dc83400 session 0x55f94cf4da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199868416 unmapped: 23912448 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 ms_handle_reset con 0x55f94d82d800 session 0x55f94b24dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:50.465726+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca3c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.466004372s of 10.152762413s, submitted: 62
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 ms_handle_reset con 0x55f94d0afc00 session 0x55f94d673e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199876608 unmapped: 23904256 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:51.465903+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 ms_handle_reset con 0x55f94dc83400 session 0x55f94ca354a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 ms_handle_reset con 0x55f94d396800 session 0x55f94da70000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199876608 unmapped: 23904256 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:52.466055+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199876608 unmapped: 23904256 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:53.466174+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 ms_handle_reset con 0x55f94d60bc00 session 0x55f94d727680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199876608 unmapped: 23904256 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:54.466375+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3288599 data_alloc: 251658240 data_used: 38596608
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199892992 unmapped: 23887872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 ms_handle_reset con 0x55f94a3c9400 session 0x55f94cac4780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f4a7b000/0x0/0x4ffc00000, data 0x4f854d8/0x50b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:55.466627+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199892992 unmapped: 23887872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:56.466817+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199892992 unmapped: 23887872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f4a7b000/0x0/0x4ffc00000, data 0x4f854d8/0x50b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:57.467029+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199892992 unmapped: 23887872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:58.467190+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200466432 unmapped: 23314432 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:59.467350+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3298133 data_alloc: 251658240 data_used: 39133184
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200466432 unmapped: 23314432 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 ms_handle_reset con 0x55f94d396800 session 0x55f94b23bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 ms_handle_reset con 0x55f94d0afc00 session 0x55f94cac4000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94dc83400 session 0x55f94da7de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:00.467595+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f49ef000/0x0/0x4ffc00000, data 0x5010f2b/0x513e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191774720 unmapped: 32006144 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:01.467775+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191774720 unmapped: 32006144 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde0000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.169512749s of 11.561429977s, submitted: 84
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94bde0000 session 0x55f94d63cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94fccd400 session 0x55f94bd334a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:02.468039+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191782912 unmapped: 31997952 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:03.468194+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191258624 unmapped: 32522240 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:04.468358+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006783 data_alloc: 234881024 data_used: 23977984
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191258624 unmapped: 32522240 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:05.468566+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191258624 unmapped: 32522240 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f60d8000/0x0/0x4ffc00000, data 0x3927ec9/0x3a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:06.468779+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191258624 unmapped: 32522240 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:07.469050+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191258624 unmapped: 32522240 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:08.469225+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191266816 unmapped: 32514048 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:09.469358+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006923 data_alloc: 234881024 data_used: 23986176
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191266816 unmapped: 32514048 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:10.469496+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f60d7000/0x0/0x4ffc00000, data 0x392aec9/0x3a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191381504 unmapped: 32399360 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:11.469696+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191381504 unmapped: 32399360 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da7de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:12.469892+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191381504 unmapped: 32399360 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:13.470112+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.397934914s of 11.817637444s, submitted: 18
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94d396800 session 0x55f94da7c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 32382976 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94d0afc00 session 0x55f94ca354a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:14.470316+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3008815 data_alloc: 234881024 data_used: 23998464
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 32382976 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:15.470530+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 32382976 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f60c4000/0x0/0x4ffc00000, data 0x393dec9/0x3a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:16.470701+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 32382976 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:17.470877+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 32382976 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:18.471035+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 32153600 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f60c4000/0x0/0x4ffc00000, data 0x393dec9/0x3a6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:19.471194+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013215 data_alloc: 234881024 data_used: 24121344
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191700992 unmapped: 32079872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:20.471371+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94dc83400 session 0x55f94dc5b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 32063488 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:21.471514+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94a3c9400 session 0x55f94dbce000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 32063488 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94d0afc00 session 0x55f94b88b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:22.471726+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 32063488 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:23.471874+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94d396800 session 0x55f94b24cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 ms_handle_reset con 0x55f94fccd400 session 0x55f94ca35e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.932109833s of 10.019873619s, submitted: 24
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 32063488 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:24.472162+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 410 heartbeat osd_stat(store_statfs(0x4f60c7000/0x0/0x4ffc00000, data 0x39382fc/0x3a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019172 data_alloc: 234881024 data_used: 24129536
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 410 ms_handle_reset con 0x55f94bd7c000 session 0x55f94be3cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 410 ms_handle_reset con 0x55f94a3c9400 session 0x55f94bd2e3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 32063488 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 410 ms_handle_reset con 0x55f94d0afc00 session 0x55f94cf4c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:25.472376+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191725568 unmapped: 32055296 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 410 ms_handle_reset con 0x55f94d396800 session 0x55f94cf4de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 410 heartbeat osd_stat(store_statfs(0x4f60c4000/0x0/0x4ffc00000, data 0x3939e69/0x3a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:26.472528+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 ms_handle_reset con 0x55f94fccd400 session 0x55f94bd33e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94cac4b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94d672f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 ms_handle_reset con 0x55f94a3c9400 session 0x55f94bd08960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191692800 unmapped: 32088064 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 ms_handle_reset con 0x55f94d0afc00 session 0x55f94cac5c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f6147000/0x0/0x4ffc00000, data 0x38b4a8b/0x39e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:27.472686+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 ms_handle_reset con 0x55f94d396800 session 0x55f94da7c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191709184 unmapped: 32071680 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d606000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 ms_handle_reset con 0x55f94d606000 session 0x55f94caa3a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:28.472833+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 412 ms_handle_reset con 0x55f94fccd400 session 0x55f94ca3f2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 412 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b24cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191709184 unmapped: 32071680 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:29.473038+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013696 data_alloc: 234881024 data_used: 24125440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 412 ms_handle_reset con 0x55f94d0afc00 session 0x55f94b9423c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191709184 unmapped: 32071680 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:30.473236+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 ms_handle_reset con 0x55f94d396800 session 0x55f94bd7ef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94bd7e960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191725568 unmapped: 32055296 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da68f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:31.473396+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94be7af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192266240 unmapped: 31514624 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:32.473572+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 ms_handle_reset con 0x55f94dc80400 session 0x55f94bd481e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 ms_handle_reset con 0x55f94d5f2400 session 0x55f94ca34000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0afc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191225856 unmapped: 32555008 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f71bb000/0x0/0x4ffc00000, data 0x273ae8b/0x2972000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 ms_handle_reset con 0x55f94d0afc00 session 0x55f94cf4d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:33.473701+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40296448 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f71e5000/0x0/0x4ffc00000, data 0x2710e8b/0x2948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:34.473902+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2788469 data_alloc: 234881024 data_used: 12697600
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 183484416 unmapped: 40296448 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:35.474200+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.239036560s of 11.907006264s, submitted: 190
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 414 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b8632c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 39247872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:36.474442+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 39247872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:37.474648+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f71e1000/0x0/0x4ffc00000, data 0x2712926/0x294b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184532992 unmapped: 39247872 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 414 ms_handle_reset con 0x55f94d5f2400 session 0x55f94b24de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:38.474837+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc80400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 415 ms_handle_reset con 0x55f94d396800 session 0x55f94d727860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184541184 unmapped: 39239680 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:39.475060+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2799397 data_alloc: 234881024 data_used: 12713984
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 416 ms_handle_reset con 0x55f94dc80400 session 0x55f94d7d8960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 416 ms_handle_reset con 0x55f94fccd400 session 0x55f94ca34b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 416 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94b024780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184557568 unmapped: 39223296 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:40.475347+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 417 ms_handle_reset con 0x55f94fccd400 session 0x55f94bd2eb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184557568 unmapped: 39223296 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 417 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da68780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:41.475526+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 417 ms_handle_reset con 0x55f94d396800 session 0x55f94caa25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f2400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 417 ms_handle_reset con 0x55f94d5f2400 session 0x55f94d5f7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 417 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d6b4960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f71d6000/0x0/0x4ffc00000, data 0x2718035/0x2956000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184557568 unmapped: 39223296 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:42.475710+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94d579860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94d396800 session 0x55f94d726d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184434688 unmapped: 39346176 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:43.475909+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94fccd400 session 0x55f94d726000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc80400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94dc80400 session 0x55f94adc7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184451072 unmapped: 39329792 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:44.476103+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2816869 data_alloc: 234881024 data_used: 12726272
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94a3c9400 session 0x55f94be7a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184459264 unmapped: 39321600 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94a3dfa40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:45.476302+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f71d2000/0x0/0x4ffc00000, data 0x27197e0/0x295b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184459264 unmapped: 39321600 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:46.476457+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94d396800 session 0x55f94dc5ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.467374802s of 10.813317299s, submitted: 125
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94fccd400 session 0x55f94dc5a960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f71d7000/0x0/0x4ffc00000, data 0x27196fc/0x2957000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:47.476650+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f5800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94d5f5800 session 0x55f94be3a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94a3c9400 session 0x55f94be3be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:48.476853+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94be3a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:49.477034+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 ms_handle_reset con 0x55f94d396800 session 0x55f94b88a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2812746 data_alloc: 234881024 data_used: 12730368
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:50.477234+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 420 ms_handle_reset con 0x55f94fccd400 session 0x55f94bd7f2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d825800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f71d3000/0x0/0x4ffc00000, data 0x271b15f/0x295a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:51.477393+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 420 ms_handle_reset con 0x55f94d825800 session 0x55f94b2485a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:52.477565+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:53.477722+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f71d1000/0x0/0x4ffc00000, data 0x271cd30/0x295d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:54.477874+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2819334 data_alloc: 234881024 data_used: 12746752
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:55.478092+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:56.478325+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f71d1000/0x0/0x4ffc00000, data 0x271cd30/0x295d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:57.478549+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:58.478711+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184467456 unmapped: 39313408 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:59.478888+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2819334 data_alloc: 234881024 data_used: 12746752
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.300554276s of 13.471602440s, submitted: 58
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:00.479109+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f71cd000/0x0/0x4ffc00000, data 0x271e793/0x2960000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:01.479386+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:02.479525+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:03.479700+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f71cd000/0x0/0x4ffc00000, data 0x271e793/0x2960000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:04.479873+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2823508 data_alloc: 234881024 data_used: 12754944
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:05.480101+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f71cd000/0x0/0x4ffc00000, data 0x271e793/0x2960000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:06.480307+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:07.480491+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 421 ms_handle_reset con 0x55f94a3c9400 session 0x55f94af70f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 39297024 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:08.480683+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184492032 unmapped: 39288832 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:09.480871+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 421 ms_handle_reset con 0x55f94d396800 session 0x55f94ca3d0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2823508 data_alloc: 234881024 data_used: 12754944
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.100603104s of 10.117833138s, submitted: 13
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184492032 unmapped: 39288832 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:10.481045+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 422 ms_handle_reset con 0x55f94fccd400 session 0x55f94bd2f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x2720372/0x2964000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184516608 unmapped: 39264256 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 423 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94b23c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:11.481248+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184786944 unmapped: 38993920 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:12.481415+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184795136 unmapped: 38985728 heap: 223780864 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 423 ms_handle_reset con 0x55f94d08c000 session 0x55f94b24de00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:13.481682+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 55754752 heap: 248979456 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:14.481877+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3098266 data_alloc: 234881024 data_used: 12754944
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197427200 unmapped: 51552256 heap: 248979456 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:15.482154+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 51527680 heap: 248979456 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 423 heartbeat osd_stat(store_statfs(0x4ef7e6000/0x0/0x4ffc00000, data 0x8f61eff/0x91a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:16.482386+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b23ba40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 184958976 unmapped: 64020480 heap: 248979456 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:17.483102+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197541888 unmapped: 51437568 heap: 248979456 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:18.483247+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 heartbeat osd_stat(store_statfs(0x4eabe1000/0x0/0x4ffc00000, data 0xdb63adf/0xddac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94b23be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193486848 unmapped: 59695104 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:19.483404+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4611645 data_alloc: 234881024 data_used: 12775424
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202129408 unmapped: 51052544 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.486003399s of 10.092064857s, submitted: 77
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:20.483659+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 185352192 unmapped: 67829760 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:21.483930+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 185524224 unmapped: 67657728 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:22.484131+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 heartbeat osd_stat(store_statfs(0x4e3be2000/0x0/0x4ffc00000, data 0x14b63adf/0x14dac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 202784768 unmapped: 50397184 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:23.484807+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 ms_handle_reset con 0x55f94ca28800 session 0x55f94cd1a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186105856 unmapped: 67076096 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:24.485003+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5338221 data_alloc: 234881024 data_used: 12775424
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186105856 unmapped: 67076096 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:25.485321+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 heartbeat osd_stat(store_statfs(0x4dfbe2000/0x0/0x4ffc00000, data 0x18b63adf/0x18dac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 ms_handle_reset con 0x55f94d08c000 session 0x55f94d63dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186318848 unmapped: 66863104 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:26.485502+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d396800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 425 ms_handle_reset con 0x55f94d396800 session 0x55f94da70b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186318848 unmapped: 66863104 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:27.485665+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 425 ms_handle_reset con 0x55f94a3c9400 session 0x55f94be7b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186327040 unmapped: 66854912 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:28.485943+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 425 heartbeat osd_stat(store_statfs(0x4df07a000/0x0/0x4ffc00000, data 0x196c96b0/0x19913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186335232 unmapped: 66846720 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:29.486131+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5424499 data_alloc: 234881024 data_used: 12783616
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 426 ms_handle_reset con 0x55f94ca28800 session 0x55f94d63d2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 66813952 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:30.486262+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.371029854s of 10.927047729s, submitted: 64
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 427 ms_handle_reset con 0x55f94d08c000 session 0x55f94cd1a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186433536 unmapped: 66748416 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:31.486438+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 427 heartbeat osd_stat(store_statfs(0x4df075000/0x0/0x4ffc00000, data 0x196cb185/0x19918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 ms_handle_reset con 0x55f94fccd400 session 0x55f94b9423c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc7f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 ms_handle_reset con 0x55f94dc7f000 session 0x55f94d726000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94da68000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 66691072 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:32.486566+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 heartbeat osd_stat(store_statfs(0x4df06c000/0x0/0x4ffc00000, data 0x196cedd2/0x19920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186589184 unmapped: 66592768 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:33.486834+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 ms_handle_reset con 0x55f94d08c000 session 0x55f94bd08960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 heartbeat osd_stat(store_statfs(0x4df06c000/0x0/0x4ffc00000, data 0x196cedd2/0x19920000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 428 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 429 ms_handle_reset con 0x55f94fccd400 session 0x55f94cac4b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186605568 unmapped: 66576384 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:34.487006+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5529780 data_alloc: 234881024 data_used: 24735744
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 ms_handle_reset con 0x55f94d82f800 session 0x55f94b23b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82dc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 ms_handle_reset con 0x55f94d82dc00 session 0x55f94d6b5860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 ms_handle_reset con 0x55f94ca28800 session 0x55f94d726d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186613760 unmapped: 66568192 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:35.487177+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186613760 unmapped: 66568192 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:36.487348+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94b8d2c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 ms_handle_reset con 0x55f94d08c000 session 0x55f94da70d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 186621952 unmapped: 66560000 heap: 253181952 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:37.487541+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 heartbeat osd_stat(store_statfs(0x4df065000/0x0/0x4ffc00000, data 0x196d29bd/0x19927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2,13])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 250068992 unmapped: 40886272 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:38.487685+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 heartbeat osd_stat(store_statfs(0x4db467000/0x0/0x4ffc00000, data 0x1d2d29bd/0x1d527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:39.487836+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 99328000 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6487019 data_alloc: 234881024 data_used: 24739840
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:40.488008+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196460544 unmapped: 94494720 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 heartbeat osd_stat(store_statfs(0x4d4c67000/0x0/0x4ffc00000, data 0x23ad29bd/0x23d27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.829863071s of 10.006028175s, submitted: 77
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:41.488125+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196993024 unmapped: 93962240 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 heartbeat osd_stat(store_statfs(0x4d1467000/0x0/0x4ffc00000, data 0x272d29bd/0x27527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:42.488304+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192905216 unmapped: 98050048 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:43.488446+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 93650944 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:44.488670+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 195371008 unmapped: 95584256 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7591711 data_alloc: 234881024 data_used: 24739840
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:45.488883+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 195723264 unmapped: 95232000 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:46.489445+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191692800 unmapped: 99262464 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:47.489600+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200433664 unmapped: 90521600 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 heartbeat osd_stat(store_statfs(0x4c80de000/0x0/0x4ffc00000, data 0x3065b9bd/0x308b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,1,0,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 heartbeat osd_stat(store_statfs(0x4c80de000/0x0/0x4ffc00000, data 0x3065b9bd/0x308b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:48.489772+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 201842688 unmapped: 89112576 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 ms_handle_reset con 0x55f94d82f800 session 0x55f94ca35a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94fccd400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 ms_handle_reset con 0x55f94fccd400 session 0x55f94d5954a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 ms_handle_reset con 0x55f94b8d2c00 session 0x55f94d6b4960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:49.490068+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193478656 unmapped: 97476608 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 431 ms_handle_reset con 0x55f94ca28800 session 0x55f94b88a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 431 ms_handle_reset con 0x55f94d08c000 session 0x55f94cac4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8442421 data_alloc: 234881024 data_used: 25567232
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:50.490287+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193503232 unmapped: 97452032 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 431 heartbeat osd_stat(store_statfs(0x4c54db000/0x0/0x4ffc00000, data 0x3325d09d/0x334b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:51.490464+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 97443840 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.522129059s of 10.462851524s, submitted: 111
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94d82f800 session 0x55f94b863e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94ef8b000 session 0x55f94ca354a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:52.490583+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192815104 unmapped: 98140160 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:53.490695+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192815104 unmapped: 98140160 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:54.490893+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192815104 unmapped: 98140160 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8441575 data_alloc: 234881024 data_used: 25559040
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94d824000 session 0x55f94be3a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:55.491186+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192675840 unmapped: 98279424 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94d824000 session 0x55f94caa2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 heartbeat osd_stat(store_statfs(0x4c50cc000/0x0/0x4ffc00000, data 0x3325e6a9/0x334b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:56.491321+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192684032 unmapped: 98271232 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94ca28800 session 0x55f94da68d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:57.491477+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192684032 unmapped: 98271232 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:58.491681+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192684032 unmapped: 98271232 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94d08c000 session 0x55f94b24c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:59.491861+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94d82f800 session 0x55f94bd7f680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192724992 unmapped: 98230272 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94ef8b000 session 0x55f94d6b3860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8512382 data_alloc: 234881024 data_used: 25559040
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 ms_handle_reset con 0x55f94ef8b000 session 0x55f94b24da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:00.492029+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193904640 unmapped: 97050624 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:01.492181+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192774144 unmapped: 98181120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.557949066s of 10.055332184s, submitted: 110
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 433 ms_handle_reset con 0x55f94d08c000 session 0x55f94be3ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 433 heartbeat osd_stat(store_statfs(0x4c482f000/0x0/0x4ffc00000, data 0x33af916e/0x33d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:02.492386+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 192774144 unmapped: 98181120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 434 ms_handle_reset con 0x55f94d82f800 session 0x55f94d5f6b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:03.492539+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193560576 unmapped: 97394688 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d824000 session 0x55f94da71860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcca000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94dcca000 session 0x55f94be7af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94ca28800 session 0x55f94b24d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:04.492719+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193576960 unmapped: 97378304 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8577202 data_alloc: 234881024 data_used: 25567232
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:05.492940+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193576960 unmapped: 97378304 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:06.493146+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193576960 unmapped: 97378304 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:07.493294+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c41bf000/0x0/0x4ffc00000, data 0x34164eb9/0x343be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193585152 unmapped: 97370112 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:08.493470+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193585152 unmapped: 97370112 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:09.493605+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193585152 unmapped: 97370112 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8577522 data_alloc: 234881024 data_used: 25575424
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:10.493785+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193585152 unmapped: 97370112 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c41bf000/0x0/0x4ffc00000, data 0x34164eb9/0x343be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d824000 session 0x55f94d63dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:11.493948+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193667072 unmapped: 97288192 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:12.494116+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 97280000 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:13.494281+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 97280000 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d08c000 session 0x55f94dc5b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:14.494409+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 97280000 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94a3c9400 session 0x55f94caa3680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82f800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ef8b000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8630960 data_alloc: 234881024 data_used: 25833472
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.625835419s of 13.264857292s, submitted: 29
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94ef8b000 session 0x55f94d7261e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:15.494590+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188973056 unmapped: 101982208 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d82f800 session 0x55f94ca34000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:16.494748+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188973056 unmapped: 101982208 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c49be000/0x0/0x4ffc00000, data 0x33965edc/0x33bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:17.494853+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188973056 unmapped: 101982208 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:18.495032+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 188850176 unmapped: 102105088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:19.495187+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189341696 unmapped: 101613568 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d08c000 session 0x55f94caf30e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d824000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8479755 data_alloc: 234881024 data_used: 20242432
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d824000 session 0x55f94dc5ab40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:20.495311+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d604c00 session 0x55f94be3cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94ca28000 session 0x55f94b125e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:21.495444+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c49bf000/0x0/0x4ffc00000, data 0x33965e79/0x33bbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:22.495600+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:23.495732+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:24.496022+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8477931 data_alloc: 234881024 data_used: 20234240
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:25.496383+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:26.496620+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c49c0000/0x0/0x4ffc00000, data 0x33965e69/0x33bbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:27.496879+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d08c000 session 0x55f94da681e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189349888 unmapped: 101605376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d604c00 session 0x55f94b9430e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:28.497530+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 101933056 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:29.498102+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.344282150s of 14.505332947s, submitted: 32
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 189440000 unmapped: 101515264 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c49c0000/0x0/0x4ffc00000, data 0x33965e69/0x33bbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,2,6])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8602989 data_alloc: 234881024 data_used: 20512768
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:30.498233+0000)
Nov 29 08:16:49 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 194371584 unmapped: 96583680 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:31.498406+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196820992 unmapped: 94134272 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c425a000/0x0/0x4ffc00000, data 0x34945e69/0x3430b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:32.498592+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196829184 unmapped: 94126080 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:33.498736+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196829184 unmapped: 94126080 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f8800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:34.498904+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196853760 unmapped: 94101504 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8613122 data_alloc: 234881024 data_used: 20865024
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:35.499037+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d5f8800 session 0x55f94b23c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196853760 unmapped: 94101504 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:36.499163+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c4232000/0x0/0x4ffc00000, data 0x34985ecc/0x3434c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:37.499298+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:38.499455+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:39.499588+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8612827 data_alloc: 234881024 data_used: 20865024
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:40.499911+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:41.500052+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c4232000/0x0/0x4ffc00000, data 0x34985ecc/0x3434c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c4232000/0x0/0x4ffc00000, data 0x34985ecc/0x3434c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:42.500211+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:43.500472+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196861952 unmapped: 94093312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:44.500708+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196878336 unmapped: 94076928 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8612827 data_alloc: 234881024 data_used: 20865024
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:45.501057+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c4232000/0x0/0x4ffc00000, data 0x34985ecc/0x3434c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196878336 unmapped: 94076928 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:46.501264+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 196878336 unmapped: 94076928 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.043783188s of 17.408653259s, submitted: 104
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d397c00 session 0x55f94adc65a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc82400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94dc82400 session 0x55f94caa3a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:47.501478+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 209592320 unmapped: 81362944 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:48.501669+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d08c000 session 0x55f94b23cd20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d397c00 session 0x55f94cd1a960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 93790208 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:49.501953+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 93790208 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8889405 data_alloc: 234881024 data_used: 20865024
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:50.502135+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c1a72000/0x0/0x4ffc00000, data 0x37145e69/0x36b0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 93790208 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:51.502353+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 93790208 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94ca28800 session 0x55f94d5f6780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b8623c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f8800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:52.502473+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d5f8800 session 0x55f94bd09680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 99237888 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:53.502853+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 99237888 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:54.503108+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 99237888 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8787855 data_alloc: 234881024 data_used: 13684736
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:55.503327+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 99237888 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 heartbeat osd_stat(store_statfs(0x4c2161000/0x0/0x4ffc00000, data 0x36a57e46/0x3641c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:56.503503+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 99237888 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94ca28800 session 0x55f94d727c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.350796700s of 10.404590607s, submitted: 62
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 ms_handle_reset con 0x55f94d08c000 session 0x55f94af705a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:57.503695+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 191725568 unmapped: 99229696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 436 ms_handle_reset con 0x55f94d604c00 session 0x55f94da7c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:58.504175+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 ms_handle_reset con 0x55f94d397c00 session 0x55f94b24d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d822800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 ms_handle_reset con 0x55f94d822800 session 0x55f94da68d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d822800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 ms_handle_reset con 0x55f94d822800 session 0x55f94be3a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197459968 unmapped: 93495296 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da7c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 ms_handle_reset con 0x55f94ca28800 session 0x55f94dbce960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 ms_handle_reset con 0x55f94d08c000 session 0x55f94b24cb40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:59.504350+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 93487104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 heartbeat osd_stat(store_statfs(0x4c0918000/0x0/0x4ffc00000, data 0x37760c78/0x36ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8911878 data_alloc: 234881024 data_used: 13701120
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:00.504928+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 ms_handle_reset con 0x55f94d604c00 session 0x55f94ca354a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 438 ms_handle_reset con 0x55f94d604c00 session 0x55f94cac4d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197533696 unmapped: 93421568 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:01.505187+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197361664 unmapped: 93593600 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:02.505570+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 438 heartbeat osd_stat(store_statfs(0x4c0918000/0x0/0x4ffc00000, data 0x377621a6/0x36ac6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 197361664 unmapped: 93593600 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:03.505757+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 438 heartbeat osd_stat(store_statfs(0x4c0918000/0x0/0x4ffc00000, data 0x377621a6/0x36ac6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 198705152 unmapped: 92250112 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 439 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b88a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 439 ms_handle_reset con 0x55f94ca28800 session 0x55f94ca35a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:04.506040+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 439 heartbeat osd_stat(store_statfs(0x4c161d000/0x0/0x4ffc00000, data 0x363f6726/0x35dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 198746112 unmapped: 92209152 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8814218 data_alloc: 234881024 data_used: 20901888
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:05.506394+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 439 ms_handle_reset con 0x55f94d08c000 session 0x55f94da70d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 198746112 unmapped: 92209152 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:06.506563+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d822800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 440 ms_handle_reset con 0x55f94d822800 session 0x55f94d6b5860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 198754304 unmapped: 92200960 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 440 heartbeat osd_stat(store_statfs(0x4c161c000/0x0/0x4ffc00000, data 0x363f6788/0x35dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:07.506894+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 198754304 unmapped: 92200960 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.267119408s of 10.672985077s, submitted: 146
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 440 ms_handle_reset con 0x55f94ca28800 session 0x55f94cd1a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 440 ms_handle_reset con 0x55f94a3c9400 session 0x55f94cac4b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:08.507163+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 198754304 unmapped: 92200960 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 440 heartbeat osd_stat(store_statfs(0x4c1617000/0x0/0x4ffc00000, data 0x363f83e5/0x35dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,1,1,2,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:09.507410+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 199933952 unmapped: 91021312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9169403 data_alloc: 234881024 data_used: 20926464
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:10.507586+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 208601088 unmapped: 82354176 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:11.507812+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 204775424 unmapped: 86179840 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:12.507992+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 200933376 unmapped: 90021888 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:13.508114+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 209690624 unmapped: 81264640 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 heartbeat osd_stat(store_statfs(0x4b6e13000/0x0/0x4ffc00000, data 0x40bf9e58/0x405cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:14.508352+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 73900032 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 heartbeat osd_stat(store_statfs(0x4b4bd3000/0x0/0x4ffc00000, data 0x427f9e58/0x421cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,1,1])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 10471179 data_alloc: 234881024 data_used: 21286912
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:15.508814+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 221437952 unmapped: 69517312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:16.509011+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218079232 unmapped: 72876032 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:17.509181+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214605824 unmapped: 76349440 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.828237295s of 10.005880356s, submitted: 201
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:18.509348+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 80134144 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94d604c00 session 0x55f94caa25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94ca2c800 session 0x55f94dbce000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94d08c000 session 0x55f94da68000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:19.509561+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94a3c9400 session 0x55f94da7c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212312064 unmapped: 78643200 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94ca28800 session 0x55f94b24da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 heartbeat osd_stat(store_statfs(0x4aab0f000/0x0/0x4ffc00000, data 0x4cecd342/0x4c89f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94ca2c800 session 0x55f94b23c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94d604c00 session 0x55f94da71e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:20.509821+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9099891 data_alloc: 234881024 data_used: 21348352
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212434944 unmapped: 78520320 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0af400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 ms_handle_reset con 0x55f94d0af400 session 0x55f94be3b0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0af400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:21.510039+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 442 ms_handle_reset con 0x55f94d0af400 session 0x55f94caf30e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212451328 unmapped: 78503936 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 442 ms_handle_reset con 0x55f94d397c00 session 0x55f94b23ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:22.510147+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 442 ms_handle_reset con 0x55f94dc83800 session 0x55f94ca41680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 442 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca35860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 442 ms_handle_reset con 0x55f94ca28800 session 0x55f94bd33860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 442 ms_handle_reset con 0x55f94ca2c800 session 0x55f94b88a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212475904 unmapped: 78479360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:23.510414+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212475904 unmapped: 78479360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:24.510743+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211025920 unmapped: 79929344 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 443 heartbeat osd_stat(store_statfs(0x4bff46000/0x0/0x4ffc00000, data 0x37aca43f/0x37497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:25.511068+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9057938 data_alloc: 234881024 data_used: 21323776
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 444 ms_handle_reset con 0x55f94a3c9400 session 0x55f94dbcfe00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 444 ms_handle_reset con 0x55f94ca28800 session 0x55f94dbcf4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211066880 unmapped: 79888384 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:26.511286+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0af400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211066880 unmapped: 79888384 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 444 heartbeat osd_stat(store_statfs(0x4c0f22000/0x0/0x4ffc00000, data 0x36258fae/0x364b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:27.511445+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 445 ms_handle_reset con 0x55f94d0af400 session 0x55f94bd7f2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 445 heartbeat osd_stat(store_statfs(0x4c0f21000/0x0/0x4ffc00000, data 0x3625ab47/0x364bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211091456 unmapped: 79863808 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:28.511739+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.209019661s of 10.714200020s, submitted: 282
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 446 ms_handle_reset con 0x55f94dc83800 session 0x55f94da7cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211132416 unmapped: 79822848 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:29.512509+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 447 ms_handle_reset con 0x55f94d397c00 session 0x55f94da7d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211058688 unmapped: 79896576 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 447 ms_handle_reset con 0x55f94a3c9400 session 0x55f94adc7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:30.515087+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6180333 data_alloc: 234881024 data_used: 20721664
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 447 heartbeat osd_stat(store_statfs(0x4da71c000/0x0/0x4ffc00000, data 0x1ca5e2f9/0x1ccc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211066880 unmapped: 79888384 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:31.515357+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 448 handle_osd_map epochs [448,449], i have 448, src has [1,449]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 449 ms_handle_reset con 0x55f94ca28800 session 0x55f94b248d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206553088 unmapped: 84402176 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 449 ms_handle_reset con 0x55f94ca2c800 session 0x55f94be3be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:32.517384+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:33.519196+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 449 heartbeat osd_stat(store_statfs(0x4f0b15000/0x0/0x4ffc00000, data 0x66618f7/0x68c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:34.521366+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:35.522658+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788871 data_alloc: 234881024 data_used: 20733952
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:36.523035+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 449 heartbeat osd_stat(store_statfs(0x4f0b15000/0x0/0x4ffc00000, data 0x66618f7/0x68c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:37.523326+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 449 heartbeat osd_stat(store_statfs(0x4f0b15000/0x0/0x4ffc00000, data 0x66618f7/0x68c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:38.523589+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 449 heartbeat osd_stat(store_statfs(0x4f0b15000/0x0/0x4ffc00000, data 0x66618f7/0x68c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:39.523935+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:40.524381+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788871 data_alloc: 234881024 data_used: 20733952
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.467955589s of 12.071413994s, submitted: 247
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:41.524552+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 450 heartbeat osd_stat(store_statfs(0x4f0b13000/0x0/0x4ffc00000, data 0x666337a/0x68ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d0af400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 450 ms_handle_reset con 0x55f94d0af400 session 0x55f94af701e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:42.524715+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206569472 unmapped: 84385792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 450 heartbeat osd_stat(store_statfs(0x4f0b12000/0x0/0x4ffc00000, data 0x66633ec/0x68cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:43.524876+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 84377600 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 451 ms_handle_reset con 0x55f94a3c9400 session 0x55f94cf4c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:44.525036+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 451 heartbeat osd_stat(store_statfs(0x4f0b0d000/0x0/0x4ffc00000, data 0x6664fcb/0x68d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 84377600 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:45.525204+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800958 data_alloc: 234881024 data_used: 20742144
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 451 ms_handle_reset con 0x55f94d397c00 session 0x55f94dc5b4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 451 ms_handle_reset con 0x55f94ca2c800 session 0x55f94be7ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206594048 unmapped: 84361216 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:46.525343+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 452 ms_handle_reset con 0x55f94dc83800 session 0x55f94d63c3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206594048 unmapped: 84361216 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:47.525534+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206651392 unmapped: 84303872 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:48.525707+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 452 ms_handle_reset con 0x55f94ca28c00 session 0x55f94cac5c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 452 ms_handle_reset con 0x55f94d604c00 session 0x55f94bd32000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206643200 unmapped: 84312064 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:49.525863+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 84295680 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca3c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:50.526018+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3815506 data_alloc: 234881024 data_used: 20910080
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f0b04000/0x0/0x4ffc00000, data 0x6668798/0x68d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94ca2c800 session 0x55f94cac5c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 84287488 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290907860s of 10.404199600s, submitted: 40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94d397c00 session 0x55f94be7ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:51.526264+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 84287488 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:52.526542+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc83800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94dc83800 session 0x55f94cf4c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 84287488 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94a3c9400 session 0x55f94be3be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:53.526796+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 84287488 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94ca2c800 session 0x55f94adc7e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:54.526929+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f0b05000/0x0/0x4ffc00000, data 0x6668798/0x68d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94d397c00 session 0x55f94da7cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 84287488 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:55.527212+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3815074 data_alloc: 234881024 data_used: 20930560
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94d604c00 session 0x55f94dbcfe00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d602800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 ms_handle_reset con 0x55f94d602800 session 0x55f94b025680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 84295680 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:56.527407+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 454 ms_handle_reset con 0x55f94a3c9400 session 0x55f94bd7ed20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 454 ms_handle_reset con 0x55f94ca2b800 session 0x55f94bd33860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 84287488 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:57.527647+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 454 ms_handle_reset con 0x55f94ca2c800 session 0x55f94da71e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 454 ms_handle_reset con 0x55f94d604c00 session 0x55f94da71860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d82d000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 206667776 unmapped: 84287488 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:58.527834+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 455 ms_handle_reset con 0x55f94d82d000 session 0x55f94bd09a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 455 ms_handle_reset con 0x55f94d397c00 session 0x55f94b23c5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211902464 unmapped: 79052800 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:59.528009+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211910656 unmapped: 79044608 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 456 ms_handle_reset con 0x55f94a3c9400 session 0x55f94caa25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 456 heartbeat osd_stat(store_statfs(0x4f0702000/0x0/0x4ffc00000, data 0x6a6be04/0x6cdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:00.528139+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 456 ms_handle_reset con 0x55f94ca2b800 session 0x55f94da70d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3879830 data_alloc: 251658240 data_used: 29294592
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211910656 unmapped: 79044608 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:01.528356+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.116981506s of 10.549886703s, submitted: 110
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 456 ms_handle_reset con 0x55f94ca2c800 session 0x55f94d63d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211943424 unmapped: 79011840 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:02.528587+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 456 heartbeat osd_stat(store_statfs(0x4f0700000/0x0/0x4ffc00000, data 0x6a6d884/0x6cde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211943424 unmapped: 79011840 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:03.528686+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 457 ms_handle_reset con 0x55f94d604c00 session 0x55f94b8623c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 211943424 unmapped: 79011840 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:04.528896+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216211456 unmapped: 74743808 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 457 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d593860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 457 ms_handle_reset con 0x55f94d604c00 session 0x55f94d7d9860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:05.529129+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3915960 data_alloc: 251658240 data_used: 29310976
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212008960 unmapped: 78946304 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:06.529253+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 ms_handle_reset con 0x55f94ca2b800 session 0x55f94b125e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 heartbeat osd_stat(store_statfs(0x4f02fc000/0x0/0x4ffc00000, data 0x6e6f41d/0x70e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212008960 unmapped: 78946304 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:07.529403+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 ms_handle_reset con 0x55f94d397c00 session 0x55f94adc6f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 ms_handle_reset con 0x55f94ca2c800 session 0x55f94be7bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212033536 unmapped: 78921728 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:08.529609+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 ms_handle_reset con 0x55f94ca2c800 session 0x55f94da7c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 heartbeat osd_stat(store_statfs(0x4f02f9000/0x0/0x4ffc00000, data 0x6e70fb6/0x70e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d6b4f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212033536 unmapped: 78921728 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:09.529911+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212041728 unmapped: 78913536 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 459 ms_handle_reset con 0x55f94ca2b800 session 0x55f94b24cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:10.530039+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 459 ms_handle_reset con 0x55f94d397c00 session 0x55f94be3da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3919972 data_alloc: 251658240 data_used: 29310976
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212041728 unmapped: 78913536 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:11.530276+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 459 ms_handle_reset con 0x55f94d604c00 session 0x55f94d673e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 459 ms_handle_reset con 0x55f94a3c9400 session 0x55f94b88b860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f02f6000/0x0/0x4ffc00000, data 0x6e72a19/0x70e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212090880 unmapped: 78864384 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:12.530417+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 459 ms_handle_reset con 0x55f94ca2b800 session 0x55f94dbcf860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.199579239s of 10.927408218s, submitted: 47
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 460 ms_handle_reset con 0x55f94d604c00 session 0x55f94d6b2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212090880 unmapped: 78864384 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:13.530581+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 460 ms_handle_reset con 0x55f94ca2c800 session 0x55f94b23dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d827400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 460 ms_handle_reset con 0x55f94d827400 session 0x55f94ca3cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212090880 unmapped: 78864384 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:14.530798+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 460 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca345a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 461 ms_handle_reset con 0x55f94d397c00 session 0x55f94d63dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212099072 unmapped: 78856192 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:15.531051+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3925872 data_alloc: 251658240 data_used: 29310976
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212099072 unmapped: 78856192 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:16.531245+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 461 ms_handle_reset con 0x55f94ca2b800 session 0x55f94d5f6780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 461 ms_handle_reset con 0x55f94ca2c800 session 0x55f94bd32780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 461 heartbeat osd_stat(store_statfs(0x4f02f0000/0x0/0x4ffc00000, data 0x6e761ab/0x70ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212099072 unmapped: 78856192 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:17.531431+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d604c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 461 ms_handle_reset con 0x55f94d604c00 session 0x55f94af70000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212156416 unmapped: 78798848 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:18.531585+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212156416 unmapped: 78798848 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:19.531751+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 462 ms_handle_reset con 0x55f94a3c9400 session 0x55f94be3a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 462 handle_osd_map epochs [462,463], i have 462, src has [1,463]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212180992 unmapped: 78774272 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:20.531953+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3933956 data_alloc: 251658240 data_used: 29319168
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 ms_handle_reset con 0x55f94ca2c800 session 0x55f94bd49860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 ms_handle_reset con 0x55f94ca2b800 session 0x55f94cf4d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212180992 unmapped: 78774272 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:21.532171+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 ms_handle_reset con 0x55f94d397c00 session 0x55f94da7d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 ms_handle_reset con 0x55f94bde1c00 session 0x55f94da681e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212180992 unmapped: 78774272 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:22.532329+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 heartbeat osd_stat(store_statfs(0x4f02e9000/0x0/0x4ffc00000, data 0x6e79809/0x70f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212180992 unmapped: 78774272 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 ms_handle_reset con 0x55f94bde1c00 session 0x55f94da7c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:23.532556+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 ms_handle_reset con 0x55f94ca2b800 session 0x55f94b24cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.746485710s of 10.924163818s, submitted: 59
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 ms_handle_reset con 0x55f94ca28800 session 0x55f94d593680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212500480 unmapped: 78454784 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 464 ms_handle_reset con 0x55f94ca2c800 session 0x55f94da7c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:24.532690+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 464 ms_handle_reset con 0x55f94d397c00 session 0x55f94d727c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 464 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d579860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212508672 unmapped: 78446592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:25.533016+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3945681 data_alloc: 251658240 data_used: 31084544
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 464 ms_handle_reset con 0x55f94bde1c00 session 0x55f94caa3680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212508672 unmapped: 78446592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:26.533218+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212508672 unmapped: 78446592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:27.533403+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212508672 unmapped: 78446592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:28.533577+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f02e8000/0x0/0x4ffc00000, data 0x6e7b3da/0x70f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 ms_handle_reset con 0x55f94ca28800 session 0x55f94da68d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 ms_handle_reset con 0x55f94ca2b800 session 0x55f94dc5a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 ms_handle_reset con 0x55f94ca2c800 session 0x55f94b2492c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 78438400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:29.533771+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 heartbeat osd_stat(store_statfs(0x4efed1000/0x0/0x4ffc00000, data 0x6e7cff4/0x70fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca3fe00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 ms_handle_reset con 0x55f94bde1c00 session 0x55f94b942d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212533248 unmapped: 78422016 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:30.533954+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 ms_handle_reset con 0x55f94ca28800 session 0x55f94d7d9e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3904354 data_alloc: 251658240 data_used: 31096832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 ms_handle_reset con 0x55f94ca2b800 session 0x55f94d592f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f06d1000/0x0/0x4ffc00000, data 0x667cff4/0x68fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212541440 unmapped: 78413824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:31.534164+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2c800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 ms_handle_reset con 0x55f94ca2c800 session 0x55f94d5f7c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d63c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 ms_handle_reset con 0x55f94bde1c00 session 0x55f94be3a000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 ms_handle_reset con 0x55f94ca28800 session 0x55f94d63c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:32.534359+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:33.534567+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:34.534793+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f06cf000/0x0/0x4ffc00000, data 0x667ea48/0x68fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:35.535286+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3903922 data_alloc: 251658240 data_used: 31100928
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:36.535688+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f06cf000/0x0/0x4ffc00000, data 0x667ea48/0x68fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:37.536012+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:38.536362+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:39.536711+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f06cf000/0x0/0x4ffc00000, data 0x667ea48/0x68fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:40.536847+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3903922 data_alloc: 251658240 data_used: 31100928
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.996917725s of 17.505907059s, submitted: 92
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 ms_handle_reset con 0x55f94ca2b800 session 0x55f94bd094a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:41.537048+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:42.537227+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212549632 unmapped: 78405632 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f06cf000/0x0/0x4ffc00000, data 0x667eaaa/0x68ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f06cf000/0x0/0x4ffc00000, data 0x667eaaa/0x68ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:43.537462+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 212557824 unmapped: 78397440 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f06cf000/0x0/0x4ffc00000, data 0x667eaaa/0x68ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:44.537646+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 213606400 unmapped: 77348864 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f3000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 467 ms_handle_reset con 0x55f94d5f3000 session 0x55f94d6b3a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:45.537837+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 77365248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 468 ms_handle_reset con 0x55f94d397c00 session 0x55f94da7d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3913611 data_alloc: 251658240 data_used: 31113216
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:46.538025+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 77365248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f06c7000/0x0/0x4ffc00000, data 0x6682206/0x6906000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:47.538170+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 213606400 unmapped: 77348864 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:48.538366+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 213606400 unmapped: 77348864 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 470 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca3e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:49.538519+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 213614592 unmapped: 77340672 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 470 ms_handle_reset con 0x55f94bde1c00 session 0x55f94ca41680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:50.538672+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3918293 data_alloc: 251658240 data_used: 31117312
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:51.538886+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:52.539039+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f06c3000/0x0/0x4ffc00000, data 0x6685900/0x690a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:53.539245+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f06c3000/0x0/0x4ffc00000, data 0x6685900/0x690a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:54.539402+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:55.539688+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3918453 data_alloc: 251658240 data_used: 31121408
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f06c3000/0x0/0x4ffc00000, data 0x6685900/0x690a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.779050827s of 14.950376511s, submitted: 51
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 470 handle_osd_map epochs [471,471], i have 471, src has [1,471]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 471 ms_handle_reset con 0x55f94ca28800 session 0x55f94ca35c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:56.539835+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:57.539975+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214671360 unmapped: 76283904 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 471 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94d6b25a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 472 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94d7265a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:58.540129+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214695936 unmapped: 76259328 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 472 handle_osd_map epochs [472,473], i have 472, src has [1,473]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 473 ms_handle_reset con 0x55f94ca2b800 session 0x55f94bd1a960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:59.540303+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214695936 unmapped: 76259328 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:00.540505+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214695936 unmapped: 76259328 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3931680 data_alloc: 251658240 data_used: 31129600
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f06b8000/0x0/0x4ffc00000, data 0x668ab13/0x6914000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:01.540676+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 476 ms_handle_reset con 0x55f94a3c9400 session 0x55f94cac4780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:02.540794+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 476 ms_handle_reset con 0x55f94bde1c00 session 0x55f94d5f61e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:03.541020+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:04.541177+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:05.541403+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3939952 data_alloc: 251658240 data_used: 31133696
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 476 heartbeat osd_stat(store_statfs(0x4f06af000/0x0/0x4ffc00000, data 0x668fcd2/0x691c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:06.541655+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:07.541845+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:08.542015+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:09.542174+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214761472 unmapped: 76193792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 476 heartbeat osd_stat(store_statfs(0x4f06af000/0x0/0x4ffc00000, data 0x668fcd2/0x691c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 476 handle_osd_map epochs [476,477], i have 476, src has [1,477]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.913267136s of 14.137533188s, submitted: 90
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:10.542336+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214818816 unmapped: 76136448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3941582 data_alloc: 251658240 data_used: 31133696
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:11.542436+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214818816 unmapped: 76136448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f06ae000/0x0/0x4ffc00000, data 0x669176d/0x691f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:12.542555+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214818816 unmapped: 76136448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:13.542745+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f06ae000/0x0/0x4ffc00000, data 0x669176d/0x691f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214818816 unmapped: 76136448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca28800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 477 ms_handle_reset con 0x55f94ca28800 session 0x55f94b249680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:14.542907+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214818816 unmapped: 76136448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:15.543148+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214818816 unmapped: 76136448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 478 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d578000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3946497 data_alloc: 251658240 data_used: 31141888
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 478 ms_handle_reset con 0x55f94bde1c00 session 0x55f94da7d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:16.543299+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214835200 unmapped: 76120064 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 479 ms_handle_reset con 0x55f94ca2b800 session 0x55f94bd7f2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f06aa000/0x0/0x4ffc00000, data 0x669334c/0x6923000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:17.543480+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214843392 unmapped: 76111872 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 479 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94da68780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d397c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 479 ms_handle_reset con 0x55f94d397c00 session 0x55f94be3a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:18.543651+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214867968 unmapped: 76087296 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 479 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d63c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:19.543870+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214884352 unmapped: 76070912 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:20.544036+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f06a8000/0x0/0x4ffc00000, data 0x6694ecb/0x6926000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214884352 unmapped: 76070912 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.594738960s of 10.721902847s, submitted: 74
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3955697 data_alloc: 251658240 data_used: 31150080
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 480 ms_handle_reset con 0x55f94bde1c00 session 0x55f94d592f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:21.544181+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 480 ms_handle_reset con 0x55f94ca2b800 session 0x55f94b942d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214892544 unmapped: 76062720 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 480 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94dc5a3c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:22.544340+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214892544 unmapped: 76062720 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5ff400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 481 ms_handle_reset con 0x55f94d5ff400 session 0x55f94caa3680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 481 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d579860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:23.544520+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214908928 unmapped: 76046336 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:24.544757+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 481 ms_handle_reset con 0x55f94bde1c00 session 0x55f94d593680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214917120 unmapped: 76038144 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f06a2000/0x0/0x4ffc00000, data 0x6698635/0x692b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:25.545016+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214925312 unmapped: 76029952 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958933 data_alloc: 251658240 data_used: 31150080
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 483 ms_handle_reset con 0x55f94ca2b800 session 0x55f94b24cf00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:26.545226+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214933504 unmapped: 76021760 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f069b000/0x0/0x4ffc00000, data 0x669bc4d/0x6931000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 483 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94da681e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f3800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 483 ms_handle_reset con 0x55f94d5f3800 session 0x55f94cf4d860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:27.545404+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f069b000/0x0/0x4ffc00000, data 0x669bc4d/0x6931000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 483 handle_osd_map epochs [484,484], i have 484, src has [1,484]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 214949888 unmapped: 76005376 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 484 ms_handle_reset con 0x55f94a3c9400 session 0x55f94bd49860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:28.545606+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 75948032 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 484 ms_handle_reset con 0x55f94bde1c00 session 0x55f94be3a5a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:29.545820+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 75948032 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 484 ms_handle_reset con 0x55f94ca2b800 session 0x55f94d5f6780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:30.546027+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215007232 unmapped: 75948032 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3967771 data_alloc: 251658240 data_used: 31154176
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.230096817s of 10.395006180s, submitted: 50
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:31.546203+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f0698000/0x0/0x4ffc00000, data 0x669d8a0/0x6936000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215015424 unmapped: 75939840 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 486 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94ca345a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:32.546397+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215040000 unmapped: 75915264 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 486 ms_handle_reset con 0x55f94ca2d800 session 0x55f94d6b2b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:33.546607+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215056384 unmapped: 75898880 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 487 ms_handle_reset con 0x55f94a3c9400 session 0x55f94bd48960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:34.546770+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215089152 unmapped: 75866112 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 487 ms_handle_reset con 0x55f94bde1c00 session 0x55f94d672b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 487 ms_handle_reset con 0x55f94ca2b800 session 0x55f94bd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:35.547066+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 75841536 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3980090 data_alloc: 251658240 data_used: 31170560
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 488 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94da685a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:36.547242+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 75841536 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f068b000/0x0/0x4ffc00000, data 0x66a4508/0x6941000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bd7c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 489 ms_handle_reset con 0x55f94bd7c000 session 0x55f94d6b5680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:37.547385+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215121920 unmapped: 75833344 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:38.547590+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215121920 unmapped: 75833344 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 489 ms_handle_reset con 0x55f94a3c9400 session 0x55f94ca41c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 489 handle_osd_map epochs [489,490], i have 489, src has [1,490]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 490 ms_handle_reset con 0x55f94bde1c00 session 0x55f94b23a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:39.547736+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215138304 unmapped: 75816960 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 490 ms_handle_reset con 0x55f94ca2b800 session 0x55f94be3dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 490 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94dbcf0e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.3 total, 600.0 interval
                                           Cumulative writes: 28K writes, 114K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.80 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8583 writes, 38K keys, 8583 commit groups, 1.0 writes per commit group, ingest: 28.44 MB, 0.05 MB/s
                                           Interval WAL: 8583 writes, 3572 syncs, 2.40 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:40.548026+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 75841536 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3989729 data_alloc: 251658240 data_used: 31182848
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d08a400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 491 ms_handle_reset con 0x55f94d08a400 session 0x55f94b2485a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:41.548226+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 491 heartbeat osd_stat(store_statfs(0x4f0684000/0x0/0x4ffc00000, data 0x66a969f/0x6949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 75841536 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:42.548407+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.902436256s of 11.143766403s, submitted: 97
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215121920 unmapped: 75833344 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:43.548580+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 492 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d726d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215146496 unmapped: 75808768 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 492 ms_handle_reset con 0x55f94bde1c00 session 0x55f94d6b5a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:44.548700+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 493 ms_handle_reset con 0x55f94ca2b800 session 0x55f94da70000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215179264 unmapped: 75776000 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 493 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94d5792c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 493 ms_handle_reset con 0x55f94dc9f000 session 0x55f94bd1b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:45.548944+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215195648 unmapped: 75759616 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3997414 data_alloc: 251658240 data_used: 31186944
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:46.549181+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 493 ms_handle_reset con 0x55f94a3c9400 session 0x55f94d5781e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215195648 unmapped: 75759616 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f067d000/0x0/0x4ffc00000, data 0x66ace19/0x694f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:47.549386+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215195648 unmapped: 75759616 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:48.549616+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 494 ms_handle_reset con 0x55f94bde1c00 session 0x55f94b23b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215228416 unmapped: 75726848 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 494 ms_handle_reset con 0x55f94ca2b800 session 0x55f94be3ad20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2cc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:49.550069+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 494 ms_handle_reset con 0x55f94ca2cc00 session 0x55f94d672000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215261184 unmapped: 75694080 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 495 handle_osd_map epochs [495,496], i have 495, src has [1,496]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 ms_handle_reset con 0x55f94ca2d400 session 0x55f94d727e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94a3c9400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 ms_handle_reset con 0x55f94bde1c00 session 0x55f94be3da40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 heartbeat osd_stat(store_statfs(0x4f0678000/0x0/0x4ffc00000, data 0x66b05b3/0x6955000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:50.550237+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 ms_handle_reset con 0x55f94ca2b800 session 0x55f94be3c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215302144 unmapped: 75653120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 ms_handle_reset con 0x55f94cf18c00 session 0x55f94dc5a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 ms_handle_reset con 0x55f94b960400 session 0x55f94be3c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94cf18c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4006336 data_alloc: 251658240 data_used: 31186944
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:51.550429+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215302144 unmapped: 75653120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 ms_handle_reset con 0x55f94ca2d400 session 0x55f94d592960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:52.550725+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 heartbeat osd_stat(store_statfs(0x4f0674000/0x0/0x4ffc00000, data 0x66b2052/0x6958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215302144 unmapped: 75653120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:53.551000+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.866883278s of 11.090641975s, submitted: 90
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 497 ms_handle_reset con 0x55f94dc9f000 session 0x55f94b1252c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215318528 unmapped: 75636736 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f950407400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 497 ms_handle_reset con 0x55f950407400 session 0x55f94be7a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:54.551239+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 497 ms_handle_reset con 0x55f94bde1c00 session 0x55f94b24d4a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215318528 unmapped: 75636736 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 498 ms_handle_reset con 0x55f94ca2b800 session 0x55f94d593e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:55.551481+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 498 ms_handle_reset con 0x55f94ca2d400 session 0x55f94be7be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215343104 unmapped: 75612160 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4011772 data_alloc: 251658240 data_used: 31191040
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:56.551684+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215343104 unmapped: 75612160 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 498 ms_handle_reset con 0x55f94dc9f000 session 0x55f94d6b45a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:57.551900+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215343104 unmapped: 75612160 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5ffc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:58.552131+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 498 heartbeat osd_stat(store_statfs(0x4f066e000/0x0/0x4ffc00000, data 0x66b582e/0x695f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 498 handle_osd_map epochs [499,499], i have 499, src has [1,499]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 499 ms_handle_reset con 0x55f94d5ffc00 session 0x55f94d6b21e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215351296 unmapped: 75603968 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:59.552280+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 499 ms_handle_reset con 0x55f94bde1c00 session 0x55f94b248b40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 500 ms_handle_reset con 0x55f94ca2b800 session 0x55f94d5f6000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215367680 unmapped: 75587584 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:00.561256+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 501 ms_handle_reset con 0x55f94ca2d400 session 0x55f94da68f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215416832 unmapped: 75538432 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 501 ms_handle_reset con 0x55f94dc9f000 session 0x55f94dbcfc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4020883 data_alloc: 251658240 data_used: 31195136
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:01.561484+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215416832 unmapped: 75538432 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:02.561654+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 501 heartbeat osd_stat(store_statfs(0x4f0667000/0x0/0x4ffc00000, data 0x66baa05/0x6967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215416832 unmapped: 75538432 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:03.561809+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215416832 unmapped: 75538432 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:04.562020+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 75522048 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:05.562198+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 75522048 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4021363 data_alloc: 251658240 data_used: 31207424
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcdb800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.535583496s of 12.698331833s, submitted: 79
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 501 ms_handle_reset con 0x55f94dcdb800 session 0x55f94d7d9860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:06.562403+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 75522048 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:07.562621+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215433216 unmapped: 75522048 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 501 heartbeat osd_stat(store_statfs(0x4f0667000/0x0/0x4ffc00000, data 0x66baa05/0x6967000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 501 handle_osd_map epochs [501,502], i have 501, src has [1,502]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 502 ms_handle_reset con 0x55f94bde1c00 session 0x55f94d673e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:08.562821+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215441408 unmapped: 75513856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 502 ms_handle_reset con 0x55f94ca2b800 session 0x55f94b125e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 502 ms_handle_reset con 0x55f94ca2d400 session 0x55f94dc5af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:09.563006+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215465984 unmapped: 75489280 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 503 ms_handle_reset con 0x55f94dc9f000 session 0x55f94b23be00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcdb800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 503 ms_handle_reset con 0x55f94dcdb800 session 0x55f94a3de960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:10.563163+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 215474176 unmapped: 75481088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4039167 data_alloc: 251658240 data_used: 31215616
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:11.563311+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 504 ms_handle_reset con 0x55f94bde1c00 session 0x55f94cac4000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216555520 unmapped: 74399744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:12.563455+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216563712 unmapped: 74391552 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 504 handle_osd_map epochs [505,505], i have 504, src has [1,505]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 505 ms_handle_reset con 0x55f94ca2b800 session 0x55f94bd32780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 505 heartbeat osd_stat(store_statfs(0x4f065d000/0x0/0x4ffc00000, data 0x66bfbfe/0x6970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:13.563603+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216580096 unmapped: 74375168 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 505 ms_handle_reset con 0x55f94ca2d400 session 0x55f94ca35e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 505 ms_handle_reset con 0x55f94dc9f000 session 0x55f94cac45a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:14.563779+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216580096 unmapped: 74375168 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dcdb800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 506 ms_handle_reset con 0x55f94dcdb800 session 0x55f94b24c000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 506 ms_handle_reset con 0x55f94bde1c00 session 0x55f94be3bc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:15.563940+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216604672 unmapped: 74350592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4049143 data_alloc: 251658240 data_used: 31215616
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:16.564156+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.312512398s of 10.482665062s, submitted: 49
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 506 ms_handle_reset con 0x55f94ca2b800 session 0x55f94b2492c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216612864 unmapped: 74342400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:17.564331+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216612864 unmapped: 74342400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 506 heartbeat osd_stat(store_statfs(0x4f0658000/0x0/0x4ffc00000, data 0x66c3378/0x6976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 507 ms_handle_reset con 0x55f94ca2d400 session 0x55f94d592000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:18.564508+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 507 heartbeat osd_stat(store_statfs(0x4f0658000/0x0/0x4ffc00000, data 0x66c3378/0x6976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216653824 unmapped: 74301440 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 507 ms_handle_reset con 0x55f94dc9f000 session 0x55f94bd7ef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f954871800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 507 handle_osd_map epochs [508,508], i have 507, src has [1,508]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 508 ms_handle_reset con 0x55f954871800 session 0x55f94d672960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:19.564663+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216670208 unmapped: 74285056 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 509 ms_handle_reset con 0x55f94bde1c00 session 0x55f94be3a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:20.564837+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 509 ms_handle_reset con 0x55f94ca2b800 session 0x55f94adc7a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216735744 unmapped: 74219520 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4059482 data_alloc: 251658240 data_used: 31232000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:21.565091+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216735744 unmapped: 74219520 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 509 ms_handle_reset con 0x55f94ca2d400 session 0x55f94d6b2f00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 509 heartbeat osd_stat(store_statfs(0x4f064d000/0x0/0x4ffc00000, data 0x66c85b1/0x697f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:22.565283+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216735744 unmapped: 74219520 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 509 handle_osd_map epochs [510,510], i have 509, src has [1,510]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 510 ms_handle_reset con 0x55f94dc9f000 session 0x55f94b23b680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:23.565471+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216735744 unmapped: 74219520 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 510 ms_handle_reset con 0x55f94dc9e000 session 0x55f94be3d680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:24.565677+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216735744 unmapped: 74219520 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 511 ms_handle_reset con 0x55f94bde1c00 session 0x55f94d593e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:25.566075+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 511 ms_handle_reset con 0x55f94ca2b800 session 0x55f94be3c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216768512 unmapped: 74186752 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 511 ms_handle_reset con 0x55f94ca2d400 session 0x55f94dc5a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4066143 data_alloc: 251658240 data_used: 31244288
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:26.566227+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 511 heartbeat osd_stat(store_statfs(0x4f0648000/0x0/0x4ffc00000, data 0x66cbd2b/0x6985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216768512 unmapped: 74186752 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:27.566357+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.463889122s of 10.822701454s, submitted: 117
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 511 ms_handle_reset con 0x55f94dc9e000 session 0x55f94d6b5a40
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 511 heartbeat osd_stat(store_statfs(0x4f0648000/0x0/0x4ffc00000, data 0x66cbd2b/0x6985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216784896 unmapped: 74170368 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:28.566484+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9f000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216793088 unmapped: 74162176 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 512 ms_handle_reset con 0x55f94dc9f000 session 0x55f94b2485a0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:29.566633+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216809472 unmapped: 74145792 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f0644000/0x0/0x4ffc00000, data 0x66cd936/0x6989000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 512 ms_handle_reset con 0x55f94bde1c00 session 0x55f94bd1b2c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:30.566792+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216842240 unmapped: 74113024 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4075857 data_alloc: 251658240 data_used: 31252480
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 513 handle_osd_map epochs [514,514], i have 513, src has [1,514]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 514 ms_handle_reset con 0x55f94ca2b800 session 0x55f94be3c780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:31.567008+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216850432 unmapped: 74104832 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 514 ms_handle_reset con 0x55f94ca2d400 session 0x55f94be3a1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:32.567165+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 514 ms_handle_reset con 0x55f94dc9e000 session 0x55f94bd7ef00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:33.567311+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:34.567569+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 514 heartbeat osd_stat(store_statfs(0x4f063f000/0x0/0x4ffc00000, data 0x66d0f28/0x698e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:35.567811+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4076290 data_alloc: 251658240 data_used: 31252480
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:36.568055+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:37.568251+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 514 heartbeat osd_stat(store_statfs(0x4f063f000/0x0/0x4ffc00000, data 0x66d0f28/0x698e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:38.568403+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:39.568594+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:40.568736+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4076290 data_alloc: 251658240 data_used: 31252480
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:41.568909+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 216866816 unmapped: 74088448 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 514 heartbeat osd_stat(store_statfs(0x4f063f000/0x0/0x4ffc00000, data 0x66d0f28/0x698e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 514 handle_osd_map epochs [515,515], i have 514, src has [1,515]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.461789131s of 14.721035004s, submitted: 66
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:42.569096+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 73015296 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:43.569293+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217939968 unmapped: 73015296 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:44.569479+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:45.569684+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079264 data_alloc: 251658240 data_used: 31252480
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063c000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:46.569843+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:47.570040+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:48.570204+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:49.570397+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:50.570568+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063c000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079264 data_alloc: 251658240 data_used: 31252480
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:51.570762+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217948160 unmapped: 73007104 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:52.570954+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217956352 unmapped: 72998912 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:53.571152+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217956352 unmapped: 72998912 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:54.571333+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217956352 unmapped: 72998912 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:55.571502+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063c000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217964544 unmapped: 72990720 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079264 data_alloc: 251658240 data_used: 31252480
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:56.571652+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217972736 unmapped: 72982528 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:57.571810+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217972736 unmapped: 72982528 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:58.572097+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063c000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217972736 unmapped: 72982528 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.569801331s of 16.750499725s, submitted: 12
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:59.572232+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217989120 unmapped: 72966144 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:00.572356+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 217989120 unmapped: 72966144 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079024 data_alloc: 251658240 data_used: 31268864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:01.572480+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218120192 unmapped: 72835072 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:02.572590+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 ms_handle_reset con 0x55f94d824800 session 0x55f94d579e00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94bde1c00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:03.572718+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:04.572864+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:05.573035+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079024 data_alloc: 251658240 data_used: 31268864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:06.573249+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:07.573414+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:08.573569+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:09.573709+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:10.573851+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079024 data_alloc: 251658240 data_used: 31268864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:11.574061+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:12.574241+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:13.574380+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:14.574518+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218349568 unmapped: 72605696 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:15.574729+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218357760 unmapped: 72597504 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079024 data_alloc: 251658240 data_used: 31268864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:16.574846+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218357760 unmapped: 72597504 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:17.575010+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218357760 unmapped: 72597504 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:18.575182+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218357760 unmapped: 72597504 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:19.575391+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218365952 unmapped: 72589312 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:20.575579+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218374144 unmapped: 72581120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079024 data_alloc: 251658240 data_used: 31268864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:21.575751+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218374144 unmapped: 72581120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:22.575916+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218374144 unmapped: 72581120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:23.576111+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218374144 unmapped: 72581120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:24.576373+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218374144 unmapped: 72581120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:25.576665+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218374144 unmapped: 72581120 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079024 data_alloc: 251658240 data_used: 31268864
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:26.576911+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.321714401s of 27.634147644s, submitted: 110
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f063d000/0x0/0x4ffc00000, data 0x66d298b/0x6991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 ms_handle_reset con 0x55f94ca2b800 session 0x55f94be7a780
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218382336 unmapped: 72572928 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:27.577084+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _renew_subs
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 515 handle_osd_map epochs [516,516], i have 515, src has [1,516]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218382336 unmapped: 72572928 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 ms_handle_reset con 0x55f94ca2d400 session 0x55f94be3c1e0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:28.577299+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218406912 unmapped: 72548352 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:29.577479+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 ms_handle_reset con 0x55f94dccac00 session 0x55f94caa3680
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 ms_handle_reset con 0x55f94dc9e000 session 0x55f94da68000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218423296 unmapped: 72531968 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:30.577679+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2dc00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 ms_handle_reset con 0x55f94ca2dc00 session 0x55f94ca34000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218431488 unmapped: 72523776 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 ms_handle_reset con 0x55f94ca2b800 session 0x55f94cf4dc20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4085786 data_alloc: 251658240 data_used: 31277056
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:31.577811+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 heartbeat osd_stat(store_statfs(0x4f0638000/0x0/0x4ffc00000, data 0x66d457a/0x6996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218431488 unmapped: 72523776 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 ms_handle_reset con 0x55f94ca2d400 session 0x55f94b88af00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dc9e000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:32.578002+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94dccac00
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 ms_handle_reset con 0x55f94dccac00 session 0x55f94b942d20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94d5f9000
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218497024 unmapped: 72458240 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 516 handle_osd_map epochs [517,517], i have 516, src has [1,517]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:33.578120+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 ms_handle_reset con 0x55f94d5f9000 session 0x55f94d63c960
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218513408 unmapped: 72441856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:34.578329+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 ms_handle_reset con 0x55f94dc9e000 session 0x55f94ca35860
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218513408 unmapped: 72441856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:35.578560+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2b800
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 ms_handle_reset con 0x55f94ca2b800 session 0x55f94ca412c0
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: handle_auth_request added challenge on 0x55f94ca2d400
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 ms_handle_reset con 0x55f94ca2d400 session 0x55f94bd09c20
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218513408 unmapped: 72441856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4087470 data_alloc: 251658240 data_used: 31285248
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:36.578743+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 heartbeat osd_stat(store_statfs(0x4f0637000/0x0/0x4ffc00000, data 0x66d60d9/0x6997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218513408 unmapped: 72441856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:37.578930+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218513408 unmapped: 72441856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 heartbeat osd_stat(store_statfs(0x4f0637000/0x0/0x4ffc00000, data 0x66d60d9/0x6997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:38.579103+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218513408 unmapped: 72441856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:39.579283+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218513408 unmapped: 72441856 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:40.579433+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 heartbeat osd_stat(store_statfs(0x4f0637000/0x0/0x4ffc00000, data 0x66d60d9/0x6997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 517 handle_osd_map epochs [518,518], i have 517, src has [1,518]
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.598101616s of 14.025245667s, submitted: 27
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:41.579639+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:42.579852+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:43.580037+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:44.580225+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:45.580479+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:46.580625+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:47.580757+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:48.581030+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218546176 unmapped: 72409088 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:49.581214+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218554368 unmapped: 72400896 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:50.581382+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218554368 unmapped: 72400896 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:51.581560+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218554368 unmapped: 72400896 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:52.581770+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218554368 unmapped: 72400896 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:53.581944+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218554368 unmapped: 72400896 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:54.582205+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218554368 unmapped: 72400896 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:55.582447+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218554368 unmapped: 72400896 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:56.582675+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218562560 unmapped: 72392704 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:57.582923+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218562560 unmapped: 72392704 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:58.583073+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218562560 unmapped: 72392704 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:59.583293+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:00.583482+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:01.583660+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:02.583839+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:03.584063+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:04.584226+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:05.584440+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:06.584639+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:07.584869+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:08.585119+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218570752 unmapped: 72384512 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:09.585344+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218578944 unmapped: 72376320 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:10.585590+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218578944 unmapped: 72376320 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:11.585837+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218578944 unmapped: 72376320 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:12.586043+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218595328 unmapped: 72359936 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:13.586219+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218595328 unmapped: 72359936 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:14.586480+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218595328 unmapped: 72359936 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:15.586689+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218595328 unmapped: 72359936 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:16.586893+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:17.587056+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:18.587211+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:19.587357+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:20.587572+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:21.588032+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:22.588201+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:23.588562+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:24.588726+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:25.589030+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:26.589283+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:27.589508+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:28.589749+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:29.590063+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:30.590233+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:31.590425+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:32.590615+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:33.590787+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:34.591058+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:35.591408+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218619904 unmapped: 72335360 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:36.591621+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218628096 unmapped: 72327168 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:37.591841+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218628096 unmapped: 72327168 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:38.592064+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218628096 unmapped: 72327168 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:39.592297+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218628096 unmapped: 72327168 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:40.592488+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218636288 unmapped: 72318976 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:41.592685+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218636288 unmapped: 72318976 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:42.592913+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218636288 unmapped: 72318976 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:43.593134+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:44.593321+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:45.593553+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:46.593724+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:47.593902+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:48.594104+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:49.594310+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:50.594525+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218652672 unmapped: 72302592 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:51.594753+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:52.595114+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:53.595374+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:54.595588+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:55.595821+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:56.596020+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:57.596331+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:58.596511+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:59.596772+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218660864 unmapped: 72294400 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:00.596910+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:01.597548+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:02.598027+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:03.598475+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:04.598675+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:05.598942+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:06.599152+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:07.599310+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:08.599453+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218685440 unmapped: 72269824 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:09.599603+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218701824 unmapped: 72253440 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:10.599763+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218710016 unmapped: 72245248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:11.599998+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218710016 unmapped: 72245248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:12.600104+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218710016 unmapped: 72245248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:13.600242+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218710016 unmapped: 72245248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:14.600350+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218710016 unmapped: 72245248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:15.600458+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218710016 unmapped: 72245248 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:16.600616+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'config diff' '{prefix=config diff}'
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218603520 unmapped: 72351744 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'config show' '{prefix=config show}'
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 08:16:49 compute-0 ceph-osd[89840]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f0633000/0x0/0x4ffc00000, data 0x66d7b3c/0x699a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:49 compute-0 ceph-osd[89840]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:49 compute-0 ceph-osd[89840]: bluestore.MempoolThread(0x55f9496e3b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4091644 data_alloc: 251658240 data_used: 31293440
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:17.600737+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218865664 unmapped: 72089600 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: tick
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_tickets
Nov 29 08:16:49 compute-0 ceph-osd[89840]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:18.600874+0000)
Nov 29 08:16:49 compute-0 ceph-osd[89840]: prioritycache tune_memory target: 4294967296 mapped: 218292224 unmapped: 72663040 heap: 290955264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:49 compute-0 ceph-osd[89840]: do_command 'log dump' '{prefix=log dump}'
Nov 29 08:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 08:16:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1491442431' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 08:16:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3130557991' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 08:16:49 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:16:49 compute-0 nova_compute[256729]: 2025-11-29 08:16:49.818 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:49 compute-0 rsyslogd[1007]: imjournal from <np0005539576:ceph-osd>: begin to drop messages due to rate-limiting
Nov 29 08:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 08:16:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156159757' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 08:16:49 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 08:16:49 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163202496' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1694480650' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1491442431' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3130557991' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3156159757' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2163202496' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 08:16:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712946938' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 08:16:50 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3575596609' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:16:50 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19365 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:50 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19363 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:51 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19367 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:51 compute-0 ceph-mon[75050]: pgmap v2450: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1712946938' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 08:16:51 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3575596609' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 08:16:51 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19369 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:51 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19371 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:51 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:51 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19375 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
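[editor's note] The mgr audit entries share the same structure as the mon ones; when triaging a burst like this, it helps to tally which command prefixes dominate. A hypothetical parser (not part of Ceph) for the cmd=[{...}]: dispatch suffix of a single-command audit line:

    import json
    import re
    from typing import Optional

    # Hypothetical helper: extract the command prefix from one audit line.
    AUDIT_RE = re.compile(r"cmd=\[(\{.*\})\]: dispatch$")

    def audit_prefix(line: str) -> Optional[str]:
        m = AUDIT_RE.search(line)
        return json.loads(m.group(1)).get("prefix") if m else None

    line = ("log_channel(audit) log [DBG] : from='client.19371 -' "
            'entity=\'client.admin\' cmd=[{"prefix": "orch ls", '
            '"target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch')
    print(audit_prefix(line))  # -> orch ls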
Nov 29 08:16:53 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:53 compute-0 nova_compute[256729]: 2025-11-29 08:16:53.541 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:53 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19379 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:53 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 08:16:53 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817500454' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 08:16:53 compute-0 ceph-mon[75050]: from='client.19365 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:53 compute-0 ceph-mon[75050]: from='client.19363 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:53 compute-0 ceph-mon[75050]: from='client.19367 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19381 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 08:16:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1761140041' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19385 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 08:16:54 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103446988' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.19369 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.19371 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: pgmap v2451: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.19375 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: pgmap v2452: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.19379 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2817500454' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.19381 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1761140041' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 08:16:54 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2103446988' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 08:16:54 compute-0 nova_compute[256729]: 2025-11-29 08:16:54.820 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 08:16:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974811685' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 08:16:55 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 29474816 heap: 128991232 old mem: 2845415832 new mem: 2845415832
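[editor's note] The prioritycache tune_memory lines that repeat through the rest of this burst show the OSD's cache autotuner working toward a fixed budget. The target of 4294967296 bytes is exactly the stock 4 GiB osd_memory_target, and "old mem" equaling "new mem" means no resize was needed this tick:

    # 4294967296 bytes is the default osd_memory_target (4 GiB); it can be
    # changed cluster-wide with `ceph config set osd osd_memory_target <bytes>`.
    target = 4294967296
    print(target == 4 * 2**30, target // 2**30, "GiB")  # True 4 GiB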
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:16.842369+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 29450240 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 203 heartbeat osd_stat(store_statfs(0x4ef387000/0x0/0x4ffc00000, data 0xbdb4a18/0xbed7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
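[editor's note] The heartbeat osd_stat lines carry store_statfs counters as hex byte counts. Reading the first and last fields of the leading triple as available and total bytes (a hedged reading, but one consistent with the pgmap figures), each OSD here sits on a ~20 GiB device, which matches three OSDs behind the 60 GiB cluster total reported above:

    # Hedged decode of store_statfs(0x4ef387000/0x0/0x4ffc00000, ...):
    # first field read as available bytes, last field as total bytes.
    avail, total = 0x4ef387000, 0x4ffc00000
    print(round(total / 2**30, 1))           # ~20.0 GiB per OSD; 3 OSDs -> 60 GiB
    print(round((total - avail) / 2**20))    # ~264 MiB consumed on this OSD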
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:17.843277+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 29450240 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.898819923s of 10.686198235s, submitted: 102
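[editor's note] _kv_sync_thread reports how much of its sampling window the RocksDB sync thread spent idle; here it was idle for a bit over half the window while committing 102 transactions, i.e. the KV backend is nowhere near saturated:

    idle, window = 5.898819923, 10.686198235
    print(f"{idle / window:.0%} idle")  # ~55%: the KV sync thread is mostly waiting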
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:18.843480+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 29417472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:19.843718+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 203 heartbeat osd_stat(store_statfs(0x4ee387000/0x0/0x4ffc00000, data 0xcdb4a18/0xced7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 29417472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2700011 data_alloc: 218103808 data_used: 761856
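[editor's note] _resize_shards shows how the 2845415832-byte budget from the autotuner is split between the kv, kv_onode, meta and data cache shards; the four allocations should sum to no more than the budget, which checks out:

    kv, kv_onode, meta, data = 1207959552, 234881024, 1140850688, 218103808
    cache_size = 2845415832
    total = kv + kv_onode + meta + data
    print(total, total <= cache_size)  # 2801795072 True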
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:20.844329+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 203 heartbeat osd_stat(store_statfs(0x4ee387000/0x0/0x4ffc00000, data 0xcdb4a18/0xced7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 204 ms_handle_reset con 0x55819b3dec00 session 0x55819dd34d20
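[editor's note] The handle_osd_map lines encode the map catch-up protocol: the bracketed range is the epochs carried by this message, "i have" is the OSD's current epoch, and "src has" is the sender's full range, so the line above applies epoch 204 and advances the OSD from 203 to 204. A hypothetical parse of those fields:

    import re

    # Hypothetical field extraction for a handle_osd_map log line.
    m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]",
                  "osd.0 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]")
    first, last, have, src_lo, src_hi = map(int, m.groups())
    print(last - have)  # 1 -> one new epoch to apply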
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 29417472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:21.844646+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99573760 unmapped: 29417472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:22.844865+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 29384704 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:23.845019+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 heartbeat osd_stat(store_statfs(0x4ee383000/0x0/0x4ffc00000, data 0xcdb6595/0xceda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99647488 unmapped: 29343744 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:24.845196+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819a18d400 session 0x55819a1874a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 28672000 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384394 data_alloc: 218103808 data_used: 770048
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819b3de800 session 0x55819b2110e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819b3dfc00 session 0x55819d9025a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:25.845565+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99835904 unmapped: 29155328 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 heartbeat osd_stat(store_statfs(0x4faa7a000/0x0/0x4ffc00000, data 0x6bd1c8/0x7e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:26.845717+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819da1b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819da1b000 session 0x55819d61d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99540992 unmapped: 29450240 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819a18d400 session 0x55819e732960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:27.846066+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 29433856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:28.846360+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 heartbeat osd_stat(store_statfs(0x4faa7c000/0x0/0x4ffc00000, data 0x6bd166/0x7e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 29433856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:29.846694+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 29433856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381024 data_alloc: 218103808 data_used: 770048
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 heartbeat osd_stat(store_statfs(0x4faa7c000/0x0/0x4ffc00000, data 0x6bd166/0x7e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:30.847097+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819b3de800 session 0x55819c1623c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 29433856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:31.847234+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 29433856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819b3dec00 session 0x55819d70da40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:32.847477+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819b3dfc00 session 0x55819c17dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99557376 unmapped: 29433856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.652188301s of 14.691629410s, submitted: 122
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 ms_handle_reset con 0x55819d40d800 session 0x55819d9023c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:33.847721+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 29384704 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:34.847839+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 11K writes, 3270 syncs, 3.45 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5611 writes, 17K keys, 5611 commit groups, 1.0 writes per commit group, ingest: 9.49 MB, 0.02 MB/s
                                           Interval WAL: 5611 writes, 2346 syncs, 2.39 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
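[editor's note] The periodic RocksDB stats dump is internally consistent and worth sanity-checking when reading unfamiliar logs; for the 600-second interval above, the derived rates match the printed ones:

    writes, syncs, ingest_mb, interval_s = 5611, 2346, 9.49, 600.0
    print(round(writes / syncs, 2))          # 2.39 writes per sync, as printed
    print(round(ingest_mb / interval_s, 2))  # 0.02 MB/s, as printed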
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 heartbeat osd_stat(store_statfs(0x4faa77000/0x0/0x4ffc00000, data 0x6bebd9/0x7e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 29384704 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387036 data_alloc: 218103808 data_used: 778240
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:35.848031+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 29384704 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:36.848163+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:37.848390+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:38.848552+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:39.848744+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 heartbeat osd_stat(store_statfs(0x4faa77000/0x0/0x4ffc00000, data 0x6bebd9/0x7e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394556 data_alloc: 218103808 data_used: 1835008
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:40.848878+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:41.849018+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:42.849238+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:43.849376+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 heartbeat osd_stat(store_statfs(0x4faa77000/0x0/0x4ffc00000, data 0x6bebd9/0x7e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:44.849564+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394556 data_alloc: 218103808 data_used: 1835008
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:45.849657+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 29368320 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:46.849810+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99631104 unmapped: 29360128 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:47.850007+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.523371696s of 14.547148705s, submitted: 11
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819b3dec00 session 0x55819e732f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 29351936 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:48.850163+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 99639296 unmapped: 29351936 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:49.850359+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 heartbeat osd_stat(store_statfs(0x4faa77000/0x0/0x4ffc00000, data 0x6bebd9/0x7e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 102834176 unmapped: 26157056 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463710 data_alloc: 218103808 data_used: 2072576
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819b3dfc00 session 0x55819e7332c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:50.850554+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819d40c400 session 0x55819e733860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd8c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819ccd8c00 session 0x55819d70c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 27410432 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819c763400 session 0x55819b3c50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:51.850694+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 27385856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:52.850892+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101605376 unmapped: 27385856 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819b3dec00 session 0x55819dace780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:53.851023+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 27369472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:54.851224+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 27369472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533352 data_alloc: 218103808 data_used: 2076672
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:55.851376+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 heartbeat osd_stat(store_statfs(0x4f9952000/0x0/0x4ffc00000, data 0x17e3c7b/0x190c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 27369472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:56.851561+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 27369472 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:57.851724+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.962596893s of 10.216440201s, submitted: 135
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819b3dfc00 session 0x55819dacfa40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101638144 unmapped: 27353088 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:58.851897+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101638144 unmapped: 27353088 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:52:59.852097+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 heartbeat osd_stat(store_statfs(0x4fa236000/0x0/0x4ffc00000, data 0xeffbd9/0x1027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101638144 unmapped: 27353088 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470066 data_alloc: 218103808 data_used: 2076672
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:00.852242+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd8c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 ms_handle_reset con 0x55819ccd8c00 session 0x55819af530e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 27344896 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:01.852405+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 27344896 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:02.852575+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 27344896 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:03.852744+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 27344896 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 206 handle_osd_map epochs [206,207], i have 206, src has [1,207]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:04.852906+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 207 ms_handle_reset con 0x55819d40c400 session 0x55819acdde00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 207 heartbeat osd_stat(store_statfs(0x4fa237000/0x0/0x4ffc00000, data 0xeffbd9/0x1027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101662720 unmapped: 27328512 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475561 data_alloc: 218103808 data_used: 2084864
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:05.853051+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 207 ms_handle_reset con 0x55819c2eb000 session 0x55819cd592c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101695488 unmapped: 27295744 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:06.853204+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 207 ms_handle_reset con 0x55819b3dec00 session 0x55819e733680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101703680 unmapped: 27287552 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 207 handle_osd_map epochs [207,208], i have 207, src has [1,208]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 ms_handle_reset con 0x55819c2eb800 session 0x55819b3c2f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:07.853378+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101711872 unmapped: 27279360 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.174076080s of 10.538993835s, submitted: 54
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 ms_handle_reset con 0x55819b3df000 session 0x55819aef5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:08.853503+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 ms_handle_reset con 0x55819b3dfc00 session 0x55819e7325a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 27262976 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:09.853666+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 heartbeat osd_stat(store_statfs(0x4fa231000/0x0/0x4ffc00000, data 0xf03327/0x102d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101728256 unmapped: 27262976 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477383 data_alloc: 218103808 data_used: 2093056
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:10.853883+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd8c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 ms_handle_reset con 0x55819ccd8c00 session 0x55819b3b4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 27246592 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:11.854106+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 27246592 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:12.854322+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 27246592 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:13.854485+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101744640 unmapped: 27246592 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 heartbeat osd_stat(store_statfs(0x4fa231000/0x0/0x4ffc00000, data 0xf03327/0x102d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 208 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:14.854639+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 209 heartbeat osd_stat(store_statfs(0x4fa22d000/0x0/0x4ffc00000, data 0xf04d8a/0x1030000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101752832 unmapped: 27238400 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483031 data_alloc: 218103808 data_used: 2101248
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:15.854798+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101752832 unmapped: 27238400 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:16.854945+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 27222016 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:17.855081+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 27222016 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:18.855235+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101769216 unmapped: 27222016 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.682723999s of 10.889600754s, submitted: 35
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:19.855399+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 209 heartbeat osd_stat(store_statfs(0x4f9eff000/0x0/0x4ffc00000, data 0x1233d8a/0x135f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 209 ms_handle_reset con 0x55819b3dfc00 session 0x55819990af00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 209 ms_handle_reset con 0x55819b3df000 session 0x55819aef4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101777408 unmapped: 27213824 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1527379 data_alloc: 218103808 data_used: 2101248
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 209 ms_handle_reset con 0x55819c2eb800 session 0x55819c308f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:20.855538+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101277696 unmapped: 27713536 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 210 ms_handle_reset con 0x55819d40c400 session 0x55819a59f860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:21.855684+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 210 ms_handle_reset con 0x55819c6fd000 session 0x55819cd7dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101335040 unmapped: 27656192 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 ms_handle_reset con 0x55819d40d800 session 0x55819c1883c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:22.855839+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 heartbeat osd_stat(store_statfs(0x4f9e72000/0x0/0x4ffc00000, data 0x12ba548/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 ms_handle_reset con 0x55819b3df000 session 0x55819a0b5860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101384192 unmapped: 27607040 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:23.856057+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 ms_handle_reset con 0x55819b3dfc00 session 0x55819c2b4960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 27598848 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:24.856259+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 heartbeat osd_stat(store_statfs(0x4f9e72000/0x0/0x4ffc00000, data 0x12ba548/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 ms_handle_reset con 0x55819c2eb800 session 0x55819c2b4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 ms_handle_reset con 0x55819d40c400 session 0x55819c2b54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 27533312 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1539156 data_alloc: 218103808 data_used: 2125824
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:25.856530+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 ms_handle_reset con 0x55819b3df000 session 0x55819d82dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dfc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 212 ms_handle_reset con 0x55819d40d800 session 0x55819dacef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101531648 unmapped: 27459584 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:26.856662+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 213 ms_handle_reset con 0x55819c6fd800 session 0x55819e732f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 213 ms_handle_reset con 0x55819c2eb800 session 0x55819b3c4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 101646336 unmapped: 27344896 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:27.856802+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819da1a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 214 ms_handle_reset con 0x55819da1a800 session 0x55819c308780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 26181632 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:28.857050+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 26181632 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:29.857227+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.491354942s of 10.287419319s, submitted: 76
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 214 ms_handle_reset con 0x55819b3df000 session 0x55819c057680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 214 heartbeat osd_stat(store_statfs(0x4f9e67000/0x0/0x4ffc00000, data 0x12bf885/0x13f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 214 ms_handle_reset con 0x55819c2eb800 session 0x55819e732960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 26181632 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576394 data_alloc: 218103808 data_used: 5713920
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:30.857384+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 214 heartbeat osd_stat(store_statfs(0x4f9e69000/0x0/0x4ffc00000, data 0x12bf885/0x13f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 26181632 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819da1a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 215 ms_handle_reset con 0x55819d40d800 session 0x55819c043e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:31.857516+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 215 ms_handle_reset con 0x55819da1a800 session 0x55819c05b680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 216 ms_handle_reset con 0x55819c6fd800 session 0x55819c034d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 216 ms_handle_reset con 0x55819b3df000 session 0x55819a6190e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 102858752 unmapped: 26132480 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:32.857685+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 216 ms_handle_reset con 0x55819c2eb800 session 0x55819e732780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819da1a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 217 ms_handle_reset con 0x55819d40d800 session 0x55819c1883c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 217 ms_handle_reset con 0x55819c31a400 session 0x55819d6e4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 217 ms_handle_reset con 0x55819da1a800 session 0x55819c05b860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 103038976 unmapped: 25952256 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:33.857871+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 217 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x12c4b36/0x13fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 25927680 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:34.858043+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 103079936 unmapped: 25911296 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1591687 data_alloc: 218103808 data_used: 5738496
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:35.858189+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 218 heartbeat osd_stat(store_statfs(0x4f9e5e000/0x0/0x4ffc00000, data 0x12c6717/0x1400000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 218 ms_handle_reset con 0x55819b3df000 session 0x55819c2b5860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 103145472 unmapped: 25845760 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:36.858349+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 103145472 unmapped: 25845760 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:37.858523+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 218 ms_handle_reset con 0x55819b3dec00 session 0x55819a76a000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 218 heartbeat osd_stat(store_statfs(0x4f9e5f000/0x0/0x4ffc00000, data 0x12c66b5/0x13ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 103145472 unmapped: 25845760 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:38.858705+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 ms_handle_reset con 0x55819b3de800 session 0x55819d9021e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 ms_handle_reset con 0x55819a18d400 session 0x55819c034b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 103145472 unmapped: 25845760 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:39.858926+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.799779892s of 10.012530327s, submitted: 143
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x12c8138/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 ms_handle_reset con 0x55819c2eb800 session 0x55819acdc960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 26288128 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508673 data_alloc: 218103808 data_used: 4464640
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:40.859048+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 heartbeat osd_stat(store_statfs(0x4fa79e000/0x0/0x4ffc00000, data 0x9820c6/0xaba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 111157248 unmapped: 17833984 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:41.859164+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 16293888 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:42.859353+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 16293888 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:43.859563+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 heartbeat osd_stat(store_statfs(0x4f9d25000/0x0/0x4ffc00000, data 0x13f90c6/0x1531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 16293888 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:44.859788+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 16293888 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616569 data_alloc: 218103808 data_used: 6594560
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 heartbeat osd_stat(store_statfs(0x4f9d25000/0x0/0x4ffc00000, data 0x13f90c6/0x1531000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:45.860014+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 112762880 unmapped: 16228352 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:46.860183+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 219 handle_osd_map epochs [219,220], i have 219, src has [1,220]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 112017408 unmapped: 16973824 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:47.860319+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 221 ms_handle_reset con 0x55819b3de800 session 0x55819d902000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 16998400 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:48.860507+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 221 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x141d868/0x1558000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 ms_handle_reset con 0x55819a18d400 session 0x55819c1cf860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 ms_handle_reset con 0x55819b3df000 session 0x55819c2b45a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 ms_handle_reset con 0x55819b3dec00 session 0x55819a133680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 ms_handle_reset con 0x55819c31a400 session 0x55819a618b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 16998400 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x141d868/0x1558000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:49.860699+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.356463432s of 10.147337914s, submitted: 199
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 120397824 unmapped: 8593408 heap: 128991232 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1667319 data_alloc: 218103808 data_used: 6610944
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:50.860893+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 50470912 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:51.861034+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 54534144 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:52.861206+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 50225152 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:53.861452+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 heartbeat osd_stat(store_statfs(0x4ef0ff000/0x0/0x4ffc00000, data 0xc022449/0xc15f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,2,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 222 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 49856512 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:54.862644+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125820928 unmapped: 40984576 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513665 data_alloc: 218103808 data_used: 6631424
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:55.862763+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 53354496 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:56.862911+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:57.863044+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126140416 unmapped: 40665088 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:58.863194+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 117760000 unmapped: 49045504 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 223 heartbeat osd_stat(store_statfs(0x4e6cfc000/0x0/0x4ffc00000, data 0x14423ec8/0x14562000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:53:59.863410+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 122085376 unmapped: 44720128 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:00.863561+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 122101760 unmapped: 44703744 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3871591 data_alloc: 218103808 data_used: 6639616
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.622306347s of 11.342620850s, submitted: 95
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:01.863723+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 117907456 unmapped: 48898048 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 heartbeat osd_stat(store_statfs(0x4e5cf5000/0x0/0x4ffc00000, data 0x1542892b/0x15568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 heartbeat osd_stat(store_statfs(0x4e5cf5000/0x0/0x4ffc00000, data 0x1542892b/0x15568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,3,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:02.863903+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 36102144 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:03.864020+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126525440 unmapped: 40280064 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:04.864197+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 ms_handle_reset con 0x55819b3dec00 session 0x55819a0b4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126550016 unmapped: 40255488 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:05.864351+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 51789824 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3992399 data_alloc: 218103808 data_used: 6647808
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:06.864507+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 123568128 unmapped: 43237376 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:07.864658+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 34668544 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 heartbeat osd_stat(store_statfs(0x4e2cf5000/0x0/0x4ffc00000, data 0x1842893b/0x18569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:08.864849+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 34668544 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 ms_handle_reset con 0x55819dbf3800 session 0x55819b3c1e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:09.865102+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 115433472 unmapped: 51372032 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 heartbeat osd_stat(store_statfs(0x4e1cd7000/0x0/0x4ffc00000, data 0x1903693b/0x19177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [0,0,0,0,1,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:10.865280+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 225 ms_handle_reset con 0x55819b3de800 session 0x55819acdd0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 50937856 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425100 data_alloc: 218103808 data_used: 6668288
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 225 ms_handle_reset con 0x55819dbf3c00 session 0x55819a618d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 225 ms_handle_reset con 0x55819a18d400 session 0x55819b3d90e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.793136537s of 10.006286621s, submitted: 125
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 ms_handle_reset con 0x55819b3de800 session 0x55819b3b5a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 ms_handle_reset con 0x55819b3dec00 session 0x55819d82c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 ms_handle_reset con 0x55819d40d800 session 0x55819acdde00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:11.865424+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 116064256 unmapped: 50741248 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 ms_handle_reset con 0x55819b3df000 session 0x55819a1a0d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 ms_handle_reset con 0x55819dbf3800 session 0x55819c05ba40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:12.865574+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 52969472 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 heartbeat osd_stat(store_statfs(0x4f88c4000/0x0/0x4ffc00000, data 0x1447519/0x158a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 ms_handle_reset con 0x55819b3de800 session 0x55819df78960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 226 handle_osd_map epochs [226,227], i have 226, src has [1,227]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 227 ms_handle_reset con 0x55819b3df000 session 0x55819e732f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:13.865732+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 52805632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 227 heartbeat osd_stat(store_statfs(0x4f98bf000/0x0/0x4ffc00000, data 0x144929e/0x158e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 227 ms_handle_reset con 0x55819d40d800 session 0x55819e733a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 227 ms_handle_reset con 0x55819b3dec00 session 0x55819c2b4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d316c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 227 ms_handle_reset con 0x55819d316c00 session 0x55819d903c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:14.865896+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 52805632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 227 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:15.866087+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 52805632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1761789 data_alloc: 218103808 data_used: 6676480
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 228 ms_handle_reset con 0x55819dbf3c00 session 0x55819af53e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:16.866226+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 114049024 unmapped: 52756480 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 228 ms_handle_reset con 0x55819b3de800 session 0x55819b3c25a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 229 ms_handle_reset con 0x55819d40d800 session 0x55819c042000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 229 ms_handle_reset con 0x55819b3dec00 session 0x55819b3c54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c885400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:17.866393+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 229 ms_handle_reset con 0x55819c885400 session 0x55819dd35c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 35512320 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 229 ms_handle_reset con 0x55819b3df000 session 0x55819c042960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:18.866587+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 131563520 unmapped: 35241984 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:19.866807+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 229 heartbeat osd_stat(store_statfs(0x4fa11d000/0x0/0x4ffc00000, data 0x1c09684/0x1d4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 131571712 unmapped: 35233792 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 229 ms_handle_reset con 0x55819b3de800 session 0x55819dacf860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:20.867094+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869121 data_alloc: 234881024 data_used: 16060416
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 42704896 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:21.867316+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 230 ms_handle_reset con 0x55819b3dec00 session 0x55819e732780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 42704896 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.403121948s of 10.724184036s, submitted: 274
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 231 ms_handle_reset con 0x55819d40d800 session 0x55819c1883c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:22.867498+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 42704896 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:23.867691+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124100608 unmapped: 42704896 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:24.867838+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 42688512 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 231 heartbeat osd_stat(store_statfs(0x4fa117000/0x0/0x4ffc00000, data 0x1c0cee0/0x1d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 231 ms_handle_reset con 0x55819dbf3c00 session 0x55819cd7de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:25.868259+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1877843 data_alloc: 234881024 data_used: 16068608
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 42672128 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:26.868449+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 232 ms_handle_reset con 0x55819b3de800 session 0x55819a1323c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 42672128 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3df000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 232 ms_handle_reset con 0x55819b3dec00 session 0x55819a59f4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:27.868637+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124149760 unmapped: 42655744 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c884000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:28.868812+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 232 heartbeat osd_stat(store_statfs(0x4fa114000/0x0/0x4ffc00000, data 0x1c0e9b7/0x1d59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124166144 unmapped: 42639360 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 232 ms_handle_reset con 0x55819c884000 session 0x55819c043c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 232 ms_handle_reset con 0x55819d40d800 session 0x55819d70d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:29.869067+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124215296 unmapped: 42590208 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 232 ms_handle_reset con 0x55819b3df000 session 0x55819c034f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:30.869201+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1879012 data_alloc: 234881024 data_used: 16072704
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124223488 unmapped: 42582016 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:31.869375+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c884000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 36192256 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819c884000 session 0x55819df79e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.295181751s of 10.322017670s, submitted: 106
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819b3dec00 session 0x55819d82d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:32.869542+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819b3de800 session 0x55819c17de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f9bd8000/0x0/0x4ffc00000, data 0x21483f1/0x2295000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125427712 unmapped: 41377792 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c885000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819d40d800 session 0x55819990ab40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819c885000 session 0x55819a76b2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f9b69000/0x0/0x4ffc00000, data 0x21b842a/0x2305000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:33.869731+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 41361408 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:34.869874+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 41361408 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819b3de800 session 0x55819d82d0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:35.870039+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1938475 data_alloc: 234881024 data_used: 16080896
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 41361408 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f9b69000/0x0/0x4ffc00000, data 0x21b842a/0x2305000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c884000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819c884000 session 0x55819c019860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:36.870231+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 41361408 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819d40d800 session 0x55819d61cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:37.870384+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125444096 unmapped: 41361408 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819b3dec00 session 0x55819d70d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819c827000 session 0x55819d82cb40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:38.870532+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 41287680 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f9b26000/0x0/0x4ffc00000, data 0x21fa453/0x2348000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:39.870776+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125517824 unmapped: 41287680 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c884000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819b3de800 session 0x55819c2b5680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819d40d800 session 0x55819dace960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:40.870921+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2035348 data_alloc: 234881024 data_used: 16097280
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125558784 unmapped: 41246720 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f8f5a000/0x0/0x4ffc00000, data 0x2dc648c/0x2f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:41.871044+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:42.871190+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:43.871321+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:44.871463+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f8f5a000/0x0/0x4ffc00000, data 0x2dc648c/0x2f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819c77a800 session 0x55819df78000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819e004000 session 0x55819c042f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819e004400 session 0x55819af532c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.661128998s of 13.241044998s, submitted: 45
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 ms_handle_reset con 0x55819c77a800 session 0x55819b3c34a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:45.871599+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2053996 data_alloc: 234881024 data_used: 18636800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f8f5a000/0x0/0x4ffc00000, data 0x2dc648c/0x2f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f8f59000/0x0/0x4ffc00000, data 0x2dc649c/0x2f15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:46.871772+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:47.871930+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:48.872110+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 heartbeat osd_stat(store_statfs(0x4f8f54000/0x0/0x4ffc00000, data 0x2dc8042/0x2f19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 40116224 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:49.872357+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 126951424 unmapped: 39854080 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:50.872492+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2181770 data_alloc: 234881024 data_used: 18644992
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 139804672 unmapped: 27000832 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:51.872664+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 139804672 unmapped: 27000832 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:52.872851+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819d40d800 session 0x55819dacfc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819e004000 session 0x55819c17cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819e004400 session 0x55819d6e4960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 38756352 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819e004800 session 0x55819d903680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819c77a800 session 0x55819d70c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819b3de800 session 0x55819c188960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:53.873022+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819d40d800 session 0x55819b2ccd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 128049152 unmapped: 38756352 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 heartbeat osd_stat(store_statfs(0x4f841a000/0x0/0x4ffc00000, data 0x39010ed/0x3a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:54.873941+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819e004000 session 0x55819d70cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819e004400 session 0x55819df78b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 130433024 unmapped: 36372480 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819c77a800 session 0x55819d70cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819d40d800 session 0x55819a59f4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1.705877781s of 10.018067360s, submitted: 80
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819b3de800 session 0x55819e7323c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:55.874179+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2201672 data_alloc: 234881024 data_used: 22519808
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 131162112 unmapped: 35643392 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e004000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:56.874490+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 131178496 unmapped: 35627008 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:57.874693+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 132063232 unmapped: 34742272 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:58.875079+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 25059328 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:54:59.875377+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 heartbeat osd_stat(store_statfs(0x4f82ef000/0x0/0x4ffc00000, data 0x3a2c0ae/0x3b7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 25059328 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:00.875536+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2275916 data_alloc: 251658240 data_used: 32968704
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141795328 unmapped: 25010176 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:01.875869+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 24813568 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:02.876064+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819fceb400 session 0x55819a133a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 24584192 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:03.876261+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142221312 unmapped: 24584192 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:04.876431+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 heartbeat osd_stat(store_statfs(0x4f82ef000/0x0/0x4ffc00000, data 0x3a2c0ae/0x3b7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 ms_handle_reset con 0x55819fceb800 session 0x55819cd7d0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142295040 unmapped: 24510464 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:05.876574+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.678486824s of 10.341062546s, submitted: 13
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2281612 data_alloc: 251658240 data_used: 32968704
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 24469504 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:06.876744+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142344192 unmapped: 24461312 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:07.876869+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142344192 unmapped: 24461312 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:08.877043+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142475264 unmapped: 24330240 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 heartbeat osd_stat(store_statfs(0x4f82ed000/0x0/0x4ffc00000, data 0x3a2c120/0x3b81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:09.877373+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142508032 unmapped: 24297472 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:10.877592+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2283905 data_alloc: 251658240 data_used: 33042432
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142508032 unmapped: 24297472 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:11.877809+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142508032 unmapped: 24297472 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:12.878063+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 heartbeat osd_stat(store_statfs(0x4f81bb000/0x0/0x4ffc00000, data 0x3b5e120/0x3cb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,10,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144162816 unmapped: 22642688 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:13.878284+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 149610496 unmapped: 17195008 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819b3de800 session 0x55819cd7d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819fceb800 session 0x55819d82d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f81bb000/0x0/0x4ffc00000, data 0x3b5e120/0x3cb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:14.878414+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7a29000/0x0/0x4ffc00000, data 0x42e4c9d/0x443b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 151576576 unmapped: 15228928 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7a29000/0x0/0x4ffc00000, data 0x42e4c9d/0x443b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:15.878538+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819c77a800 session 0x55819c019860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.254458189s of 10.006518364s, submitted: 118
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2398188 data_alloc: 251658240 data_used: 33546240
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f77fd000/0x0/0x4ffc00000, data 0x4519c9d/0x4670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819d40d800 session 0x55819c034f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 19193856 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:16.878954+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 19193856 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:17.879175+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 19193856 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819fceb400 session 0x55819c308960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819b3de800 session 0x55819b302f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:18.879371+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f774c000/0x0/0x4ffc00000, data 0x45ccc8d/0x4722000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147472384 unmapped: 19333120 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:19.879594+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147472384 unmapped: 19333120 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:20.879789+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2390268 data_alloc: 251658240 data_used: 33624064
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147472384 unmapped: 19333120 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:21.879937+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 19193856 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:22.880110+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 19193856 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:23.880307+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7727000/0x0/0x4ffc00000, data 0x45f1c8d/0x4747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 19185664 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:24.880465+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 19185664 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:25.880652+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2392748 data_alloc: 251658240 data_used: 33632256
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 19185664 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:26.880817+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 19185664 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:27.880986+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 19185664 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7727000/0x0/0x4ffc00000, data 0x45f1c8d/0x4747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:28.881127+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 19185664 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:29.881303+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 19013632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:30.881452+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2396268 data_alloc: 251658240 data_used: 34258944
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 19013632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7727000/0x0/0x4ffc00000, data 0x45f1c8d/0x4747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:31.881719+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 19013632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:32.881939+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 19013632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:33.882184+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.360483170s of 17.898168564s, submitted: 48
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 19013632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:34.882317+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7724000/0x0/0x4ffc00000, data 0x45f4c8d/0x474a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 19013632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:35.882459+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2396136 data_alloc: 251658240 data_used: 34258944
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 19013632 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:36.882646+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 18997248 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:37.882869+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819c77a800 session 0x55819c2b4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 18997248 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:38.883041+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7724000/0x0/0x4ffc00000, data 0x45f4c8d/0x474a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 18997248 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7724000/0x0/0x4ffc00000, data 0x45f4c8d/0x474a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:39.883254+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 heartbeat osd_stat(store_statfs(0x4f7724000/0x0/0x4ffc00000, data 0x45f4c8d/0x474a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819b3dec00 session 0x55819e732000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 ms_handle_reset con 0x55819c884000 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 18997248 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fcebc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 236 ms_handle_reset con 0x55819fceb800 session 0x55819d70c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:40.883480+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 236 ms_handle_reset con 0x55819fcebc00 session 0x55819d70c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2361662 data_alloc: 251658240 data_used: 33476608
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146948096 unmapped: 19857408 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 236 heartbeat osd_stat(store_statfs(0x4f7a71000/0x0/0x4ffc00000, data 0x42a580a/0x43fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:41.883616+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146972672 unmapped: 19832832 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 237 ms_handle_reset con 0x55819b3de800 session 0x55819e733c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:42.883791+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 237 heartbeat osd_stat(store_statfs(0x4f7a71000/0x0/0x4ffc00000, data 0x42a580a/0x43fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146980864 unmapped: 19824640 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:43.884040+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 237 ms_handle_reset con 0x55819c77a800 session 0x55819b3d9680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146989056 unmapped: 19816448 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:44.884244+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c884000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.594800949s of 11.420295715s, submitted: 26
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 148054016 unmapped: 18751488 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 238 ms_handle_reset con 0x55819b3dec00 session 0x55819cd7dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 238 ms_handle_reset con 0x55819c884000 session 0x55819af52d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:45.884412+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 238 ms_handle_reset con 0x55819b3de800 session 0x55819c162d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2365642 data_alloc: 251658240 data_used: 33480704
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 148062208 unmapped: 18743296 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:46.884583+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 238 heartbeat osd_stat(store_statfs(0x4f7a6d000/0x0/0x4ffc00000, data 0x42a8ef4/0x4401000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 148144128 unmapped: 18661376 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 239 ms_handle_reset con 0x55819b3dec00 session 0x55819c034780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:47.884730+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 239 ms_handle_reset con 0x55819c77a800 session 0x55819d902d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fcebc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 239 ms_handle_reset con 0x55819fcebc00 session 0x55819a59e1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146079744 unmapped: 20725760 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d316c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819da1a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 239 ms_handle_reset con 0x55819da1a800 session 0x55819b2103c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 240 ms_handle_reset con 0x55819d316c00 session 0x55819dacef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:48.884882+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146112512 unmapped: 20692992 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:49.885090+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146112512 unmapped: 20692992 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 241 ms_handle_reset con 0x55819b3de800 session 0x55819d61c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:50.885220+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2274663 data_alloc: 251658240 data_used: 27230208
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146169856 unmapped: 20635648 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:51.885382+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 241 heartbeat osd_stat(store_statfs(0x4f821d000/0x0/0x4ffc00000, data 0x3af5403/0x3c4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 241 ms_handle_reset con 0x55819c6fd400 session 0x55819cd590e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 241 ms_handle_reset con 0x55819b3dec00 session 0x55819d70d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 241 ms_handle_reset con 0x55819b3dfc00 session 0x55819b3c5680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146169856 unmapped: 20635648 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:52.885504+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146169856 unmapped: 20635648 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:53.886171+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 133349376 unmapped: 33456128 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819b3dec00 session 0x55819d82de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:54.886292+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819b3de800 session 0x55819d82c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 33423360 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.121215820s of 10.254724503s, submitted: 137
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819c6fd400 session 0x55819d903e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:55.886453+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2087216 data_alloc: 234881024 data_used: 12128256
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819d40d800 session 0x55819c2b4960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 133382144 unmapped: 33423360 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:56.886580+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819e004000 session 0x55819d902f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819fceb000 session 0x55819a1a0960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 33382400 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 heartbeat osd_stat(store_statfs(0x4f9089000/0x0/0x4ffc00000, data 0x2c89052/0x2de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819b3de800 session 0x55819d902960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:57.886754+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819b3dec00 session 0x55819acdc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 ms_handle_reset con 0x55819c6fd400 session 0x55819d82c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 42303488 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:58.886917+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 42303488 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:55:59.887161+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 42303488 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:00.887316+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1839737 data_alloc: 218103808 data_used: 1052672
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124502016 unmapped: 42303488 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:01.887527+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 42287104 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:02.887700+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 heartbeat osd_stat(store_statfs(0x4fa589000/0x0/0x4ffc00000, data 0x178ba68/0x18e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 ms_handle_reset con 0x55819d40d800 session 0x55819cd7d0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 ms_handle_reset con 0x55819b3de800 session 0x55819cd7d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 ms_handle_reset con 0x55819b3dec00 session 0x55819cd7dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124518400 unmapped: 42287104 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 ms_handle_reset con 0x55819c6fd400 session 0x55819b3d9680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:03.887868+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d316c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 ms_handle_reset con 0x55819fceb000 session 0x55819b302f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 41811968 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 ms_handle_reset con 0x55819c77a800 session 0x55819d70c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 ms_handle_reset con 0x55819d316c00 session 0x55819c019860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:04.888058+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125001728 unmapped: 41803776 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:05.888204+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1838558 data_alloc: 218103808 data_used: 1048576
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125001728 unmapped: 41803776 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.745172501s of 10.931041718s, submitted: 154
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:06.888379+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 244 ms_handle_reset con 0x55819b3de800 session 0x55819d70da40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125001728 unmapped: 41803776 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 244 ms_handle_reset con 0x55819b3dec00 session 0x55819d70d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 244 heartbeat osd_stat(store_statfs(0x4fa560000/0x0/0x4ffc00000, data 0x16dfa68/0x1838000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:07.888496+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 244 ms_handle_reset con 0x55819c6fd400 session 0x55819d70c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 244 ms_handle_reset con 0x55819fceb000 session 0x55819d70d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 244 ms_handle_reset con 0x55819b3de800 session 0x55819d70c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124960768 unmapped: 41844736 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 244 handle_osd_map epochs [244,245], i have 244, src has [1,245]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:08.888629+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3dec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124960768 unmapped: 41844736 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:09.888749+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 245 heartbeat osd_stat(store_statfs(0x4fa62e000/0x0/0x4ffc00000, data 0x16e31b6/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 124968960 unmapped: 41836544 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:10.888878+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1872314 data_alloc: 218103808 data_used: 4956160
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 41107456 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 245 heartbeat osd_stat(store_statfs(0x4fa62e000/0x0/0x4ffc00000, data 0x16e31b6/0x183e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:11.889005+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 245 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 246 ms_handle_reset con 0x55819c6fd400 session 0x55819c1cfc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 41107456 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:12.889256+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 246 heartbeat osd_stat(store_statfs(0x4fa62b000/0x0/0x4ffc00000, data 0x16e4db1/0x1842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 41107456 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:13.889399+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 41107456 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:14.889570+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 41107456 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:15.889698+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1876432 data_alloc: 218103808 data_used: 4956160
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 41107456 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:16.889839+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 41107456 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:17.890029+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 246 heartbeat osd_stat(store_statfs(0x4fa62b000/0x0/0x4ffc00000, data 0x16e4db1/0x1842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125706240 unmapped: 41099264 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:18.890170+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 246 heartbeat osd_stat(store_statfs(0x4fa62b000/0x0/0x4ffc00000, data 0x16e4db1/0x1842000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125706240 unmapped: 41099264 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:19.890366+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.377964973s of 13.675966263s, submitted: 17
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d316c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125706240 unmapped: 41099264 heap: 166805504 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819d316c00 session 0x55819d902780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fcebc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819fcebc00 session 0x55819c308780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819c6fd800 session 0x55819c0185a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819b3de800 session 0x55819d6e4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:20.890529+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819c6fd400 session 0x55819ace21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d316c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819d316c00 session 0x55819b2cc780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fcebc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819fcebc00 session 0x55819d6e5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fdc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1990678 data_alloc: 218103808 data_used: 4993024
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 125992960 unmapped: 44490752 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:21.890704+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819c6fdc00 session 0x55819d6e5860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819b3de800 session 0x55819c019e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 39862272 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:22.890856+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f803d000/0x0/0x4ffc00000, data 0x2b2e824/0x2c8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129744896 unmapped: 40738816 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:23.891120+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 41279488 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:24.891268+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 41279488 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:25.891437+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058656 data_alloc: 218103808 data_used: 5799936
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 41279488 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:26.891561+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f9d000/0x0/0x4ffc00000, data 0x2bd1824/0x2d31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 41279488 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:27.891713+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f9d000/0x0/0x4ffc00000, data 0x2bd1824/0x2d31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 41279488 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:28.891858+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819c6fd400 session 0x55819c3094a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd4cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:29.892106+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.081851959s of 10.065759659s, submitted: 148
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:30.892945+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2061546 data_alloc: 218103808 data_used: 5804032
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:31.893371+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:32.894008+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f7a000/0x0/0x4ffc00000, data 0x2bf3847/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:33.899266+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f7a000/0x0/0x4ffc00000, data 0x2bf3847/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:34.899474+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:35.900071+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2061546 data_alloc: 218103808 data_used: 5804032
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:36.900523+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f7a000/0x0/0x4ffc00000, data 0x2bf3847/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 41132032 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:37.900673+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f7a000/0x0/0x4ffc00000, data 0x2bf3847/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 33103872 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:38.900784+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 33103872 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:39.900930+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 33103872 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:40.901285+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f7a000/0x0/0x4ffc00000, data 0x2bf3847/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2153386 data_alloc: 234881024 data_used: 18771968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137412608 unmapped: 33071104 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:41.901546+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f7a000/0x0/0x4ffc00000, data 0x2bf3847/0x2d54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.993530273s of 12.151553154s, submitted: 1
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 33054720 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:42.901704+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819c6fc400 session 0x55819d82d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df70000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819df70000 session 0x55819acdd0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 32563200 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7f77000/0x0/0x4ffc00000, data 0x2bf6847/0x2d57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:43.901820+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 32563200 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:44.902044+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 heartbeat osd_stat(store_statfs(0x4f7772000/0x0/0x4ffc00000, data 0x33fb847/0x355c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 32563200 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:45.902181+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 ms_handle_reset con 0x55819df71400 session 0x55819c2b5a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2220930 data_alloc: 234881024 data_used: 18784256
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137920512 unmapped: 32563200 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:46.902305+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 32555008 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:47.902441+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 248 ms_handle_reset con 0x55819c6fc400 session 0x55819cd590e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137986048 unmapped: 32497664 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:48.902721+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 248 heartbeat osd_stat(store_statfs(0x4f6f32000/0x0/0x4ffc00000, data 0x3c393c4/0x3d9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,30,18])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 248 ms_handle_reset con 0x55819c6fd400 session 0x55819c308960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141172736 unmapped: 29310976 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:49.903050+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141172736 unmapped: 29310976 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:50.903257+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2332472 data_alloc: 234881024 data_used: 18849792
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141189120 unmapped: 29294592 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:51.903400+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819b3de800 session 0x55819daceb40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142270464 unmapped: 28213248 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:52.903616+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.268254280s of 10.489465714s, submitted: 119
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819df71000 session 0x55819c05a5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 28819456 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:53.903774+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 28819456 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df70000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:54.904015+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 heartbeat osd_stat(store_statfs(0x4f6a3b000/0x0/0x4ffc00000, data 0x412ef81/0x4292000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819b3dec00 session 0x55819d70cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819df70000 session 0x55819cd58b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:55.904137+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819b3de800 session 0x55819acdd0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2207087 data_alloc: 234881024 data_used: 15048704
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819c6fd400 session 0x55819cd58780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819c6fc400 session 0x55819d902780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 29712384 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:56.904728+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 ms_handle_reset con 0x55819df71400 session 0x55819c1cfc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819df71800 session 0x55819a1a0960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819b3de800 session 0x55819a6190e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 28819456 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:57.904874+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819df71000 session 0x55819cd7de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 28811264 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:58.905077+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 heartbeat osd_stat(store_statfs(0x4f7166000/0x0/0x4ffc00000, data 0x38e9afe/0x3a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 28811264 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:56:59.905281+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 28811264 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:00.905390+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2250410 data_alloc: 234881024 data_used: 15056896
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819c6fc400 session 0x55819c05b860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fd400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141672448 unmapped: 28811264 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:01.905581+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 heartbeat osd_stat(store_statfs(0x4f7166000/0x0/0x4ffc00000, data 0x38e9afe/0x3a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [1,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819c6fd400 session 0x55819cd581e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:02.908711+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:03.908888+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 heartbeat osd_stat(store_statfs(0x4f7281000/0x0/0x4ffc00000, data 0x38e9a5c/0x3a4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:04.909079+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:05.909260+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246002 data_alloc: 234881024 data_used: 15052800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:06.909455+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:07.909603+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:08.909770+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:09.910028+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819b3de800 session 0x55819c2b45a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 heartbeat osd_stat(store_statfs(0x4f7281000/0x0/0x4ffc00000, data 0x38e9a5c/0x3a4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:10.910184+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246002 data_alloc: 234881024 data_used: 15052800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 28794880 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:11.910310+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819c6fc400 session 0x55819c189a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819df71000 session 0x55819b3c3860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.913255692s of 19.603511810s, submitted: 96
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 ms_handle_reset con 0x55819df71800 session 0x55819c2b50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 28925952 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:12.911060+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df70000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 28925952 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:13.911202+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 28884992 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:14.911377+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142254080 unmapped: 28229632 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:15.911603+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 heartbeat osd_stat(store_statfs(0x4f725c000/0x0/0x4ffc00000, data 0x390da6b/0x3a72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 251 ms_handle_reset con 0x55819e22f800 session 0x55819e733680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2286648 data_alloc: 234881024 data_used: 19132416
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142262272 unmapped: 28221440 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:16.911766+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142262272 unmapped: 28221440 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:17.911904+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:18.912046+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142262272 unmapped: 28221440 heap: 170483712 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 252 ms_handle_reset con 0x55819f664000 session 0x55819c17cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 252 ms_handle_reset con 0x55819b3de800 session 0x55819d61dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:19.912201+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141885440 unmapped: 32276480 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f665400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 252 ms_handle_reset con 0x55819f665400 session 0x55819b3c03c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 252 ms_handle_reset con 0x55819e22f400 session 0x55819cd7c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22fc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 252 ms_handle_reset con 0x55819e22fc00 session 0x55819dacf680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:20.912364+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141910016 unmapped: 32251904 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 252 ms_handle_reset con 0x55819e22f000 session 0x55819b3025a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 253 ms_handle_reset con 0x55819b3de800 session 0x55819a133c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 253 heartbeat osd_stat(store_statfs(0x4f687d000/0x0/0x4ffc00000, data 0x42e71c7/0x444f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 253 ms_handle_reset con 0x55819e22f400 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 253 ms_handle_reset con 0x55819f664000 session 0x55819c308000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f665400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2380264 data_alloc: 234881024 data_used: 19136512
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:21.912516+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142032896 unmapped: 32129024 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 253 ms_handle_reset con 0x55819f665400 session 0x55819b2ccd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 254 ms_handle_reset con 0x55819e22f400 session 0x55819d6e5a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.627970695s of 10.049513817s, submitted: 63
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:22.912663+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143876096 unmapped: 30285824 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 254 ms_handle_reset con 0x55819f664000 session 0x55819d6e4b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:23.912834+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146751488 unmapped: 27410432 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 ms_handle_reset con 0x55819b3de800 session 0x55819d70c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 ms_handle_reset con 0x55819e22f000 session 0x55819a59f860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22ec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:24.913005+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146751488 unmapped: 27410432 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 ms_handle_reset con 0x55819cd4cc00 session 0x55819c1cfe00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 ms_handle_reset con 0x55819c6fc000 session 0x55819aef4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f6852000/0x0/0x4ffc00000, data 0x43104a6/0x447b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [1,2] op hist [0,0,1,3])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 ms_handle_reset con 0x55819e22ec00 session 0x55819c1ce780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:25.913114+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 138018816 unmapped: 36143104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2150439 data_alloc: 218103808 data_used: 6414336
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:26.913300+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 36839424 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:27.913497+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 36839424 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f7cf4000/0x0/0x4ffc00000, data 0x2e83486/0x2bca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,1,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:28.913703+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 38215680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 ms_handle_reset con 0x55819b3de800 session 0x55819d82d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:29.913901+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 38215680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f7c67000/0x0/0x4ffc00000, data 0x2f09491/0x2c51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:30.914045+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135897088 unmapped: 38264832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 256 heartbeat osd_stat(store_statfs(0x4f7c69000/0x0/0x4ffc00000, data 0x2f0af35/0x2c54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2161500 data_alloc: 218103808 data_used: 6705152
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:31.914223+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135897088 unmapped: 38264832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.242586613s of 10.179738998s, submitted: 185
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:32.914372+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135921664 unmapped: 38240256 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 257 ms_handle_reset con 0x55819d6a8000 session 0x55819d6e4b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:33.914557+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135921664 unmapped: 38240256 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:34.914735+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136216576 unmapped: 37945344 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 257 heartbeat osd_stat(store_statfs(0x4f7c43000/0x0/0x4ffc00000, data 0x2f2daf1/0x2c79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets getting new tickets!
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:35.914952+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _finish_auth 0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:35.916076+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 257 ms_handle_reset con 0x55819df70000 session 0x55819b3c3c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137289728 unmapped: 36872192 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 258 ms_handle_reset con 0x55819df71c00 session 0x55819cd58960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 258 ms_handle_reset con 0x55819b3de800 session 0x55819b3025a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:36.915102+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2163604 data_alloc: 218103808 data_used: 6598656
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 258 ms_handle_reset con 0x55819c6fc000 session 0x55819b3c03c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 36839424 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:37.915238+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 36839424 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 258 heartbeat osd_stat(store_statfs(0x4f7c67000/0x0/0x4ffc00000, data 0x2f0b522/0x2c56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:38.915411+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 36814848 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:39.915665+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137347072 unmapped: 36814848 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 260 ms_handle_reset con 0x55819d6a8000 session 0x55819e733680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:40.915797+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137363456 unmapped: 36798464 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 260 ms_handle_reset con 0x55819d6a9800 session 0x55819a133c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc ms_handle_reset ms_handle_reset con 0x55819c770800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/878361048
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/878361048,v1:192.168.122.100:6801/878361048]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: get_auth_request con 0x55819d6a9800 auth_method 0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc handle_mgr_configure stats_period=5
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 260 ms_handle_reset con 0x55819c6fc000 session 0x55819c05a5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 260 ms_handle_reset con 0x55819b3de800 session 0x55819d902780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22ec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:41.915984+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2170993 data_alloc: 218103808 data_used: 6598656
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 36610048 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 260 ms_handle_reset con 0x55819df71c00 session 0x55819e7323c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:42.916121+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.194535255s of 10.066590309s, submitted: 80
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 36610048 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 261 ms_handle_reset con 0x55819d6a9c00 session 0x55819d82d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 261 ms_handle_reset con 0x55819e22ec00 session 0x55819b3c2b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:43.916258+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 262 ms_handle_reset con 0x55819c2eb400 session 0x55819c188b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 262 ms_handle_reset con 0x55819b3de800 session 0x55819b3c3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 262 ms_handle_reset con 0x55819d6a8000 session 0x55819dacfe00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 262 heartbeat osd_stat(store_statfs(0x4f7c74000/0x0/0x4ffc00000, data 0x2ad7747/0x2c47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 262 ms_handle_reset con 0x55819c6fc000 session 0x55819df79e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136986624 unmapped: 37175296 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:44.916410+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136986624 unmapped: 37175296 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 262 ms_handle_reset con 0x55819d6a9c00 session 0x55819a133a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:45.916563+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134275072 unmapped: 39886848 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:46.916709+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1913118 data_alloc: 218103808 data_used: 1142784
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134275072 unmapped: 39886848 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 ms_handle_reset con 0x55819b3de800 session 0x55819d6e41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 ms_handle_reset con 0x55819c2eb400 session 0x55819a0b5860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:47.916875+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134291456 unmapped: 39870464 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:48.917011+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 ms_handle_reset con 0x55819c6fc000 session 0x55819c1623c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134299648 unmapped: 39862272 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 ms_handle_reset con 0x55819d6a8000 session 0x55819c0183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 heartbeat osd_stat(store_statfs(0x4f9547000/0x0/0x4ffc00000, data 0x1206498/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:49.917186+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134332416 unmapped: 39829504 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 heartbeat osd_stat(store_statfs(0x4f9547000/0x0/0x4ffc00000, data 0x1206498/0x1376000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 263 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:50.917311+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 264 ms_handle_reset con 0x55819b37dc00 session 0x55819b3c43c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df71c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134340608 unmapped: 39821312 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 264 heartbeat osd_stat(store_statfs(0x4f9544000/0x0/0x4ffc00000, data 0x1207fa3/0x1379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 264 ms_handle_reset con 0x55819b37dc00 session 0x55819c05b680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:51.917485+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1921439 data_alloc: 218103808 data_used: 1155072
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134340608 unmapped: 39821312 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:52.917621+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.664977551s of 10.033009529s, submitted: 217
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 264 ms_handle_reset con 0x55819c2eb400 session 0x55819e732000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134348800 unmapped: 39813120 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 264 heartbeat osd_stat(store_statfs(0x4f9543000/0x0/0x4ffc00000, data 0x1207fb3/0x137a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:53.917854+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 265 ms_handle_reset con 0x55819b3de800 session 0x55819a76be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:54.918108+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:55.918265+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 ms_handle_reset con 0x55819d6a8000 session 0x55819af53860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 ms_handle_reset con 0x55819c6fc000 session 0x55819a76b2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:56.918403+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1929871 data_alloc: 218103808 data_used: 1159168
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:57.918587+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 heartbeat osd_stat(store_statfs(0x4f953b000/0x0/0x4ffc00000, data 0x120b613/0x1381000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:58.918907+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 heartbeat osd_stat(store_statfs(0x4f953b000/0x0/0x4ffc00000, data 0x120b613/0x1381000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 ms_handle_reset con 0x55819b37dc00 session 0x55819dace960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:57:59.919253+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:00.919432+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:01.919632+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1929871 data_alloc: 218103808 data_used: 1159168
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:02.919866+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.338634491s of 10.404091835s, submitted: 30
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:03.920063+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:04.920195+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 heartbeat osd_stat(store_statfs(0x4f953a000/0x0/0x4ffc00000, data 0x120b623/0x1382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:05.921586+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:06.921812+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1930147 data_alloc: 218103808 data_used: 1159168
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:07.923352+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 ms_handle_reset con 0x55819b3de800 session 0x55819cd59680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 39796736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 heartbeat osd_stat(store_statfs(0x4f953c000/0x0/0x4ffc00000, data 0x120b623/0x1382000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:08.924532+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 ms_handle_reset con 0x55819c6fc000 session 0x55819d6e50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 ms_handle_reset con 0x55819c2eb400 session 0x55819af53e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 134373376 unmapped: 39788544 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:09.924715+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2ea800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 ms_handle_reset con 0x55819c2ea800 session 0x55819c1cfa40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 267 ms_handle_reset con 0x55819d6a8000 session 0x55819d82c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:10.925755+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 267 heartbeat osd_stat(store_statfs(0x4f9537000/0x0/0x4ffc00000, data 0x120d294/0x1386000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 268 ms_handle_reset con 0x55819b3de800 session 0x55819dace960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:11.926002+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1941523 data_alloc: 218103808 data_used: 1175552
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 268 ms_handle_reset con 0x55819b37dc00 session 0x55819af52d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 268 ms_handle_reset con 0x55819c2eac00 session 0x55819a132b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 269 ms_handle_reset con 0x55819c2eb400 session 0x55819a76b2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 269 ms_handle_reset con 0x55819b37dc00 session 0x55819a1a0d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:12.926613+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 38707200 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 269 ms_handle_reset con 0x55819c2eac00 session 0x55819e732780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.811828613s of 10.250494957s, submitted: 63
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 269 heartbeat osd_stat(store_statfs(0x4f9530000/0x0/0x4ffc00000, data 0x12109f6/0x138c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 270 ms_handle_reset con 0x55819b3de800 session 0x55819c188f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:13.926903+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 270 ms_handle_reset con 0x55819d40d000 session 0x55819a133a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 270 ms_handle_reset con 0x55819c6fc000 session 0x55819a76be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 270 ms_handle_reset con 0x55819d6a8000 session 0x55819d6e4960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 270 ms_handle_reset con 0x55819b37dc00 session 0x55819c188b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:14.927261+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 38756352 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:15.927429+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 271 ms_handle_reset con 0x55819b3de800 session 0x55819e7323c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 38739968 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 271 ms_handle_reset con 0x55819c2eac00 session 0x55819b3c12c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:16.927776+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1951828 data_alloc: 218103808 data_used: 1179648
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 38739968 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 271 heartbeat osd_stat(store_statfs(0x4f952d000/0x0/0x4ffc00000, data 0x121415c/0x1390000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 271 ms_handle_reset con 0x55819d40d000 session 0x55819d6e5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:17.928342+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 38739968 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:18.928743+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135421952 unmapped: 38739968 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:19.929457+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 272 ms_handle_reset con 0x55819b3de800 session 0x55819acdd0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:20.929621+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 273 ms_handle_reset con 0x55819c2eac00 session 0x55819c057e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:21.929878+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1963440 data_alloc: 218103808 data_used: 1191936
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:22.930079+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 273 ms_handle_reset con 0x55819b37dc00 session 0x55819c2b4960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 273 heartbeat osd_stat(store_statfs(0x4f9525000/0x0/0x4ffc00000, data 0x1217e35/0x1398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:23.930371+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:24.930697+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:25.930861+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 273 heartbeat osd_stat(store_statfs(0x4f9525000/0x0/0x4ffc00000, data 0x1217e35/0x1398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135462912 unmapped: 38699008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.212965965s of 13.071446419s, submitted: 108
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:26.931044+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1967183 data_alloc: 218103808 data_used: 1200128
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135471104 unmapped: 38690816 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:27.931221+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135471104 unmapped: 38690816 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:28.931422+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135471104 unmapped: 38690816 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 274 heartbeat osd_stat(store_statfs(0x4f9522000/0x0/0x4ffc00000, data 0x1219898/0x139b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:29.931653+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135471104 unmapped: 38690816 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:30.931800+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135471104 unmapped: 38690816 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:31.932029+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969945 data_alloc: 218103808 data_used: 1200128
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 38682624 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:32.932193+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 274 ms_handle_reset con 0x55819d40c000 session 0x55819cd590e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 38682624 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:33.932403+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 38682624 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 274 heartbeat osd_stat(store_statfs(0x4f9521000/0x0/0x4ffc00000, data 0x121990a/0x139d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:34.932563+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 38682624 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:35.932709+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 38682624 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:36.932838+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969065 data_alloc: 218103808 data_used: 1200128
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.253353119s of 10.602140427s, submitted: 15
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 38682624 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:37.932998+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135479296 unmapped: 38682624 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 274 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 275 heartbeat osd_stat(store_statfs(0x4f9521000/0x0/0x4ffc00000, data 0x121990a/0x139d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:38.933129+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 38674432 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:39.933567+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 38674432 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:40.933938+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 38674432 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:41.934735+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1976780 data_alloc: 218103808 data_used: 1212416
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 38674432 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:42.934929+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 276 ms_handle_reset con 0x55819d40c800 session 0x55819d82dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135487488 unmapped: 38674432 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:43.935041+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 276 heartbeat osd_stat(store_statfs(0x4f951b000/0x0/0x4ffc00000, data 0x121d074/0x13a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 38666240 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:44.935503+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 38666240 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:45.935783+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 276 ms_handle_reset con 0x55819d6a8000 session 0x55819dacfc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135495680 unmapped: 38666240 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 276 ms_handle_reset con 0x55819b3de800 session 0x55819c019e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:46.936044+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1978844 data_alloc: 218103808 data_used: 1224704
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.404598713s of 10.078602791s, submitted: 25
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 277 ms_handle_reset con 0x55819c2eac00 session 0x55819c2b4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 38658048 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:47.936185+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 38658048 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:48.936728+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 277 ms_handle_reset con 0x55819b37dc00 session 0x55819cd59a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135503872 unmapped: 38658048 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:49.936904+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 277 heartbeat osd_stat(store_statfs(0x4f9518000/0x0/0x4ffc00000, data 0x121e71c/0x13a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 277 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4cb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135520256 unmapped: 38641664 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:50.937559+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 279 ms_handle_reset con 0x55819c827c00 session 0x55819b2cda40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 279 ms_handle_reset con 0x55819d40c000 session 0x55819c2b41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135536640 unmapped: 38625280 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:51.937746+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1995698 data_alloc: 218103808 data_used: 1228800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 ms_handle_reset con 0x55819b37dc00 session 0x55819b3b5a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 ms_handle_reset con 0x55819b3de800 session 0x55819c163c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 ms_handle_reset con 0x55819c2eac00 session 0x55819c163c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 ms_handle_reset con 0x55819d6a8000 session 0x55819b3b5a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 38600704 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b3de800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 ms_handle_reset con 0x55819b37dc00 session 0x55819b2cda40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:52.937910+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 ms_handle_reset con 0x55819c827800 session 0x55819dacf2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135577600 unmapped: 38584320 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:53.938076+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1223976/0x13b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,14])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 33316864 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 heartbeat osd_stat(store_statfs(0x4f950a000/0x0/0x4ffc00000, data 0x1223976/0x13b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,11])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:54.938252+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 33308672 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:55.938468+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 37773312 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:56.938627+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 ms_handle_reset con 0x55819d40c000 session 0x55819e732780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 ms_handle_reset con 0x55819c2eac00 session 0x55819dd341e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2045272 data_alloc: 218103808 data_used: 1249280
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 37773312 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 ms_handle_reset con 0x55819b3de800 session 0x55819c2b4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.722382545s of 10.361894608s, submitted: 63
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 ms_handle_reset con 0x55819c2eac00 session 0x55819dacef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 ms_handle_reset con 0x55819b37dc00 session 0x55819a132b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 ms_handle_reset con 0x55819e4cb800 session 0x55819a76a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:57.938751+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 ms_handle_reset con 0x55819c827800 session 0x55819b2cc5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 ms_handle_reset con 0x55819e22f400 session 0x55819a76b2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:58.938881+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:58:59.939092+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 heartbeat osd_stat(store_statfs(0x4f904e000/0x0/0x4ffc00000, data 0x16dd58d/0x186e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 ms_handle_reset con 0x55819c2eac00 session 0x55819dd343c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:00.939290+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:01.939499+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2048317 data_alloc: 218103808 data_used: 1249280
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135430144 unmapped: 38731776 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 ms_handle_reset con 0x55819c77a400 session 0x55819b3c30e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 heartbeat osd_stat(store_statfs(0x4f904e000/0x0/0x4ffc00000, data 0x16dd58d/0x186e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:02.939696+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 ms_handle_reset con 0x55819c77a400 session 0x55819cd585a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 ms_handle_reset con 0x55819b37dc00 session 0x55819dace960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4cb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 ms_handle_reset con 0x55819e4cb800 session 0x55819c1cfa40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135790592 unmapped: 38371328 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:03.939827+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 283 ms_handle_reset con 0x55819d40c000 session 0x55819cd581e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eac00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 283 ms_handle_reset con 0x55819b37dc00 session 0x55819c189a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135823360 unmapped: 38338560 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:04.939948+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 38215680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:05.940136+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 283 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x170915e/0x189b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 38215680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:06.940317+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4cb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2088084 data_alloc: 218103808 data_used: 5910528
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 38215680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:07.940513+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 38215680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:08.940750+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 283 heartbeat osd_stat(store_statfs(0x4f9022000/0x0/0x4ffc00000, data 0x170915e/0x189b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135946240 unmapped: 38215680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:09.940983+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 38207488 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 283 ms_handle_reset con 0x55819e4cb800 session 0x55819a0b5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.842524529s of 13.124462128s, submitted: 44
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:10.941236+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 38199296 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:11.941418+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2092874 data_alloc: 218103808 data_used: 5910528
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 38199296 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:12.941569+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 ms_handle_reset con 0x55819f664000 session 0x55819b3d8b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 38199296 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:13.941700+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 heartbeat osd_stat(store_statfs(0x4f901f000/0x0/0x4ffc00000, data 0x170ad59/0x189f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 ms_handle_reset con 0x55819e22f000 session 0x55819af53e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 135987200 unmapped: 38174720 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:14.941810+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 heartbeat osd_stat(store_statfs(0x4f901f000/0x0/0x4ffc00000, data 0x170ad59/0x189f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 38158336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:15.942059+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 heartbeat osd_stat(store_statfs(0x4f901f000/0x0/0x4ffc00000, data 0x170ad59/0x189f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 38158336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:16.942231+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 heartbeat osd_stat(store_statfs(0x4f901f000/0x0/0x4ffc00000, data 0x170ad59/0x189f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 285 ms_handle_reset con 0x55819ccd9400 session 0x55819ace2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2097256 data_alloc: 218103808 data_used: 5947392
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136011776 unmapped: 38150144 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:17.942405+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 136011776 unmapped: 38150144 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 285 heartbeat osd_stat(store_statfs(0x4f901b000/0x0/0x4ffc00000, data 0x170c92a/0x18a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,4])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:18.942515+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 285 ms_handle_reset con 0x55819d65f800 session 0x55819ace2780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 286 ms_handle_reset con 0x55819b37dc00 session 0x55819dd345a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142557184 unmapped: 31604736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4cb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 286 ms_handle_reset con 0x55819e4cb800 session 0x55819c3083c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:19.942653+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142540800 unmapped: 31621120 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:20.942791+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.063379765s of 10.483123779s, submitted: 115
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 287 heartbeat osd_stat(store_statfs(0x4f72fc000/0x0/0x4ffc00000, data 0x227c4b3/0x2412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141860864 unmapped: 32301056 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:21.942942+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 287 ms_handle_reset con 0x55819e22f000 session 0x55819b3c3c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 288 ms_handle_reset con 0x55819f664000 session 0x55819c3094a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2201026 data_alloc: 218103808 data_used: 6565888
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 288 ms_handle_reset con 0x55819f664000 session 0x55819d70d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 32284672 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:22.943144+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 32284672 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:23.943288+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 32284672 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:24.943471+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 288 ms_handle_reset con 0x55819ccd9000 session 0x55819b303680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142147584 unmapped: 32014336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:25.943654+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141983744 unmapped: 32178176 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:26.943827+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2221139 data_alloc: 218103808 data_used: 6557696
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f723b000/0x0/0x4ffc00000, data 0x2330a20/0x24cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143523840 unmapped: 30638080 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:27.944035+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f7235000/0x0/0x4ffc00000, data 0x2336a20/0x24d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143523840 unmapped: 30638080 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:28.944175+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f7235000/0x0/0x4ffc00000, data 0x2336a20/0x24d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 289 ms_handle_reset con 0x55819d65f800 session 0x55819d70c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143532032 unmapped: 30629888 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:29.944363+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 290 ms_handle_reset con 0x55819e22f000 session 0x55819b2103c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 30801920 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:30.944486+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4cb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 291 ms_handle_reset con 0x55819e4cb800 session 0x55819dd34d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.847853184s of 10.105860710s, submitted: 107
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 291 ms_handle_reset con 0x55819d65f800 session 0x55819c0421e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 291 ms_handle_reset con 0x55819b37dc00 session 0x55819b2cc780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143106048 unmapped: 31055872 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:31.944784+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 292 ms_handle_reset con 0x55819ccd9000 session 0x55819cd58b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2234754 data_alloc: 218103808 data_used: 6565888
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 292 heartbeat osd_stat(store_statfs(0x4f723c000/0x0/0x4ffc00000, data 0x233c2bf/0x24df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143335424 unmapped: 30826496 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:32.945039+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 292 ms_handle_reset con 0x55819e22f000 session 0x55819ace3e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 ms_handle_reset con 0x55819f664000 session 0x55819df79a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 heartbeat osd_stat(store_statfs(0x4f721e000/0x0/0x4ffc00000, data 0x235a25d/0x24fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 ms_handle_reset con 0x55819b37dc00 session 0x55819c162000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143360000 unmapped: 30801920 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:33.945255+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 ms_handle_reset con 0x55819ccd9000 session 0x55819aef41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 ms_handle_reset con 0x55819d65f800 session 0x55819dacf0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 ms_handle_reset con 0x55819ccd9c00 session 0x55819d61da40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd8800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 ms_handle_reset con 0x55819ccd8800 session 0x55819d61c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143376384 unmapped: 30785536 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:34.945464+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 294 ms_handle_reset con 0x55819e22f000 session 0x55819b3c52c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 294 ms_handle_reset con 0x55819b37dc00 session 0x55819d61cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 294 ms_handle_reset con 0x55819ccd9000 session 0x55819d82cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 294 ms_handle_reset con 0x55819ccd9c00 session 0x55819c2b5e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 294 heartbeat osd_stat(store_statfs(0x4f7217000/0x0/0x4ffc00000, data 0x235d59f/0x2505000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 30711808 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:35.945640+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 295 ms_handle_reset con 0x55819d6a8400 session 0x55819d902960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 295 ms_handle_reset con 0x55819d65f800 session 0x55819ace21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 30695424 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:36.945829+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 295 ms_handle_reset con 0x55819d6a8400 session 0x55819a6183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2246323 data_alloc: 218103808 data_used: 6565888
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 295 ms_handle_reset con 0x55819ccd9000 session 0x55819d82c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 30695424 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:37.946066+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 296 heartbeat osd_stat(store_statfs(0x4f721a000/0x0/0x4ffc00000, data 0x235eb9a/0x2503000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 296 ms_handle_reset con 0x55819ccd9c00 session 0x55819c0430e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 296 ms_handle_reset con 0x55819b37dc00 session 0x55819d902f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 296 ms_handle_reset con 0x55819ccd9000 session 0x55819c1ce5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143474688 unmapped: 30687232 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 296 ms_handle_reset con 0x55819ccd9c00 session 0x55819d70c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:38.946169+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 296 ms_handle_reset con 0x55819d65f800 session 0x55819e733860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143491072 unmapped: 30670848 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:39.946419+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 297 ms_handle_reset con 0x55819e22f000 session 0x55819d903a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 297 heartbeat osd_stat(store_statfs(0x4f7216000/0x0/0x4ffc00000, data 0x2364d82/0x2507000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 297 ms_handle_reset con 0x55819d6a8800 session 0x55819d9023c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143581184 unmapped: 30580736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:40.946619+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 298 ms_handle_reset con 0x55819ccd9000 session 0x55819dd35680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.086851120s of 10.001296997s, submitted: 248
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 30498816 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 298 handle_osd_map epochs [299,299], i have 299, src has [1,299]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:41.946783+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 299 ms_handle_reset con 0x55819ccd9c00 session 0x55819cd58d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2252627 data_alloc: 218103808 data_used: 6574080
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 299 ms_handle_reset con 0x55819d65f800 session 0x55819c042f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144826368 unmapped: 29335552 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 299 ms_handle_reset con 0x55819e22f000 session 0x55819df79c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 300 handle_osd_map epochs [300,300], i have 300, src has [1,300]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:42.946921+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 300 ms_handle_reset con 0x55819d6a8400 session 0x55819d903e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 300 ms_handle_reset con 0x55819ccd9000 session 0x55819a1a0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 29278208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:43.947101+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 300 heartbeat osd_stat(store_statfs(0x4f720f000/0x0/0x4ffc00000, data 0x236a12d/0x250d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 29278208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:44.947320+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 301 ms_handle_reset con 0x55819d65f800 session 0x5581a0d38780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144924672 unmapped: 29237248 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:45.947436+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 301 ms_handle_reset con 0x55819e22f000 session 0x5581a0d394a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 29204480 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:46.947550+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2258937 data_alloc: 218103808 data_used: 6574080
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144998400 unmapped: 29163520 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:47.947688+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819d6a8c00 session 0x55819c1ce780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819d6a9000 session 0x55819d82c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819d6a9000 session 0x55819a1a1680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 145063936 unmapped: 29097984 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:48.948087+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819ccd9c00 session 0x55819df781e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:49.948266+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 145063936 unmapped: 29097984 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 heartbeat osd_stat(store_statfs(0x4f7204000/0x0/0x4ffc00000, data 0x237434e/0x2519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819ccd9000 session 0x55819df79c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819c77a400 session 0x55819c162960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819c2eac00 session 0x55819c018f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 heartbeat osd_stat(store_statfs(0x4f7204000/0x0/0x4ffc00000, data 0x237434e/0x2519000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819c77a400 session 0x55819c042f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:50.948383+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 145088512 unmapped: 29073408 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819ccd9c00 session 0x55819d82cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819d6a9000 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819ccd9000 session 0x55819dd35680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.844482422s of 10.105117798s, submitted: 253
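The `_kv_sync_thread utilization` lines report how long the BlueStore RocksDB commit thread sat idle over the sampling window and how many transactions it flushed; the figures above work out to the thread being busy only about 12% of the time. The arithmetic:

```python
# Derive busy fraction, throughput, and per-transaction cost from the
# "idle 8.844482422s of 10.105117798s, submitted: 253" line above.
idle, window, submitted = 8.844482422, 10.105117798, 253
busy = window - idle
print(f"busy {busy:.2f}s ({busy / window:.1%} of the window), "
      f"~{submitted / window:.0f} txns/s, "
      f"~{busy / submitted * 1000:.1f} ms of KV-sync work per txn")
```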
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819d65f800 session 0x55819c0430e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:51.948643+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141901824 unmapped: 32260096 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 heartbeat osd_stat(store_statfs(0x4f8326000/0x0/0x4ffc00000, data 0x124b3d0/0x13f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2098350 data_alloc: 218103808 data_used: 1298432
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 ms_handle_reset con 0x55819c77a400 session 0x55819a6183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:52.948800+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141901824 unmapped: 32260096 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:53.949035+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141901824 unmapped: 32260096 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819ccd9000 session 0x55819ace21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819ccd9c00 session 0x55819d6e41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:54.949179+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141918208 unmapped: 32243712 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819d65f800 session 0x55819b3c52c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819d6a9000 session 0x55819b3023c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819c77a400 session 0x55819d61da40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819ccd9000 session 0x55819d9023c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819ccd9c00 session 0x55819a1a0f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:55.949329+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141942784 unmapped: 32219136 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:56.949462+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141959168 unmapped: 32202752 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819d65f800 session 0x55819f3d21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819d6a9000 session 0x55819d70cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 ms_handle_reset con 0x55819d6a8c00 session 0x55819c05b680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2099111 data_alloc: 218103808 data_used: 1310720
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 heartbeat osd_stat(store_statfs(0x4f7f1a000/0x0/0x4ffc00000, data 0x124cdab/0x13f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:57.949686+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141975552 unmapped: 32186368 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819c77a400 session 0x55819cd58f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819ccd9c00 session 0x55819df78b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819ccd9000 session 0x55819d9030e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819d65f800 session 0x55819a1874a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:58.949872+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 32169984 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819ccd9000 session 0x55819b3c25a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819c77a400 session 0x55819b2cc3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T07:59:59.950039+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819ccd9c00 session 0x55819cd59680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142000128 unmapped: 32161792 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f7f18000/0x0/0x4ffc00000, data 0x124e7ee/0x13f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:00.950218+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142000128 unmapped: 32161792 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:01.950387+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142000128 unmapped: 32161792 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a8c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.300768852s of 10.990090370s, submitted: 159
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2100002 data_alloc: 218103808 data_used: 1323008
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819d65f800 session 0x55819b3c0960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819d6a8c00 session 0x55819a1874a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:02.950566+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142000128 unmapped: 32161792 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819c77a400 session 0x55819b3d9c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 ms_handle_reset con 0x55819ccd9000 session 0x55819d70c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:03.950712+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142016512 unmapped: 32145408 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f7f1a000/0x0/0x4ffc00000, data 0x124e78c/0x13f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:04.950930+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142016512 unmapped: 32145408 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:05.951068+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142016512 unmapped: 32145408 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:06.951239+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142016512 unmapped: 32145408 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2098562 data_alloc: 218103808 data_used: 1318912
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:07.951458+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142024704 unmapped: 32137216 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 ms_handle_reset con 0x55819ccd9c00 session 0x55819c019860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:08.951683+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142032896 unmapped: 32129024 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:09.952528+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142032896 unmapped: 32129024 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f13000/0x0/0x4ffc00000, data 0x125074f/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:10.952774+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142032896 unmapped: 32129024 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 ms_handle_reset con 0x55819d65f800 session 0x55819c2b5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f13000/0x0/0x4ffc00000, data 0x125074f/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 ms_handle_reset con 0x55819e22f000 session 0x55819a1a0f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 ms_handle_reset con 0x55819d6a9000 session 0x55819acdc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:11.952951+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 ms_handle_reset con 0x55819c77a400 session 0x55819c2b4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2110061 data_alloc: 218103808 data_used: 1327104
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:12.953162+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:13.953344+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f12000/0x0/0x4ffc00000, data 0x125075f/0x13fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 ms_handle_reset con 0x55819ccd9000 session 0x55819cd59860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:14.953550+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:15.953695+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:16.953842+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2110061 data_alloc: 218103808 data_used: 1327104
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:17.954087+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f12000/0x0/0x4ffc00000, data 0x125075f/0x13fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:18.954230+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f12000/0x0/0x4ffc00000, data 0x125075f/0x13fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:19.954446+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:20.954630+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:21.954779+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2110061 data_alloc: 218103808 data_used: 1327104
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:22.955251+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
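The lone ceph-mon line above is the monitor partitioning its own ~0.95 GiB cache budget (`cache_size:1020054731`) among incremental-osdmap, full-osdmap, and RocksDB caches; all three allocations are exact multiples of 4 MiB, with the remainder left unassigned. A sketch of that style of chunk-aligned split (the fractions are assumptions; the monitor actually weighs these caches by priority):

```python
# Divide a cache budget into chunk-aligned buckets, in the spirit of the
# mon's inc/full/kv figures above.  Fractions are illustrative only.
CHUNK = 4 * 2**20               # each allocation lands on a 4 MiB boundary

def split(budget: int, fractions: dict[str, float]) -> dict[str, int]:
    return {name: int(budget * frac) // CHUNK * CHUNK
            for name, frac in fractions.items()}

allocs = split(1_020_054_731, {"inc": 0.34, "full": 0.34, "kv": 0.31})
for name, n in allocs.items():
    print(f"{name}_alloc: {n} ({n // CHUNK} x 4 MiB)")
```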
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:23.955343+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.680963516s of 21.843381882s, submitted: 50
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:24.955507+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f12000/0x0/0x4ffc00000, data 0x125075f/0x13fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 ms_handle_reset con 0x55819ccd9c00 session 0x55819a0b4000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:25.955689+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:26.955912+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2106905 data_alloc: 218103808 data_used: 1331200
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:27.956087+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f14000/0x0/0x4ffc00000, data 0x125074f/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:28.956257+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:29.956457+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 heartbeat osd_stat(store_statfs(0x4f7f14000/0x0/0x4ffc00000, data 0x125072c/0x13f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:30.956570+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:31.956703+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142057472 unmapped: 32104448 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2110365 data_alloc: 218103808 data_used: 1339392
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:32.956786+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142057472 unmapped: 32104448 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:33.957117+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142057472 unmapped: 32104448 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.748094559s of 10.117201805s, submitted: 38
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 307 ms_handle_reset con 0x55819d40cc00 session 0x55819b3c52c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 307 ms_handle_reset con 0x55819d65f800 session 0x55819d82d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:34.957263+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142065664 unmapped: 32096256 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:35.957421+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 307 ms_handle_reset con 0x55819ccd9000 session 0x55819ace21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142073856 unmapped: 32088064 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 307 heartbeat osd_stat(store_statfs(0x4f7f11000/0x0/0x4ffc00000, data 0x1251f4c/0x13fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 307 heartbeat osd_stat(store_statfs(0x4f7f11000/0x0/0x4ffc00000, data 0x1251f4c/0x13fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:36.957567+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819c77a400 session 0x55819c1ce3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819ccd9c00 session 0x55819dd35680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2117665 data_alloc: 218103808 data_used: 1351680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:37.957829+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142098432 unmapped: 32063488 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819d6a9000 session 0x55819b303680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:38.958033+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819d40cc00 session 0x55819acdde00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142098432 unmapped: 32063488 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819c77a400 session 0x55819d9032c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:39.958322+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819ccd9000 session 0x55819a1863c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819ccd9c00 session 0x55819c17d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 ms_handle_reset con 0x55819d65f800 session 0x55819d70d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:40.958509+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:41.958684+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f7f0f000/0x0/0x4ffc00000, data 0x1253aa6/0x13ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2117838 data_alloc: 218103808 data_used: 1359872
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:42.958883+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 ms_handle_reset con 0x55819c77a400 session 0x55819c162000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:43.959085+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 ms_handle_reset con 0x55819ccd9000 session 0x55819ace2f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:44.959302+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:45.959441+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 ms_handle_reset con 0x55819ccd9c00 session 0x55819e733860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 heartbeat osd_stat(store_statfs(0x4f7f0a000/0x0/0x4ffc00000, data 0x125569d/0x1403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:46.959635+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 32038912 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.403540611s of 12.996227264s, submitted: 73
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 ms_handle_reset con 0x55819d40cc00 session 0x55819c034b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 heartbeat osd_stat(store_statfs(0x4f7f0a000/0x0/0x4ffc00000, data 0x125569d/0x1403000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2121865 data_alloc: 218103808 data_used: 1363968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:47.959834+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 32038912 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 heartbeat osd_stat(store_statfs(0x4f7f0c000/0x0/0x4ffc00000, data 0x125568d/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d6a9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 heartbeat osd_stat(store_statfs(0x4f7f0c000/0x0/0x4ffc00000, data 0x125568d/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:48.960000+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 32038912 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 ms_handle_reset con 0x55819d6a9000 session 0x55819d903c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:49.960193+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 32038912 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:50.960364+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 309 handle_osd_map epochs [310,310], i have 310, src has [1,310]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 32038912 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819c77a400 session 0x55819d902780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 heartbeat osd_stat(store_statfs(0x4f7f07000/0x0/0x4ffc00000, data 0x125760a/0x1406000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:51.960499+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 32194560 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819ccd9c00 session 0x55819dd345a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2132952 data_alloc: 218103808 data_used: 1372160
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819d40cc00 session 0x55819ace2780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:52.960639+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819d448400 session 0x55819af53e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 32194560 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819d448800 session 0x55819b3d8b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:53.960776+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819c77a400 session 0x55819a0b5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 32194560 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:54.961000+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 32194560 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:55.961211+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 32194560 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 heartbeat osd_stat(store_statfs(0x4f7f05000/0x0/0x4ffc00000, data 0x125761a/0x1407000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:56.961368+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819ccd9000 session 0x55819c2b4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 ms_handle_reset con 0x55819d40cc00 session 0x5581a0d38f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142000128 unmapped: 32161792 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819ccd9c00 session 0x55819dace960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819d448400 session 0x55819af53e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.671887398s of 10.360893250s, submitted: 41
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819c77a400 session 0x55819d9032c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2135912 data_alloc: 218103808 data_used: 1384448
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:57.961510+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819ccd9000 session 0x55819dd35680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142032896 unmapped: 32129024 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819d40cc00 session 0x55819b3c30e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819ccd9c00 session 0x55819d70c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:58.961686+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819d448400 session 0x55819a1874a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819c77a400 session 0x55819cd59680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:00:59.961933+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819ccd9000 session 0x55819c1ce780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142041088 unmapped: 32120832 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819ccd9c00 session 0x55819a132d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:00.962116+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819d40cc00 session 0x55819b3c21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d449000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142073856 unmapped: 32088064 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57c800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819d449000 session 0x55819c057e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819d57c800 session 0x55819a76b2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:01.962299+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142073856 unmapped: 32088064 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 heartbeat osd_stat(store_statfs(0x4f7f02000/0x0/0x4ffc00000, data 0x125921b/0x140c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2138376 data_alloc: 218103808 data_used: 1396736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:02.962469+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 ms_handle_reset con 0x55819d57d400 session 0x55819a76a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142082048 unmapped: 32079872 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:03.962633+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 ms_handle_reset con 0x55819c77a400 session 0x55819b2cc3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:04.962854+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 heartbeat osd_stat(store_statfs(0x4f7f00000/0x0/0x4ffc00000, data 0x125ad68/0x140d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:05.963071+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:06.963323+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.462087631s of 10.085801125s, submitted: 64
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2139924 data_alloc: 218103808 data_used: 1396736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:07.963482+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 ms_handle_reset con 0x55819d57d800 session 0x55819c0183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 ms_handle_reset con 0x55819d57dc00 session 0x55819a132b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:08.963794+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 ms_handle_reset con 0x55819c77a400 session 0x55819b3c2b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 heartbeat osd_stat(store_statfs(0x4f7f01000/0x0/0x4ffc00000, data 0x125ad68/0x140d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:09.964030+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57c800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 32071680 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:10.964226+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142098432 unmapped: 32063488 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 ms_handle_reset con 0x55819d57c800 session 0x55819a59f4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:11.964399+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142098432 unmapped: 32063488 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:12.964569+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2143218 data_alloc: 218103808 data_used: 1400832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 313 ms_handle_reset con 0x55819d57d400 session 0x55819c2b4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:13.964749+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142131200 unmapped: 32030720 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 314 ms_handle_reset con 0x55819d57d800 session 0x55819b211680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:14.964920+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 314 ms_handle_reset con 0x55819d57cc00 session 0x55819b2cda40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 314 ms_handle_reset con 0x55819d57d000 session 0x55819ace3e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 314 heartbeat osd_stat(store_statfs(0x4f7efa000/0x0/0x4ffc00000, data 0x125e3c4/0x1412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 143130624 unmapped: 31031296 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 314 ms_handle_reset con 0x55819c77a400 session 0x55819d902f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:15.965142+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:16.965302+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 32047104 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57c800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:17.965492+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 315 ms_handle_reset con 0x55819d57c800 session 0x55819c1cef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2225127 data_alloc: 218103808 data_used: 1413120
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.519370079s of 10.102592468s, submitted: 90
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 316 heartbeat osd_stat(store_statfs(0x4f763f000/0x0/0x4ffc00000, data 0x1b19f1b/0x1cce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 32038912 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:18.965664+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 316 heartbeat osd_stat(store_statfs(0x4f763a000/0x0/0x4ffc00000, data 0x1b1b9e0/0x1cd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 32022528 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:19.966813+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 32022528 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:20.968185+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 32022528 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 316 heartbeat osd_stat(store_statfs(0x4f763a000/0x0/0x4ffc00000, data 0x1b1b9e0/0x1cd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 316 ms_handle_reset con 0x55819d57d400 session 0x55819d82cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:21.969294+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 32022528 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:22.969463+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2230555 data_alloc: 218103808 data_used: 1413120
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 32022528 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:23.970106+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 32022528 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 317 ms_handle_reset con 0x55819d57d800 session 0x55819f3d25a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:24.970540+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 317 ms_handle_reset con 0x55819c77a400 session 0x55819cd59c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142147584 unmapped: 32014336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 317 heartbeat osd_stat(store_statfs(0x4f7637000/0x0/0x4ffc00000, data 0x1b1d5bf/0x1cd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57c800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 317 ms_handle_reset con 0x55819d57c800 session 0x55819d6e5e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:25.970764+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142147584 unmapped: 32014336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:26.970952+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142147584 unmapped: 32014336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:27.971223+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2233304 data_alloc: 218103808 data_used: 1437696
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142147584 unmapped: 32014336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:28.971385+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142147584 unmapped: 32014336 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:29.971793+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.138035774s of 12.176263809s, submitted: 18
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142163968 unmapped: 31997952 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 ms_handle_reset con 0x55819d57d000 session 0x55819d70d0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:30.971995+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142163968 unmapped: 31997952 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 ms_handle_reset con 0x55819d57d400 session 0x55819cd585a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 heartbeat osd_stat(store_statfs(0x4f7633000/0x0/0x4ffc00000, data 0x1b1f13c/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:31.972112+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 ms_handle_reset con 0x55819ccd9c00 session 0x55819a132780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142163968 unmapped: 31997952 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 heartbeat osd_stat(store_statfs(0x4f7633000/0x0/0x4ffc00000, data 0x1b1f13c/0x1cd9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:32.972315+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2236950 data_alloc: 218103808 data_used: 1437696
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142163968 unmapped: 31997952 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57c800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 ms_handle_reset con 0x55819d57c800 session 0x55819c3094a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 ms_handle_reset con 0x55819c77a400 session 0x55819d61da40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:33.972506+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142172160 unmapped: 31989760 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:34.972635+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142237696 unmapped: 31924224 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:35.972762+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 319 ms_handle_reset con 0x55819ccd9000 session 0x55819df781e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 29278208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:36.972903+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 319 ms_handle_reset con 0x55819d40cc00 session 0x55819d6e41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 319 heartbeat osd_stat(store_statfs(0x4f7630000/0x0/0x4ffc00000, data 0x1b20d1d/0x1cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 29278208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:37.973037+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2303610 data_alloc: 234881024 data_used: 10301440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 29278208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:38.973164+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 319 heartbeat osd_stat(store_statfs(0x4f7631000/0x0/0x4ffc00000, data 0x1b20d1d/0x1cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 29278208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 319 ms_handle_reset con 0x55819d57d000 session 0x55819a1a1680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 319 ms_handle_reset con 0x55819d57d400 session 0x55819dacef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 320 ms_handle_reset con 0x55819d448c00 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:39.973325+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144883712 unmapped: 29278208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.195242882s of 10.539639473s, submitted: 38
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 320 ms_handle_reset con 0x55819c77a400 session 0x55819a59ef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:40.973497+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144908288 unmapped: 29253632 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 320 ms_handle_reset con 0x55819ccd9000 session 0x55819c2b5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 320 heartbeat osd_stat(store_statfs(0x4f762e000/0x0/0x4ffc00000, data 0x1b228de/0x1cdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:41.973715+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144908288 unmapped: 29253632 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40cc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 320 ms_handle_reset con 0x55819d40cc00 session 0x55819dd350e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:42.973914+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2306115 data_alloc: 234881024 data_used: 10305536
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144916480 unmapped: 29245440 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:43.974067+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144924672 unmapped: 29237248 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:44.974241+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 321 ms_handle_reset con 0x55819c77a400 session 0x55819c2b5a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144924672 unmapped: 29237248 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:45.974440+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 321 heartbeat osd_stat(store_statfs(0x4f762c000/0x0/0x4ffc00000, data 0x1b242fb/0x1ce1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 321 ms_handle_reset con 0x55819ccd9000 session 0x55819c057680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144924672 unmapped: 29237248 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 321 heartbeat osd_stat(store_statfs(0x4f762c000/0x0/0x4ffc00000, data 0x1b242fb/0x1ce1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 321 ms_handle_reset con 0x55819d448c00 session 0x55819b2114a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:46.974618+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 321 ms_handle_reset con 0x55819d57d400 session 0x55819b2cd0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144924672 unmapped: 29237248 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57c800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:47.974860+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2313337 data_alloc: 234881024 data_used: 10309632
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144941056 unmapped: 29220864 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 322 ms_handle_reset con 0x55819d57c800 session 0x55819acddc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:48.975038+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 29204480 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:49.975227+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 33710080 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:50.975399+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.058774471s of 10.358276367s, submitted: 78
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 324 ms_handle_reset con 0x55819ccd9000 session 0x55819d902960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 324 ms_handle_reset con 0x55819c77a400 session 0x55819a76a1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 33710080 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:51.975603+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 324 heartbeat osd_stat(store_statfs(0x4f7edb000/0x0/0x4ffc00000, data 0x126f4f4/0x1431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140451840 unmapped: 33710080 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 324 ms_handle_reset con 0x55819d448c00 session 0x55819c05a960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:52.975924+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2191768 data_alloc: 218103808 data_used: 1474560
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 33693696 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 325 ms_handle_reset con 0x55819d57d400 session 0x5581a0d38780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:53.976217+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 33685504 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:54.976443+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d44b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819d44b400 session 0x55819c188d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819c77a400 session 0x55819a1874a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 33652736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:55.977056+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819ccd9000 session 0x55819df79860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 33652736 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819d448c00 session 0x55819f3d3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819d57d400 session 0x55819b3c4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:56.977258+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 33644544 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:57.977604+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2198425 data_alloc: 218103808 data_used: 1482752
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 heartbeat osd_stat(store_statfs(0x4f7ed7000/0x0/0x4ffc00000, data 0x1272af2/0x1436000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 33644544 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f665000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:58.977754+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140787712 unmapped: 33374208 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819f664400 session 0x55819df78b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819c77a400 session 0x55819af53e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:01:59.978043+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142024704 unmapped: 32137216 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819f665000 session 0x5581a0d38b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819ccd9000 session 0x55819d903c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 heartbeat osd_stat(store_statfs(0x4f7825000/0x0/0x4ffc00000, data 0x1925af2/0x1ae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:00.978307+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 ms_handle_reset con 0x55819d448c00 session 0x55819cd590e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 32612352 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:01.978926+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 heartbeat osd_stat(store_statfs(0x4f7722000/0x0/0x4ffc00000, data 0x1a28af2/0x1bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 32612352 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:02.979138+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2267048 data_alloc: 218103808 data_used: 1486848
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 32612352 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.935033798s of 12.687599182s, submitted: 110
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:03.979287+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 327 ms_handle_reset con 0x55819d57d400 session 0x55819b210f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 32604160 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:04.979468+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 32604160 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:05.979825+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 327 ms_handle_reset con 0x55819c77a400 session 0x55819df785a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 328 ms_handle_reset con 0x55819d448c00 session 0x55819ace21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141582336 unmapped: 32579584 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:06.980052+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 328 handle_osd_map epochs [328,329], i have 328, src has [1,329]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 329 ms_handle_reset con 0x55819d57d400 session 0x55819d902960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 329 ms_handle_reset con 0x55819ccd9000 session 0x55819c0565a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 32555008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:07.980423+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 329 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x1a2dd75/0x1bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2282036 data_alloc: 218103808 data_used: 1499136
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 32555008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:08.980610+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 32555008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:09.980907+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 32555008 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:10.981091+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 329 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x1a2dd75/0x1bf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f665000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 329 ms_handle_reset con 0x55819f665000 session 0x55819c043680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141647872 unmapped: 32514048 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:11.981235+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 330 ms_handle_reset con 0x55819d448c00 session 0x55819aef41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 32505856 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:12.981444+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2289150 data_alloc: 218103808 data_used: 1511424
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 331 ms_handle_reset con 0x55819ccd9000 session 0x55819b2114a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 331 ms_handle_reset con 0x55819c77a400 session 0x55819dd34d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 32505856 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:13.981694+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.079732895s of 10.232794762s, submitted: 45
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 331 ms_handle_reset con 0x55819d57d400 session 0x55819acdde00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 331 ms_handle_reset con 0x55819f664400 session 0x55819b302000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 150405120 unmapped: 23756800 heap: 174161920 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:14.982166+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 331 heartbeat osd_stat(store_statfs(0x4f774d000/0x0/0x4ffc00000, data 0x19f14e1/0x1bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 332 ms_handle_reset con 0x55819c77a400 session 0x55819b3c21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 142032896 unmapped: 44728320 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 332 ms_handle_reset con 0x55819d448c00 session 0x55819c0421e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 332 ms_handle_reset con 0x55819ccd9000 session 0x55819a59ef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:15.982468+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 332 ms_handle_reset con 0x55819d57d400 session 0x55819cd594a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 46276608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:16.982716+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 46276608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:17.983036+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2605094 data_alloc: 218103808 data_used: 1519616
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 46276608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:18.983185+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 46276608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:19.983355+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 46276608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:20.983567+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 332 heartbeat osd_stat(store_statfs(0x4f4b0c000/0x0/0x4ffc00000, data 0x4633050/0x4802000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 332 ms_handle_reset con 0x55819f664800 session 0x55819b3c3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 46276608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 333 ms_handle_reset con 0x55819c77a400 session 0x55819c0574a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:21.983729+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 46260224 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 334 ms_handle_reset con 0x55819ccd9000 session 0x55819f3d32c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 334 ms_handle_reset con 0x55819f664c00 session 0x55819d6e43c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:22.983867+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2614714 data_alloc: 218103808 data_used: 1531904
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 334 ms_handle_reset con 0x55819d57d400 session 0x55819a133860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 46252032 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:23.984026+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f665800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.057774544s of 10.049736023s, submitted: 105
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 335 ms_handle_reset con 0x55819f665800 session 0x55819e732000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 46252032 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:24.984167+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 335 ms_handle_reset con 0x55819e22e800 session 0x55819b3c52c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 336 ms_handle_reset con 0x55819ccd9000 session 0x55819b3c4b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 336 ms_handle_reset con 0x55819d448c00 session 0x55819b3c05a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140402688 unmapped: 46358528 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:25.984294+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 336 ms_handle_reset con 0x55819d57d400 session 0x55819cd7cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:26.984475+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 336 heartbeat osd_stat(store_statfs(0x4f4aff000/0x0/0x4ffc00000, data 0x4639f5c/0x480e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:27.984604+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2666642 data_alloc: 218103808 data_used: 8142848
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 336 heartbeat osd_stat(store_statfs(0x4f4aff000/0x0/0x4ffc00000, data 0x4639f5c/0x480e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819f664c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 336 ms_handle_reset con 0x55819e22f800 session 0x55819b211860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:28.984752+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 337 ms_handle_reset con 0x55819ccd9000 session 0x55819a1874a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:29.985139+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 ms_handle_reset con 0x55819d448c00 session 0x55819f3d3860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 ms_handle_reset con 0x55819e22e800 session 0x5581a0d39c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 ms_handle_reset con 0x55819d57d400 session 0x55819a133a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 ms_handle_reset con 0x55819f664c00 session 0x55819cd7c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:30.985272+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 ms_handle_reset con 0x55819ccd9000 session 0x55819b3c4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 ms_handle_reset con 0x55819d57d400 session 0x5581a0d394a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:31.985402+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:32.985527+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2675255 data_alloc: 218103808 data_used: 8151040
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 heartbeat osd_stat(store_statfs(0x4f4af8000/0x0/0x4ffc00000, data 0x463dc1c/0x4816000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 140705792 unmapped: 46055424 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:33.985643+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 ms_handle_reset con 0x55819e22e800 session 0x55819a76a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.827129364s of 10.002007484s, submitted: 52
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 339 ms_handle_reset con 0x55819e22f800 session 0x55819a1a0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 339 heartbeat osd_stat(store_statfs(0x4f4af8000/0x0/0x4ffc00000, data 0x463dc1c/0x4816000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 141074432 unmapped: 45686784 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:34.985748+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 19K writes, 70K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 6532 syncs, 2.97 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8126 writes, 29K keys, 8126 commit groups, 1.0 writes per commit group, ingest: 21.48 MB, 0.04 MB/s
                                           Interval WAL: 8126 writes, 3262 syncs, 2.49 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146235392 unmapped: 40525824 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 339 ms_handle_reset con 0x55819e0d6400 session 0x55819c2b5860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:35.985868+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 40443904 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:36.986023+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 151576576 unmapped: 35184640 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:37.986160+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 339 heartbeat osd_stat(store_statfs(0x4f4462000/0x0/0x4ffc00000, data 0x4cd18be/0x4eac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,2,3])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2800357 data_alloc: 234881024 data_used: 17358848
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 153272320 unmapped: 33488896 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:38.986387+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 339 ms_handle_reset con 0x55819d57d400 session 0x55819cd7cb40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 340 ms_handle_reset con 0x55819e0d6400 session 0x55819c3090e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 33095680 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:39.986579+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 341 ms_handle_reset con 0x55819ccd9000 session 0x5581a0d383c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 155017216 unmapped: 31744000 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:40.986729+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 155025408 unmapped: 31735808 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:41.986878+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 342 ms_handle_reset con 0x55819e22e800 session 0x55819dace5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22f800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 155074560 unmapped: 31686656 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 342 ms_handle_reset con 0x55819e22f800 session 0x55819dd35860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:42.987024+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2828892 data_alloc: 234881024 data_used: 19120128
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 155090944 unmapped: 31670272 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 342 heartbeat osd_stat(store_statfs(0x4f436a000/0x0/0x4ffc00000, data 0x4dc7b60/0x4fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:43.987188+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.715765953s of 10.629107475s, submitted: 154
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 155090944 unmapped: 31670272 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:44.987328+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 343 ms_handle_reset con 0x55819ccd9000 session 0x55819dd34000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 160817152 unmapped: 25944064 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:45.987461+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168435712 unmapped: 18325504 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 343 heartbeat osd_stat(store_statfs(0x4f4035000/0x0/0x4ffc00000, data 0x4dc9651/0x4fa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:46.987592+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d57d400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 343 ms_handle_reset con 0x55819e0d6400 session 0x55819dd34780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 343 ms_handle_reset con 0x55819e22e800 session 0x55819c188b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 170811392 unmapped: 15949824 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:47.987742+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 344 ms_handle_reset con 0x55819e0d7c00 session 0x55819e733a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 344 ms_handle_reset con 0x55819d57d400 session 0x55819c1883c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2980997 data_alloc: 234881024 data_used: 20914176
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 170958848 unmapped: 15802368 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:48.987865+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165675008 unmapped: 21086208 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:49.988065+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165675008 unmapped: 21086208 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 345 heartbeat osd_stat(store_statfs(0x4f3483000/0x0/0x4ffc00000, data 0x5ca7c4d/0x5e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:50.988216+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 346 ms_handle_reset con 0x55819ccd9000 session 0x55819b3c4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 346 ms_handle_reset con 0x55819e0d6400 session 0x55819df792c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165691392 unmapped: 21069824 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:51.988383+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165724160 unmapped: 21037056 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:52.988538+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 347 ms_handle_reset con 0x55819e0d7c00 session 0x55819b3c43c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2972735 data_alloc: 234881024 data_used: 20885504
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165748736 unmapped: 21012480 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:53.988724+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 347 heartbeat osd_stat(store_statfs(0x4f347f000/0x0/0x4ffc00000, data 0x5cab399/0x5e8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.420515060s of 10.162627220s, submitted: 217
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165756928 unmapped: 21004288 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:54.988853+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 348 ms_handle_reset con 0x55819e22e800 session 0x55819cd7d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 348 ms_handle_reset con 0x55819e0d7400 session 0x55819a0b5860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165797888 unmapped: 20963328 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:55.989042+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 348 ms_handle_reset con 0x55819d448c00 session 0x55819c1ced20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165797888 unmapped: 20963328 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:56.989154+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 348 ms_handle_reset con 0x55819ccd9000 session 0x55819dacf680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165806080 unmapped: 20955136 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:57.989815+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976387 data_alloc: 234881024 data_used: 20885504
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 20938752 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:58.990173+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 350 ms_handle_reset con 0x55819e0d6400 session 0x55819dd34f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 350 heartbeat osd_stat(store_statfs(0x4f347c000/0x0/0x4ffc00000, data 0x5cae5cb/0x5e91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165830656 unmapped: 20930560 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:02:59.990390+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165830656 unmapped: 20930560 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:00.990829+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 351 ms_handle_reset con 0x55819e0d7c00 session 0x55819d903680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165855232 unmapped: 20905984 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:01.991133+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 351 heartbeat osd_stat(store_statfs(0x4f3475000/0x0/0x4ffc00000, data 0x5cb1c9f/0x5e98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 352 ms_handle_reset con 0x55819e22e800 session 0x55819d61d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 352 ms_handle_reset con 0x55819e0d7000 session 0x55819f3d2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 352 ms_handle_reset con 0x55819ccd9000 session 0x55819acdda40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165888000 unmapped: 20873216 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:02.991249+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2989612 data_alloc: 234881024 data_used: 20885504
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165888000 unmapped: 20873216 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:03.991570+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:04.991784+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165888000 unmapped: 20873216 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.287139893s of 10.403340340s, submitted: 165
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:05.991937+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 20856832 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:06.992171+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 20856832 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:07.992320+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 20856832 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 353 heartbeat osd_stat(store_statfs(0x4f3470000/0x0/0x4ffc00000, data 0x5cb5427/0x5e9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991882 data_alloc: 234881024 data_used: 20893696
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:08.992504+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 20856832 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 354 ms_handle_reset con 0x55819d448c00 session 0x55819b210b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:09.992735+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165904384 unmapped: 20856832 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:10.992867+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165920768 unmapped: 20840448 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 354 ms_handle_reset con 0x55819e0d7c00 session 0x55819d82c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:11.993046+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 165928960 unmapped: 20832256 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 355 ms_handle_reset con 0x55819e0d6c00 session 0x55819df78b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 355 ms_handle_reset con 0x55819ccd9000 session 0x5581a0d39c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 355 ms_handle_reset con 0x55819e0d6c00 session 0x55819b3c52c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:12.993203+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167018496 unmapped: 19742720 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 356 ms_handle_reset con 0x55819e22e800 session 0x55819c0423c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 356 ms_handle_reset con 0x55819e0d6400 session 0x5581a0d38960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3032529 data_alloc: 234881024 data_used: 20914176
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:13.993370+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167018496 unmapped: 19742720 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f3052000/0x0/0x4ffc00000, data 0x5fc5690/0x5eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:14.993530+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167034880 unmapped: 19726336 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:15.993723+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167043072 unmapped: 19718144 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f3052000/0x0/0x4ffc00000, data 0x5fc5690/0x5eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.843559265s of 10.957581520s, submitted: 87
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 357 ms_handle_reset con 0x55819d448c00 session 0x55819dacf860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:16.994056+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167116800 unmapped: 19644416 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:17.994228+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167124992 unmapped: 19636224 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 358 handle_osd_map epochs [358,358], i have 358, src has [1,358]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 358 ms_handle_reset con 0x55819ccd9000 session 0x55819cd7d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 358 ms_handle_reset con 0x55819e0d6400 session 0x55819d61dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036539 data_alloc: 234881024 data_used: 20914176
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:18.994410+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 19619840 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:19.994618+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 19619840 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:20.994823+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 19619840 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:21.995041+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 358 heartbeat osd_stat(store_statfs(0x4f304d000/0x0/0x4ffc00000, data 0x5fc8d8a/0x5eaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167141376 unmapped: 19619840 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 358 ms_handle_reset con 0x55819e0d6c00 session 0x55819b210d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:22.995246+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167493632 unmapped: 19267584 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038341 data_alloc: 234881024 data_used: 20914176
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 358 heartbeat osd_stat(store_statfs(0x4f3025000/0x0/0x4ffc00000, data 0x5ff2d8a/0x5ed9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:23.995414+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167493632 unmapped: 19267584 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:24.995586+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167493632 unmapped: 19267584 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:25.995706+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 359 heartbeat osd_stat(store_statfs(0x4f3021000/0x0/0x4ffc00000, data 0x5ff4809/0x5edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167575552 unmapped: 19185664 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 359 heartbeat osd_stat(store_statfs(0x4f3021000/0x0/0x4ffc00000, data 0x5ff4809/0x5edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:26.995927+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 359 ms_handle_reset con 0x55819e0d7c00 session 0x55819e732000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167591936 unmapped: 19169280 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:27.996112+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167591936 unmapped: 19169280 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044115 data_alloc: 234881024 data_used: 21065728
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:28.996314+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167591936 unmapped: 19169280 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.884469032s of 13.183445930s, submitted: 103
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 ms_handle_reset con 0x55819e0d6800 session 0x55819a133860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:29.996528+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f301e000/0x0/0x4ffc00000, data 0x5ff626c/0x5edf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167600128 unmapped: 19161088 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 ms_handle_reset con 0x55819e0d6800 session 0x55819f3d32c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:30.996634+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 ms_handle_reset con 0x55819ccd9000 session 0x55819c0574a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 18857984 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:31.996760+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 18857984 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:32.996933+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 18857984 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f2ffb000/0x0/0x4ffc00000, data 0x601a26c/0x5f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 ms_handle_reset con 0x55819e0d7c00 session 0x55819b2cc3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3052513 data_alloc: 234881024 data_used: 21065728
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:33.997173+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167919616 unmapped: 18841600 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:34.997325+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167919616 unmapped: 18841600 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 ms_handle_reset con 0x55819dbf3400 session 0x55819cd594a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:35.997511+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167927808 unmapped: 18833408 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 361 ms_handle_reset con 0x55819dbf2000 session 0x55819a59ef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:36.997666+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168099840 unmapped: 18661376 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 362 ms_handle_reset con 0x55819c31b000 session 0x55819d61c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 362 ms_handle_reset con 0x55819a1a2000 session 0x55819c162960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 362 heartbeat osd_stat(store_statfs(0x4f2ff5000/0x0/0x4ffc00000, data 0x601be5b/0x5f08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:37.997789+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168566784 unmapped: 18194432 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080519 data_alloc: 234881024 data_used: 23416832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:38.998008+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168583168 unmapped: 18178048 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 362 ms_handle_reset con 0x55819ccd9000 session 0x55819d82c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 362 heartbeat osd_stat(store_statfs(0x4f2ff1000/0x0/0x4ffc00000, data 0x601da3a/0x5f0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.398385048s of 10.505170822s, submitted: 38
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:39.998365+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168198144 unmapped: 18563072 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 363 ms_handle_reset con 0x55819e0d6800 session 0x55819cd59680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 363 heartbeat osd_stat(store_statfs(0x4f2fec000/0x0/0x4ffc00000, data 0x601f5c6/0x5f10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:40.998547+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167837696 unmapped: 18923520 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 364 ms_handle_reset con 0x55819dbf3400 session 0x55819acdda40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:41.998690+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 364 ms_handle_reset con 0x55819dbf2000 session 0x55819b3b50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167903232 unmapped: 18857984 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:42.998836+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167919616 unmapped: 18841600 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 364 ms_handle_reset con 0x55819a1a2000 session 0x55819c1ced20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3104593 data_alloc: 234881024 data_used: 23826432
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 365 ms_handle_reset con 0x55819dbf3400 session 0x55819b3c0960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:43.999023+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 17956864 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:44.999187+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168804352 unmapped: 17956864 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 365 heartbeat osd_stat(store_statfs(0x4f2f7b000/0x0/0x4ffc00000, data 0x608fd05/0x5f82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 365 ms_handle_reset con 0x55819ccd9000 session 0x55819df792c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:45.999300+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 366 ms_handle_reset con 0x55819e0d6800 session 0x55819a1a0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169459712 unmapped: 17301504 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:46.999445+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 367 ms_handle_reset con 0x55819e0d7c00 session 0x55819a133c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 367 ms_handle_reset con 0x55819a1a2000 session 0x55819d82c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169697280 unmapped: 17063936 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 367 ms_handle_reset con 0x55819c31b000 session 0x5581a0d39a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:47.999586+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169762816 unmapped: 16998400 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131025 data_alloc: 234881024 data_used: 25280512
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:48.999681+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 16949248 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 367 handle_osd_map epochs [368,368], i have 368, src has [1,368]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 368 ms_handle_reset con 0x55819dbf2000 session 0x55819c17d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf3400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 368 heartbeat osd_stat(store_statfs(0x4f2f6e000/0x0/0x4ffc00000, data 0x60966b2/0x5f8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:49.999838+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.848918438s of 10.073059082s, submitted: 134
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169844736 unmapped: 16916480 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 368 ms_handle_reset con 0x55819dbf3400 session 0x55819d82d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 368 ms_handle_reset con 0x55819ccd9000 session 0x55819b3b4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 368 ms_handle_reset con 0x55819a1a2000 session 0x55819d70c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:51.000041+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169902080 unmapped: 16859136 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:52.000221+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 368 ms_handle_reset con 0x55819c31b000 session 0x55819d70d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169902080 unmapped: 16859136 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:53.000396+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169902080 unmapped: 16859136 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131935 data_alloc: 234881024 data_used: 25280512
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:54.000543+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169910272 unmapped: 16850944 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 369 ms_handle_reset con 0x55819e0d7c00 session 0x55819d6e4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 369 heartbeat osd_stat(store_statfs(0x4f2f6b000/0x0/0x4ffc00000, data 0x6098e0d/0x5f91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:55.000751+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169926656 unmapped: 16834560 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c6fcc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 370 ms_handle_reset con 0x55819c6fcc00 session 0x55819c034f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 370 ms_handle_reset con 0x55819dbf2000 session 0x55819d70da40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:56.000939+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169934848 unmapped: 16826368 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:57.001127+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 169951232 unmapped: 16809984 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 371 ms_handle_reset con 0x55819a1a2000 session 0x55819b2ccd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 371 ms_handle_reset con 0x55819c31b000 session 0x55819d6e4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:58.001254+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 371 heartbeat osd_stat(store_statfs(0x4f2f69000/0x0/0x4ffc00000, data 0x609ae54/0x5f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 170041344 unmapped: 16719872 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3144014 data_alloc: 234881024 data_used: 25292800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:03:59.001435+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 170049536 unmapped: 16711680 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 371 ms_handle_reset con 0x55819ccd9000 session 0x55819b3b50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:00.001662+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 170049536 unmapped: 16711680 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.347164154s of 10.841417313s, submitted: 100
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 372 heartbeat osd_stat(store_statfs(0x4f2f65000/0x0/0x4ffc00000, data 0x609cac3/0x5f99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:01.001801+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 170065920 unmapped: 16695296 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:02.002012+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171229184 unmapped: 15532032 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d316000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 373 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x609e57a/0x5f9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 373 ms_handle_reset con 0x55819d316000 session 0x55819c034960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:03.002189+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 373 ms_handle_reset con 0x55819e22e800 session 0x55819c05b680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 373 ms_handle_reset con 0x55819e0d7000 session 0x55819b3c3c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171261952 unmapped: 15499264 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 373 ms_handle_reset con 0x55819a1a2000 session 0x55819e733860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 374 ms_handle_reset con 0x55819c31b000 session 0x55819cd581e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 374 ms_handle_reset con 0x55819ccd9000 session 0x55819d61c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 374 ms_handle_reset con 0x55819e0d7c00 session 0x55819a59ef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3154112 data_alloc: 234881024 data_used: 25165824
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:04.002530+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171278336 unmapped: 15482880 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:05.002721+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171327488 unmapped: 15433728 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 ms_handle_reset con 0x55819c31b000 session 0x55819b2110e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 ms_handle_reset con 0x55819a1a2000 session 0x55819dd35860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:06.003030+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 ms_handle_reset con 0x55819ccd9000 session 0x55819b3c4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 ms_handle_reset con 0x55819e0d7000 session 0x55819c1ce5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171368448 unmapped: 15392768 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 ms_handle_reset con 0x55819a1a2000 session 0x55819af53e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:07.003200+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 15384576 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 heartbeat osd_stat(store_statfs(0x4f2fee000/0x0/0x4ffc00000, data 0x600c7b5/0x5f0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 ms_handle_reset con 0x55819c31b000 session 0x55819f3d3c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 377 ms_handle_reset con 0x55819ccd9000 session 0x55819d61cb40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:08.003597+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 15376384 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 377 ms_handle_reset con 0x55819c77a400 session 0x55819c0185a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819dbf2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 377 ms_handle_reset con 0x55819dbf2000 session 0x55819a0b5e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 377 ms_handle_reset con 0x55819e22e800 session 0x55819dd345a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 377 ms_handle_reset con 0x55819a1a2000 session 0x55819dd341e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140990 data_alloc: 234881024 data_used: 25157632
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:09.003749+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 377 ms_handle_reset con 0x55819c77a400 session 0x55819b210b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 15368192 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 ms_handle_reset con 0x55819c31b000 session 0x55819cd58000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 ms_handle_reset con 0x55819ccd9000 session 0x55819c0570e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 ms_handle_reset con 0x55819e0d7c00 session 0x55819a618000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:10.004278+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 heartbeat osd_stat(store_statfs(0x4f2fef000/0x0/0x4ffc00000, data 0x5d03334/0x5f0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 171409408 unmapped: 15351808 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 ms_handle_reset con 0x55819a1a2000 session 0x55819dace000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.979020119s of 10.004898071s, submitted: 269
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 ms_handle_reset con 0x55819c31b000 session 0x55819d61c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:11.004471+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167247872 unmapped: 19513344 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 heartbeat osd_stat(store_statfs(0x4f3ee6000/0x0/0x4ffc00000, data 0x4e0bf2f/0x5018000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 379 ms_handle_reset con 0x55819c77a400 session 0x55819c042d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:12.004618+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 379 ms_handle_reset con 0x55819e22e800 session 0x55819b2cc3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 19472384 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:13.004826+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167313408 unmapped: 19447808 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 379 ms_handle_reset con 0x55819e0d6400 session 0x55819b3c3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 379 ms_handle_reset con 0x55819e0d6c00 session 0x55819c1cef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2959392 data_alloc: 234881024 data_used: 15929344
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:14.004904+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 380 ms_handle_reset con 0x55819a1a2000 session 0x55819aef4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 380 ms_handle_reset con 0x55819c31b000 session 0x55819c17d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 380 ms_handle_reset con 0x55819e0d7c00 session 0x55819d903c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 380 ms_handle_reset con 0x55819c77a400 session 0x55819a618b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168558592 unmapped: 18202624 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:15.005314+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168558592 unmapped: 18202624 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:16.005493+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 382 ms_handle_reset con 0x55819a1a2000 session 0x55819af52f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 382 ms_handle_reset con 0x55819c31b000 session 0x55819d61d0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 18186240 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 382 heartbeat osd_stat(store_statfs(0x4f3f00000/0x0/0x4ffc00000, data 0x4dee4f6/0x4ffb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:17.005861+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 383 ms_handle_reset con 0x55819e0d6400 session 0x55819b3c3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 18186240 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:18.008928+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 168574976 unmapped: 18186240 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 383 ms_handle_reset con 0x55819e0d7c00 session 0x55819d70d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2786632 data_alloc: 218103808 data_used: 8916992
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:19.009080+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 384 ms_handle_reset con 0x55819a1a2000 session 0x55819dd343c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157712384 unmapped: 29048832 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 384 handle_osd_map epochs [385,385], i have 385, src has [1,385]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 385 ms_handle_reset con 0x55819e0d6c00 session 0x55819dace000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 385 ms_handle_reset con 0x55819c31b000 session 0x55819d61cb40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:20.009515+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f7a18000/0x0/0x4ffc00000, data 0x12d6cc6/0x14e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 28991488 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:21.009702+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f7a17000/0x0/0x4ffc00000, data 0x12d88b5/0x14e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 28991488 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:22.009914+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 28991488 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:23.010105+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 28991488 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.028052330s of 12.923688889s, submitted: 287
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 385 ms_handle_reset con 0x55819c77a400 session 0x55819b3b50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:24.010259+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2460950 data_alloc: 218103808 data_used: 1679360
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 28991488 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 385 heartbeat osd_stat(store_statfs(0x4f7a16000/0x0/0x4ffc00000, data 0x12d8917/0x14e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:25.010433+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 386 ms_handle_reset con 0x55819e0d6400 session 0x55819d6e4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 27942912 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:26.010586+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 27942912 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 386 ms_handle_reset con 0x55819a1a2000 session 0x55819b2ccd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:27.010803+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 386 ms_handle_reset con 0x55819c31b000 session 0x55819b3c2780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 386 ms_handle_reset con 0x55819c77a400 session 0x55819c309c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157827072 unmapped: 28934144 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d6c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 387 ms_handle_reset con 0x55819e0d7c00 session 0x55819df79680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:28.011046+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157835264 unmapped: 28925952 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 388 ms_handle_reset con 0x55819e0d6c00 session 0x55819a186780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 388 heartbeat osd_stat(store_statfs(0x4f7347000/0x0/0x4ffc00000, data 0x19a5f25/0x1bb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:29.011277+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2519400 data_alloc: 218103808 data_used: 1691648
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157843456 unmapped: 28917760 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f7344000/0x0/0x4ffc00000, data 0x19a7abe/0x1bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 389 ms_handle_reset con 0x55819a1a2000 session 0x55819ace2780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:30.011582+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 27860992 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:31.011806+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 27860992 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 389 ms_handle_reset con 0x55819c77a400 session 0x55819a0b54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:32.012026+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 389 heartbeat osd_stat(store_statfs(0x4f733e000/0x0/0x4ffc00000, data 0x19a9593/0x1bbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 27860992 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4cb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 390 ms_handle_reset con 0x55819e0d7c00 session 0x55819f3d32c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:33.012178+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158916608 unmapped: 27844608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 391 ms_handle_reset con 0x55819e4cb000 session 0x55819a76af00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.800370216s of 10.054938316s, submitted: 82
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 391 ms_handle_reset con 0x55819c827000 session 0x55819dd34b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 391 ms_handle_reset con 0x55819c31b000 session 0x55819b2114a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:34.012351+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2543686 data_alloc: 218103808 data_used: 1708032
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158916608 unmapped: 27844608 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:35.012546+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 391 ms_handle_reset con 0x55819d65e000 session 0x55819acdde00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 391 ms_handle_reset con 0x55819c77a400 session 0x55819dd350e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158924800 unmapped: 27836416 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 ms_handle_reset con 0x55819a1a2000 session 0x55819c308780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4cb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 ms_handle_reset con 0x55819e4cb000 session 0x55819c056780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:36.012694+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 ms_handle_reset con 0x55819e0d7c00 session 0x55819d902b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158941184 unmapped: 27820032 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 ms_handle_reset con 0x55819a1a2000 session 0x55819a6183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 ms_handle_reset con 0x55819c31b000 session 0x55819cd581e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:37.012836+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157523968 unmapped: 29237248 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:38.013054+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 heartbeat osd_stat(store_statfs(0x4f72f5000/0x0/0x4ffc00000, data 0x19eed01/0x1c08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 157523968 unmapped: 29237248 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:39.013191+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2573520 data_alloc: 218103808 data_used: 5697536
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 27967488 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 ms_handle_reset con 0x55819c77b400 session 0x55819d61cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:40.013405+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 27967488 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd40400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 393 ms_handle_reset con 0x55819cd40400 session 0x55819d61cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:41.013654+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 27951104 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 394 ms_handle_reset con 0x55819c77a800 session 0x55819b3c4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:42.013882+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 27934720 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:43.014041+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 27934720 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.754673004s of 10.097013474s, submitted: 71
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:44.014159+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 395 heartbeat osd_stat(store_statfs(0x4f72ec000/0x0/0x4ffc00000, data 0x19f24f7/0x1c10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 395 ms_handle_reset con 0x55819a1a2000 session 0x55819c056780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2609680 data_alloc: 218103808 data_used: 8847360
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 27926528 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:45.014378+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 27918336 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:46.014488+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 397 ms_handle_reset con 0x55819c31b000 session 0x55819b2114a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 27877376 heap: 186761216 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:47.014603+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 397 ms_handle_reset con 0x55819c77b400 session 0x55819c17d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e0d7c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c748400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 175775744 unmapped: 19390464 heap: 195166208 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:48.014777+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 397 handle_osd_map epochs [398,398], i have 398, src has [1,398]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 398 heartbeat osd_stat(store_statfs(0x4f3f22000/0x0/0x4ffc00000, data 0x4db9297/0x4fdb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,2,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 163315712 unmapped: 36052992 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 398 ms_handle_reset con 0x55819e0d7c00 session 0x55819dd343c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 398 ms_handle_reset con 0x55819c748400 session 0x55819af52960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 398 ms_handle_reset con 0x55819a1a2000 session 0x55819cd7c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 398 ms_handle_reset con 0x55819c31b000 session 0x55819c0183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:49.015000+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3008182 data_alloc: 218103808 data_used: 8855552
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 39378944 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f3b25000/0x0/0x4ffc00000, data 0x51b8dca/0x53d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:50.015183+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f34da000/0x0/0x4ffc00000, data 0x5801881/0x5a23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161857536 unmapped: 37511168 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 399 handle_osd_map epochs [400,400], i have 400, src has [1,400]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 400 ms_handle_reset con 0x55819c77a800 session 0x55819b3c0960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:51.015411+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161988608 unmapped: 37380096 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:52.015579+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161988608 unmapped: 37380096 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:53.015815+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161734656 unmapped: 37634048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 401 ms_handle_reset con 0x55819c77b400 session 0x55819f3d3e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:54.015982+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073050 data_alloc: 218103808 data_used: 8941568
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161734656 unmapped: 37634048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:55.016135+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161742848 unmapped: 37625856 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:56.016270+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 401 heartbeat osd_stat(store_statfs(0x4f309f000/0x0/0x4ffc00000, data 0x582a03f/0x5a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.486104965s of 12.348516464s, submitted: 311
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161751040 unmapped: 37617664 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 402 ms_handle_reset con 0x55819a1a2000 session 0x55819d6e50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:57.016445+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161751040 unmapped: 37617664 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:58.016603+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161751040 unmapped: 37617664 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:04:59.016759+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 402 ms_handle_reset con 0x55819c77a400 session 0x55819df79860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 402 ms_handle_reset con 0x55819c31b000 session 0x55819c1621e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 402 ms_handle_reset con 0x55819d65e000 session 0x55819d6e4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3077180 data_alloc: 218103808 data_used: 8941568
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c748400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161751040 unmapped: 37617664 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 402 ms_handle_reset con 0x55819c77a800 session 0x55819d902f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:00.017028+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 402 heartbeat osd_stat(store_statfs(0x4f3099000/0x0/0x4ffc00000, data 0x582fada/0x5a55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161767424 unmapped: 37601280 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819a1a2000 session 0x55819a6183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:01.017222+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161783808 unmapped: 37584896 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819c748400 session 0x55819e7323c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:02.017403+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161792000 unmapped: 37576704 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:03.017522+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161800192 unmapped: 37568512 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f3096000/0x0/0x4ffc00000, data 0x5831565/0x5a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:04.017629+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080790 data_alloc: 218103808 data_used: 8949760
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f3096000/0x0/0x4ffc00000, data 0x5831565/0x5a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 161800192 unmapped: 37568512 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:05.017774+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f3096000/0x0/0x4ffc00000, data 0x5831565/0x5a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:06.018066+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:07.018200+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:08.018436+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f3096000/0x0/0x4ffc00000, data 0x5831565/0x5a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:09.018704+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151510 data_alloc: 234881024 data_used: 18104320
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:10.019070+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f3096000/0x0/0x4ffc00000, data 0x5831565/0x5a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:11.019227+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:12.019406+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:13.019585+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f3096000/0x0/0x4ffc00000, data 0x5831565/0x5a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:14.020423+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151510 data_alloc: 234881024 data_used: 18104320
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 167288832 unmapped: 32079872 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:15.021034+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.521654129s of 19.203334808s, submitted: 54
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 21659648 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:16.021276+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1ac7000/0x0/0x4ffc00000, data 0x5831565/0x5a57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 179281920 unmapped: 20086784 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:17.021497+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 179822592 unmapped: 19546112 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:18.022033+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 175169536 unmapped: 24199168 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819d65e000 session 0x5581a0d383c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:19.022217+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3271274 data_alloc: 234881024 data_used: 19111936
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 175169536 unmapped: 24199168 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c826000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819c826000 session 0x55819dd34f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:20.022588+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819cd42c00 session 0x55819c309860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819a1a2000 session 0x55819c05a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 175177728 unmapped: 24190976 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:21.022758+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c748400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 175185920 unmapped: 24182784 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:22.022904+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c826000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 175202304 unmapped: 24166400 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:23.023040+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 175964160 unmapped: 23404544 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:24.023184+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3270661 data_alloc: 234881024 data_used: 19668992
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176054272 unmapped: 23314432 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819c31b000 session 0x55819c05ba40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819c77a400 session 0x55819990a960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:25.023326+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819cd42c00 session 0x55819b2101e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:26.023484+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:27.023621+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:28.023779+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:29.023929+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3268057 data_alloc: 234881024 data_used: 19668992
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:30.024110+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:31.024234+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:32.024356+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176070656 unmapped: 23298048 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:33.024589+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176185344 unmapped: 23183360 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:34.025104+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275737 data_alloc: 234881024 data_used: 20852736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:35.025267+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:36.025403+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:37.025567+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:38.025713+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1192000/0x0/0x4ffc00000, data 0x6595575/0x67bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.471292496s of 23.438802719s, submitted: 180
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:39.025867+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3278459 data_alloc: 234881024 data_used: 20852736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:40.026021+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:41.026162+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1186000/0x0/0x4ffc00000, data 0x65a1575/0x67c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 23126016 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:42.026296+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1186000/0x0/0x4ffc00000, data 0x65a1575/0x67c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:43.026409+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1186000/0x0/0x4ffc00000, data 0x65a1575/0x67c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:44.026541+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3278459 data_alloc: 234881024 data_used: 20852736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:45.026664+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1186000/0x0/0x4ffc00000, data 0x65a1575/0x67c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:46.026834+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1186000/0x0/0x4ffc00000, data 0x65a1575/0x67c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:47.027026+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1186000/0x0/0x4ffc00000, data 0x65a1575/0x67c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:48.027183+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:49.027318+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3278031 data_alloc: 234881024 data_used: 20848640
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.485398293s of 10.539136887s, submitted: 8
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 ms_handle_reset con 0x55819d65e000 session 0x55819dd34b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176250880 unmapped: 23117824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:50.027488+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1184000/0x0/0x4ffc00000, data 0x65a2598/0x67ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1184000/0x0/0x4ffc00000, data 0x65a2598/0x67ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176259072 unmapped: 23109632 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:51.027655+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176259072 unmapped: 23109632 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:52.027785+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176259072 unmapped: 23109632 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1184000/0x0/0x4ffc00000, data 0x65a2598/0x67ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:53.027944+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176275456 unmapped: 23093248 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:54.028135+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3280104 data_alloc: 234881024 data_used: 20963328
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176275456 unmapped: 23093248 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:55.028254+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176275456 unmapped: 23093248 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:56.028406+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176275456 unmapped: 23093248 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:57.028621+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176275456 unmapped: 23093248 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:58.028828+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f1184000/0x0/0x4ffc00000, data 0x65a2598/0x67ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176275456 unmapped: 23093248 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:05:59.029050+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3283784 data_alloc: 234881024 data_used: 21794816
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176308224 unmapped: 23060480 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:00.029312+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.792782784s of 10.818713188s, submitted: 7
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f1184000/0x0/0x4ffc00000, data 0x65a2598/0x67ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c31b000 session 0x55819b210b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c77a400 session 0x55819a132f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819cd42c00 session 0x55819c05b0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819d65e000 session 0x55819b3c2d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176308224 unmapped: 23060480 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c76d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:01.029805+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c76d800 session 0x55819cd581e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c31b000 session 0x55819d61d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c77a400 session 0x55819b3c3c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819cd42c00 session 0x55819d9034a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819d65e000 session 0x55819d61d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f0456000/0x0/0x4ffc00000, data 0x6ead187/0x70d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x86cf9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c75a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c75a800 session 0x55819c1623c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819d40d800 session 0x55819cd581e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176816128 unmapped: 22552576 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:02.029921+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176816128 unmapped: 22552576 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:03.030045+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 176939008 unmapped: 22429696 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:04.030206+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378968 data_alloc: 234881024 data_used: 22667264
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f18b5000/0x0/0x4ffc00000, data 0x6ead1e9/0x70d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 177160192 unmapped: 22208512 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:05.030376+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 21659648 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:06.030526+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 21659648 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:07.030691+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 21659648 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:08.030840+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c31b000 session 0x55819c05b0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c75a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f18b4000/0x0/0x4ffc00000, data 0x6ead20c/0x70da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 177725440 unmapped: 21643264 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:09.030993+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f18b4000/0x0/0x4ffc00000, data 0x6ead20c/0x70da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390026 data_alloc: 234881024 data_used: 23478272
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 178716672 unmapped: 20652032 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:10.031136+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f18b4000/0x0/0x4ffc00000, data 0x6ead20c/0x70da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185565184 unmapped: 13803520 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:11.031269+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819cd42c00 session 0x55819aef4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185565184 unmapped: 13803520 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:12.031539+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185565184 unmapped: 13803520 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:13.031686+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185565184 unmapped: 13803520 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819d65e000 session 0x55819dacf860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:14.031817+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3448746 data_alloc: 251658240 data_used: 31645696
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.951912880s of 14.376818657s, submitted: 53
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185565184 unmapped: 13803520 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:15.032377+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c2eb000 session 0x55819dd35680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 13115392 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:16.033134+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f18b4000/0x0/0x4ffc00000, data 0x6ead20c/0x70da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819a1a2000 session 0x55819b3c10e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c2eb000 session 0x55819c188960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186449920 unmapped: 12918784 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:17.033329+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c31b000 session 0x55819b303680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 12877824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:18.033455+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186490880 unmapped: 12877824 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:19.033646+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3458829 data_alloc: 251658240 data_used: 32477184
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f188e000/0x0/0x4ffc00000, data 0x6ed121b/0x70ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186523648 unmapped: 12845056 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:20.034210+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186982400 unmapped: 12386304 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:21.034331+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 192995328 unmapped: 6373376 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:22.034455+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 195715072 unmapped: 3653632 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:23.034641+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 195715072 unmapped: 3653632 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:24.034990+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580251 data_alloc: 251658240 data_used: 34861056
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 195715072 unmapped: 3653632 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:25.035258+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.595991135s of 10.558649063s, submitted: 125
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819d65e000 session 0x55819d82c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c760400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4ef8a8000/0x0/0x4ffc00000, data 0x7d1721b/0x7f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819c760400 session 0x55819c042960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 13500416 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:26.035489+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 13500416 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:27.035676+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 13500416 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:28.036047+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 13500416 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:29.036367+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995699 data_alloc: 234881024 data_used: 21200896
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 13500416 heap: 199368704 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:30.036704+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 ms_handle_reset con 0x55819a1a2000 session 0x55819c056780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f3d06000/0x0/0x4ffc00000, data 0x37b31b9/0x39e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186515456 unmapped: 16572416 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:31.036838+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186540032 unmapped: 16547840 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:32.036944+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186540032 unmapped: 16547840 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:33.037134+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186540032 unmapped: 16547840 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:34.037386+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035832 data_alloc: 234881024 data_used: 21409792
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186540032 unmapped: 16547840 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:35.037574+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186540032 unmapped: 16547840 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:36.037764+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f3b88000/0x0/0x4ffc00000, data 0x3aa71b9/0x3c66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.089320183s of 11.323429108s, submitted: 64
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 16384000 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:37.037920+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 16384000 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:38.038085+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f3b88000/0x0/0x4ffc00000, data 0x3aa71b9/0x3c66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:39.038274+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 16384000 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f3b88000/0x0/0x4ffc00000, data 0x3aa71b9/0x3c66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035704 data_alloc: 234881024 data_used: 21409792
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:40.038457+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 16384000 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:41.038616+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 16384000 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f3b88000/0x0/0x4ffc00000, data 0x3aa71b9/0x3c66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:42.038770+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186703872 unmapped: 16384000 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:43.039100+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186810368 unmapped: 16277504 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:44.039303+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186826752 unmapped: 16261120 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044048 data_alloc: 234881024 data_used: 21434368
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f3b63000/0x0/0x4ffc00000, data 0x3acc1b9/0x3c8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 404 handle_osd_map epochs [405,405], i have 405, src has [1,405]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:45.039548+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186826752 unmapped: 16261120 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 405 ms_handle_reset con 0x55819cd42c00 session 0x55819a133a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 405 ms_handle_reset con 0x55819d40d800 session 0x55819dace960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:46.039719+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186859520 unmapped: 16228352 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 405 ms_handle_reset con 0x55819c31b000 session 0x55819d9030e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.609722137s of 10.191322327s, submitted: 29
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:47.039874+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186884096 unmapped: 16203776 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:48.040065+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186884096 unmapped: 16203776 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:49.040284+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186908672 unmapped: 16179200 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d65e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 407 ms_handle_reset con 0x55819d65e000 session 0x55819b3d9860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053905 data_alloc: 234881024 data_used: 21340160
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 407 ms_handle_reset con 0x55819a1a2000 session 0x5581a0d38d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 407 heartbeat osd_stat(store_statfs(0x4f3b61000/0x0/0x4ffc00000, data 0x3ad2460/0x3c8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:50.040582+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186916864 unmapped: 16171008 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 407 ms_handle_reset con 0x55819c2eb000 session 0x55819c17d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 407 ms_handle_reset con 0x55819c31b000 session 0x55819d902f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:51.040742+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186916864 unmapped: 16171008 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f3de7000/0x0/0x4ffc00000, data 0x37dff6d/0x3a05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 ms_handle_reset con 0x55819cd42c00 session 0x55819a132f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:52.040919+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186916864 unmapped: 16171008 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:53.041112+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186916864 unmapped: 16171008 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 ms_handle_reset con 0x55819c748400 session 0x55819b3c2f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 ms_handle_reset con 0x55819c826000 session 0x55819c019680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f3de7000/0x0/0x4ffc00000, data 0x37dff6d/0x3a05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:54.041249+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186916864 unmapped: 16171008 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028899 data_alloc: 234881024 data_used: 21258240
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 ms_handle_reset con 0x55819a1a2000 session 0x55819cd7c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:55.041384+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f3de9000/0x0/0x4ffc00000, data 0x37dff6d/0x3a05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186933248 unmapped: 16154624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f3dea000/0x0/0x4ffc00000, data 0x37dff5d/0x3a04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:56.041543+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186933248 unmapped: 16154624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:57.041806+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186933248 unmapped: 16154624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f3dea000/0x0/0x4ffc00000, data 0x37dff5d/0x3a04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:58.042061+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186933248 unmapped: 16154624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.054027557s of 11.583190918s, submitted: 62
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:06:59.042228+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186941440 unmapped: 16146432 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028145 data_alloc: 234881024 data_used: 21364736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:00.042433+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 ms_handle_reset con 0x55819c2eb000 session 0x55819dace5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186941440 unmapped: 16146432 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819c31b000 session 0x55819b3c2960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:01.042654+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:02.042847+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c748400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd42c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819cd42c00 session 0x55819d902b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819c748400 session 0x55819d70cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4b2c000/0x0/0x4ffc00000, data 0x2a9aa32/0x2cc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:03.043125+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:04.043315+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2871997 data_alloc: 234881024 data_used: 12140544
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:05.043519+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:06.043699+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:07.043854+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4b2c000/0x0/0x4ffc00000, data 0x2a9aa32/0x2cc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:08.044043+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.588365555s of 10.696976662s, submitted: 35
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:09.044192+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2872129 data_alloc: 234881024 data_used: 12140544
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:10.044457+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180953088 unmapped: 22134784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4b2c000/0x0/0x4ffc00000, data 0x2a9aa32/0x2cc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:11.044620+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 21569536 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:12.044787+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181518336 unmapped: 21569536 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819a1a2000 session 0x55819b3c21e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:13.044996+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182566912 unmapped: 20520960 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4aa3000/0x0/0x4ffc00000, data 0x2b22a32/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:14.045167+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819c2eb000 session 0x55819df79e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 21143552 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2881199 data_alloc: 234881024 data_used: 12247040
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:15.045413+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 21143552 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:16.072119+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 21143552 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:17.072294+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 21143552 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:18.072439+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 21143552 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:19.072614+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 21143552 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4aa4000/0x0/0x4ffc00000, data 0x2b22a32/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2881606 data_alloc: 234881024 data_used: 12251136
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:20.072759+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181952512 unmapped: 21135360 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.581878662s of 11.680472374s, submitted: 21
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819c31b000 session 0x55819f3d3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:21.072880+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181952512 unmapped: 21135360 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c826000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819c826000 session 0x55819d6e45a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:22.073008+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181968896 unmapped: 21118976 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819a1a2000 session 0x55819d82c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f4aa4000/0x0/0x4ffc00000, data 0x2b22a32/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:23.073128+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181968896 unmapped: 21118976 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819c2eb000 session 0x55819b3c2b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 ms_handle_reset con 0x55819c31b000 session 0x55819b3025a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:24.073249+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181985280 unmapped: 21102592 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
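`handle_osd_map epochs [410,410], i have 409, src has [1,410]` is the OSD catching up on the cluster map: it holds epoch 409, the sender can supply anything from 1 to 410, and this message carries exactly 410, so one increment is applied and the following lines switch their prefix to `osd.0 410`. A toy model of that bookkeeping (illustrative only, not Ceph's implementation):

```python
# Toy model of the epoch catch-up visible in the handle_osd_map lines.
def handle_osd_map(have: int, first: int, last: int) -> int:
    """Apply incremental maps [first, last] if they extend what we hold."""
    if last <= have:
        return have                        # nothing new in this message
    for epoch in range(max(first, have + 1), last + 1):
        have = epoch                       # apply each increment in order
    return have

print(handle_osd_map(409, 410, 410))  # -> 410, matching the next "osd.0 410" lines
```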
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2876634 data_alloc: 234881024 data_used: 12255232
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:25.073672+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c748400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 410 ms_handle_reset con 0x55819c748400 session 0x55819a76a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c826000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 21078016 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 410 ms_handle_reset con 0x55819c826000 session 0x55819af52d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 410 heartbeat osd_stat(store_statfs(0x4f4b2d000/0x0/0x4ffc00000, data 0x2a9a9c0/0x2cc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:26.073873+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 21078016 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 410 ms_handle_reset con 0x55819a1a2000 session 0x55819b3d92c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
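The subscription renewal goes to the monitor over msgr2: `v2:192.168.122.100:3300/0` is protocol v2, the host, port 3300 (the default msgr2 monitor port; 6789 is the legacy v1 port), and a nonce of 0. Splitting such an address is straightforward:

```python
import re

ADDR = "v2:192.168.122.100:3300/0"  # entity address from the line above

# protocol : host : port / nonce
proto, host, port, nonce = re.fullmatch(r"(v\d):(.+):(\d+)/(\d+)", ADDR).groups()
print(proto, host, port, nonce)  # -> v2 192.168.122.100 3300 0
```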
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:27.074028+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 411 ms_handle_reset con 0x55819c2eb000 session 0x55819c1883c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 411 ms_handle_reset con 0x55819c31b000 session 0x55819d6e5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182018048 unmapped: 21069824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c748400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 411 ms_handle_reset con 0x55819c748400 session 0x55819c057a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:28.074280+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 411 heartbeat osd_stat(store_statfs(0x4f4b27000/0x0/0x4ffc00000, data 0x2a9e10e/0x2cc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182018048 unmapped: 21069824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c767400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 411 ms_handle_reset con 0x55819c767400 session 0x55819dd354a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 412 ms_handle_reset con 0x55819d40d800 session 0x55819c162d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 412 ms_handle_reset con 0x55819a1a2000 session 0x55819d61c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:29.074498+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 21012480 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 412 ms_handle_reset con 0x55819c31b000 session 0x55819d70de00
Nov 29 08:16:55 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 08:16:55 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3541332177' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
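Amid the OSD chatter, the co-located monitor records an audit event: `client.admin` from 192.168.122.100 dispatched `{"prefix": "config dump"}`, consistent with someone running `ceph config dump` against the cluster. A small sketch recovering the entity and command from the audit record (the regex is fitted to this line, not a general audit-log parser):

```python
import json
import re

AUDIT = ("log_channel(audit) log [DBG] : from='client.? "
         "192.168.122.100:0/3541332177' entity='client.admin' "
         'cmd=[{"prefix": "config dump"}]: dispatch')

m = re.search(r"entity='([^']+)' cmd=(\[.*\]): dispatch", AUDIT)
entity, cmd = m.group(1), json.loads(m.group(2))
print(entity, cmd[0]["prefix"])  # -> client.admin config dump
```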
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2881188 data_alloc: 234881024 data_used: 12267520
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:30.074718+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 21012480 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.489787102s of 10.035515785s, submitted: 159
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 413 ms_handle_reset con 0x55819c2eb000 session 0x55819aef41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:31.074899+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c748400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182083584 unmapped: 21004288 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 413 ms_handle_reset con 0x55819c748400 session 0x55819c3090e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:32.075143+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182083584 unmapped: 21004288 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 413 heartbeat osd_stat(store_statfs(0x4f4b63000/0x0/0x4ffc00000, data 0x2a579f4/0x2c8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 413 ms_handle_reset con 0x55819c75a800 session 0x55819990a960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 413 ms_handle_reset con 0x55819c77a400 session 0x55819dd345a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:33.075307+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 29474816 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 413 ms_handle_reset con 0x55819a1a2000 session 0x55819cd7de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:34.075486+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173621248 unmapped: 29466624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2632852 data_alloc: 218103808 data_used: 1843200
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:35.075730+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173621248 unmapped: 29466624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f62b0000/0x0/0x4ffc00000, data 0x130a45c/0x153d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:36.075894+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 414 ms_handle_reset con 0x55819c2eb000 session 0x55819b3c54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173621248 unmapped: 29466624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:37.076106+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173621248 unmapped: 29466624 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:38.076312+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f62ae000/0x0/0x4ffc00000, data 0x130a4ce/0x153f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173645824 unmapped: 29442048 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 414 ms_handle_reset con 0x55819d40d800 session 0x55819c0574a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 415 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 415 ms_handle_reset con 0x55819a1a2000 session 0x55819d82d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:39.076506+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173670400 unmapped: 29417472 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 416 ms_handle_reset con 0x55819c2eb000 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2652621 data_alloc: 218103808 data_used: 1867776
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 416 ms_handle_reset con 0x55819c31b000 session 0x55819cd58b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 416 heartbeat osd_stat(store_statfs(0x4f62a9000/0x0/0x4ffc00000, data 0x130c147/0x1544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:40.076750+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173678592 unmapped: 29409280 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:41.076893+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c75a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.113725662s of 10.441638947s, submitted: 102
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 417 ms_handle_reset con 0x55819c75a800 session 0x55819c056000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c77a400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173694976 unmapped: 29392896 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 417 ms_handle_reset con 0x55819c77a400 session 0x55819d902960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 417 ms_handle_reset con 0x55819a1a2000 session 0x55819c05a5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f62a1000/0x0/0x4ffc00000, data 0x130f85b/0x154c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 417 ms_handle_reset con 0x55819c2eb000 session 0x55819a1a0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 417 ms_handle_reset con 0x55819c31b000 session 0x55819aef52c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:42.077086+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c75a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173735936 unmapped: 29351936 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:43.077280+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819c75a800 session 0x55819d902d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173735936 unmapped: 29351936 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d40d800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819d40d800 session 0x55819cd7c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:44.077477+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173735936 unmapped: 29351936 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2664896 data_alloc: 218103808 data_used: 1871872
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:45.077680+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819a1a2000 session 0x55819d902d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 heartbeat osd_stat(store_statfs(0x4f62a0000/0x0/0x4ffc00000, data 0x13113ca/0x154e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819c2eb000 session 0x55819a1a0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173760512 unmapped: 29327360 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:46.077827+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 heartbeat osd_stat(store_statfs(0x4f62a0000/0x0/0x4ffc00000, data 0x13113ca/0x154e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173760512 unmapped: 29327360 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819c31b000 session 0x55819aef41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c75a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819c75a800 session 0x55819d61c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:47.078031+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173776896 unmapped: 29310976 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:48.078218+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819c22e800 session 0x55819f3d3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819a1a2000 session 0x55819d70cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173785088 unmapped: 29302784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:49.078460+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 heartbeat osd_stat(store_statfs(0x4f62a1000/0x0/0x4ffc00000, data 0x1311368/0x154d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 173785088 unmapped: 29302784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 ms_handle_reset con 0x55819c22e800 session 0x55819a133a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2661955 data_alloc: 218103808 data_used: 1875968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:50.078734+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 28237824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:51.078931+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.492691040s of 10.031978607s, submitted: 170
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 420 ms_handle_reset con 0x55819c2eb000 session 0x55819c188960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 28246016 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 420 ms_handle_reset con 0x55819c31b000 session 0x55819d61d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:52.079132+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 28246016 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:53.079374+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 28246016 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 420 heartbeat osd_stat(store_statfs(0x4f629d000/0x0/0x4ffc00000, data 0x1314866/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:54.079578+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 28246016 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2666717 data_alloc: 218103808 data_used: 1875968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:55.079779+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 28246016 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 420 heartbeat osd_stat(store_statfs(0x4f629d000/0x0/0x4ffc00000, data 0x1314866/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:56.080100+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 28237824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:57.080365+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 28237824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 420 heartbeat osd_stat(store_statfs(0x4f629d000/0x0/0x4ffc00000, data 0x1314866/0x154f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:58.080558+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 28237824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:07:59.080866+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 28237824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2666717 data_alloc: 218103808 data_used: 1875968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:00.081164+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 28237824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:01.081501+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f629b000/0x0/0x4ffc00000, data 0x13162c9/0x1552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:02.081893+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:03.082221+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:04.082532+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2669019 data_alloc: 218103808 data_used: 1875968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:05.082774+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f629b000/0x0/0x4ffc00000, data 0x13162c9/0x1552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:06.083058+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:07.083314+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f629b000/0x0/0x4ffc00000, data 0x13162c9/0x1552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c75a800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.246875763s of 16.451944351s, submitted: 58
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 421 ms_handle_reset con 0x55819c75a800 session 0x55819a59e780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:08.083509+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:09.083760+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174809088 unmapped: 28278784 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a1a2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 421 ms_handle_reset con 0x55819c22e800 session 0x55819d82c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2676900 data_alloc: 218103808 data_used: 1875968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:10.084133+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174850048 unmapped: 28237824 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 422 ms_handle_reset con 0x55819c31b000 session 0x55819c1ced20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:11.084339+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 handle_osd_map epochs [423,423], i have 423, src has [1,423]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 ms_handle_reset con 0x55819c2eb000 session 0x55819c043c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df70800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 ms_handle_reset con 0x55819df70800 session 0x55819c188b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 ms_handle_reset con 0x55819a1a2000 session 0x55819df78f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174882816 unmapped: 28205056 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:12.084587+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 heartbeat osd_stat(store_statfs(0x4f6290000/0x0/0x4ffc00000, data 0x1319feb/0x155d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174882816 unmapped: 28205056 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:13.084765+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df70800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 ms_handle_reset con 0x55819df70800 session 0x55819dd34f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 ms_handle_reset con 0x55819c31b000 session 0x55819c0574a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 ms_handle_reset con 0x55819c22e800 session 0x55819b3c54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 174882816 unmapped: 28205056 heap: 203087872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:14.084935+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 213737472 unmapped: 31334400 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3213758 data_alloc: 218103808 data_used: 1900544
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:15.085147+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 67796992 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:16.085373+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180584448 unmapped: 64487424 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 heartbeat osd_stat(store_statfs(0x4f1691000/0x0/0x4ffc00000, data 0x5f19feb/0x615d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:17.085575+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180690944 unmapped: 64380928 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.696062565s of 10.041905403s, submitted: 176
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:18.085779+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 heartbeat osd_stat(store_statfs(0x4ed27d000/0x0/0x4ffc00000, data 0x9f1bb68/0xa160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [1,0,1,0,0,0,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181854208 unmapped: 63217664 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:19.086095+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819c827c00 session 0x55819e732b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 58818560 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 heartbeat osd_stat(store_statfs(0x4e867e000/0x0/0x4ffc00000, data 0xeb1bb68/0xed60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:20.086297+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4472348 data_alloc: 218103808 data_used: 1908736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 62717952 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:21.086444+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186810368 unmapped: 58261504 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:22.086748+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 182804480 unmapped: 62267392 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:23.086918+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 61947904 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:24.087064+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819ccd9000 session 0x55819d903680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819c22e800 session 0x55819a59f860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819c2eb000 session 0x55819c17de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180289536 unmapped: 64782336 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 heartbeat osd_stat(store_statfs(0x4de67e000/0x0/0x4ffc00000, data 0x18b1bb68/0x18d60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
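Unlike the earlier near-identical heartbeats, the heartbeats from epoch 423 onward show object data usage climbing fast (and available space falling), and two of them carry non-empty op histograms; together that is consistent with a client write burst starting, which would also explain the busier `_kv_sync_thread` report above. The growth per heartbeat falls out of the hex `data` fields:

```python
# Successive "data used" values (hex) from the epoch-423/424 heartbeats above.
samples = ["0x1319feb", "0x5f19feb", "0x9f1bb68", "0xeb1bb68", "0x18b1bb68"]
used = [int(s, 16) for s in samples]
for before, after in zip(used, used[1:]):
    print(f"+{(after - before) / 2**20:.0f} MiB")
# -> +76, +64, +76, +160 MiB between heartbeats: a write burst is in progress.
```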
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:25.087234+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5327717 data_alloc: 218103808 data_used: 1912832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180289536 unmapped: 64782336 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819c827c00 session 0x55819b2103c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819ccd9000 session 0x55819c17d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df70800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819df70800 session 0x55819dd35e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819c22e800 session 0x55819aef4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:26.087364+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819c2eb000 session 0x55819d61d0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 ms_handle_reset con 0x55819c827c00 session 0x55819c2b5680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819ccd9000 session 0x55819dd35860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c884400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181329920 unmapped: 63741952 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819c884400 session 0x55819c035680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819c22e800 session 0x55819e7334a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819c31b000 session 0x55819b3d9a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819c2eb000 session 0x55819a132b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:27.087511+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181387264 unmapped: 63684608 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:28.087654+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.160766602s of 10.667559624s, submitted: 302
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819c827c00 session 0x55819a6192c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181395456 unmapped: 63676416 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:29.087792+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccd9000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819ccd9000 session 0x5581a0d392c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181403648 unmapped: 63668224 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 heartbeat osd_stat(store_statfs(0x4de4de000/0x0/0x4ffc00000, data 0x18cba1f5/0x18eff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 ms_handle_reset con 0x55819c22e800 session 0x55819dace5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:30.087955+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5355369 data_alloc: 218103808 data_used: 1908736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 426 ms_handle_reset con 0x55819c31b000 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 426 ms_handle_reset con 0x55819c827c00 session 0x55819b3d8b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181346304 unmapped: 63725568 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 426 ms_handle_reset con 0x55819a172400 session 0x55819a1a1860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:31.088090+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c771800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd44800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 427 ms_handle_reset con 0x55819cd44800 session 0x55819c05a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180494336 unmapped: 64577536 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:32.088225+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 428 ms_handle_reset con 0x55819c2eb000 session 0x55819b3c12c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180527104 unmapped: 64544768 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:33.088350+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 428 heartbeat osd_stat(store_statfs(0x4de4a9000/0x0/0x4ffc00000, data 0x18ce93b4/0x18f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180527104 unmapped: 64544768 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:34.088465+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 429 ms_handle_reset con 0x55819c22e800 session 0x55819b2103c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180535296 unmapped: 64536576 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:35.088615+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5384607 data_alloc: 218103808 data_used: 3346432
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 ms_handle_reset con 0x55819c31b000 session 0x55819a133a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 ms_handle_reset con 0x55819a172400 session 0x55819b3c43c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 64528384 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:36.088809+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180543488 unmapped: 64528384 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:37.088997+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 ms_handle_reset con 0x55819d448c00 session 0x55819d70cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 201646080 unmapped: 43425792 heap: 245071872 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:38.089149+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 heartbeat osd_stat(store_statfs(0x4de4a1000/0x0/0x4ffc00000, data 0x18cecb72/0x18f3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,7,4])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.120478630s of 10.069161415s, submitted: 38
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 80887808 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:39.089281+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 181174272 unmapped: 80699392 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:40.089448+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6033269 data_alloc: 218103808 data_used: 3346432
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185622528 unmapped: 76251136 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:41.089563+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 185753600 unmapped: 76120064 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:42.089667+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 190251008 unmapped: 71622656 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:43.089879+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 194732032 unmapped: 67141632 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 heartbeat osd_stat(store_statfs(0x4d3ca3000/0x0/0x4ffc00000, data 0x234ecb72/0x2373b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,1,1,3,0,0,13,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:44.090122+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 198811648 unmapped: 63062016 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:45.090263+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7079725 data_alloc: 218103808 data_used: 4186112
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 heartbeat osd_stat(store_statfs(0x4cf5da000/0x0/0x4ffc00000, data 0x27bb5b72/0x27e04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2,2])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 204201984 unmapped: 57671680 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:46.090387+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 190439424 unmapped: 71434240 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:47.090844+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186982400 unmapped: 74891264 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:48.091048+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.814848185s of 10.014036179s, submitted: 174
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 191520768 unmapped: 70352896 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 ms_handle_reset con 0x55819d448c00 session 0x55819f3d3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 ms_handle_reset con 0x55819c827c00 session 0x55819a76b2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:49.091210+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187383808 unmapped: 74489856 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 431 heartbeat osd_stat(store_statfs(0x4c7555000/0x0/0x4ffc00000, data 0x2fc3ab72/0x2fe89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 431 ms_handle_reset con 0x55819a172400 session 0x55819b302f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:50.091401+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7981417 data_alloc: 218103808 data_used: 4595712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187408384 unmapped: 74465280 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 431 heartbeat osd_stat(store_statfs(0x4c7552000/0x0/0x4ffc00000, data 0x2fc3c6e1/0x2fe8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:51.091554+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187408384 unmapped: 74465280 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 ms_handle_reset con 0x55819c22e800 session 0x55819aef52c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:52.091712+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 74448896 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:53.091861+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187564032 unmapped: 74309632 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 heartbeat osd_stat(store_statfs(0x4c752d000/0x0/0x4ffc00000, data 0x2fc602b2/0x2feb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:54.092043+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 heartbeat osd_stat(store_statfs(0x4c752d000/0x0/0x4ffc00000, data 0x2fc602b2/0x2feb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 74301440 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:55.092171+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7985423 data_alloc: 218103808 data_used: 4595712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 74301440 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:56.092279+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 74301440 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c2eb000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 ms_handle_reset con 0x55819c2eb000 session 0x55819b3c3680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:57.092444+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 74301440 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:58.092607+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 74301440 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:08:59.092774+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 ms_handle_reset con 0x55819a172400 session 0x55819d903c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.430104256s of 11.012052536s, submitted: 47
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 ms_handle_reset con 0x55819c22e800 session 0x55819c162000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 heartbeat osd_stat(store_statfs(0x4c752c000/0x0/0x4ffc00000, data 0x2fc602c2/0x2feb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c827c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187998208 unmapped: 73875456 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 ms_handle_reset con 0x55819c827c00 session 0x55819a76a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d448c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:00.092950+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 ms_handle_reset con 0x55819d448c00 session 0x55819a0b5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8005674 data_alloc: 218103808 data_used: 4595712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 heartbeat osd_stat(store_statfs(0x4c73ba000/0x0/0x4ffc00000, data 0x2fdd32c2/0x30024000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 74301440 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:01.093228+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187572224 unmapped: 74301440 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819df70c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 433 ms_handle_reset con 0x55819df70c00 session 0x55819d61c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:02.093387+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187613184 unmapped: 74260480 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 434 heartbeat osd_stat(store_statfs(0x4c73b6000/0x0/0x4ffc00000, data 0x2fdd4d87/0x30028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:03.093542+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 434 ms_handle_reset con 0x55819a172400 session 0x55819aef41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188522496 unmapped: 73351168 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c22e800 session 0x5581a0d383c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c31b000 session 0x55819c162d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:04.093696+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188489728 unmapped: 73383936 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:05.093842+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8066740 data_alloc: 218103808 data_used: 4628480
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 73375744 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:06.093996+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 73375744 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:07.094164+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6e67000/0x0/0x4ffc00000, data 0x3031f545/0x30577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 73375744 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:08.094353+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 73375744 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:09.094524+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c742400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6e67000/0x0/0x4ffc00000, data 0x3031f545/0x30577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 73375744 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6e67000/0x0/0x4ffc00000, data 0x3031f545/0x30577000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:10.094686+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8066740 data_alloc: 218103808 data_used: 4628480
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 73367552 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819cd48000 session 0x55819c018d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c767800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c767800 session 0x55819c018f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819a172400 session 0x55819d82d0e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:11.094805+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c22e800 session 0x55819d82c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.005547523s of 11.725274086s, submitted: 87
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c31b000 session 0x55819d82c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819cd48000 session 0x55819c1892c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188538880 unmapped: 73334784 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:12.095071+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188538880 unmapped: 73334784 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:13.095276+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c742400 session 0x55819d61cb40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188538880 unmapped: 73334784 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:14.095455+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6793000/0x0/0x4ffc00000, data 0x309f25a7/0x30c4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c771800 session 0x55819b3c2960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c768000 session 0x55819b210b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188407808 unmapped: 73465856 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:15.095597+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819a172400 session 0x55819a59f860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8019811 data_alloc: 218103808 data_used: 2109440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c747f000/0x0/0x4ffc00000, data 0x2fd07597/0x2ff5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c764c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186834944 unmapped: 75038720 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c22e800 session 0x55819b2101e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:16.095756+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186834944 unmapped: 75038720 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:17.095885+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186834944 unmapped: 75038720 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:18.096158+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186843136 unmapped: 75030528 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:19.096466+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187342848 unmapped: 74530816 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c316000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c316000 session 0x55819d82c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:20.096660+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819a172400 session 0x5581a0d394a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8053333 data_alloc: 218103808 data_used: 7688192
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:21.096838+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c22e800 session 0x55819b3c3860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c74aa000/0x0/0x4ffc00000, data 0x2fcdd535/0x2ff34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:22.097008+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:23.097180+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:24.097395+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:25.097911+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8053333 data_alloc: 218103808 data_used: 7688192
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:26.099192+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:27.100099+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c74aa000/0x0/0x4ffc00000, data 0x2fcdd535/0x2ff34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:28.100790+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 74522624 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:29.101265+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c74aa000/0x0/0x4ffc00000, data 0x2fcdd535/0x2ff34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.970823288s of 18.435731888s, submitted: 89
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 186867712 unmapped: 75005952 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:30.101480+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8135927 data_alloc: 218103808 data_used: 8417280
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 188956672 unmapped: 72916992 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:31.101913+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 72851456 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:32.102123+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 72851456 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:33.102475+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6d37000/0x0/0x4ffc00000, data 0x305a5535/0x306a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 72851456 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:34.102657+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 72851456 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:35.102834+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8145653 data_alloc: 218103808 data_used: 8912896
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c768000 session 0x55819c17d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 72851456 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:36.103039+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6d36000/0x0/0x4ffc00000, data 0x305a5545/0x306a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 72810496 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:37.103303+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 72810496 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:38.103448+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 72810496 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:39.103655+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 72810496 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:40.103875+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8144037 data_alloc: 218103808 data_used: 8912896
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 72810496 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:41.104029+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 72810496 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:42.104189+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6d3a000/0x0/0x4ffc00000, data 0x305a5545/0x306a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 72802304 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:43.104352+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 72802304 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:44.104587+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 72802304 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:45.104787+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8144037 data_alloc: 218103808 data_used: 8912896
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c6d3a000/0x0/0x4ffc00000, data 0x305a5545/0x306a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 72802304 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:46.105014+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 72802304 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c771800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c771800 session 0x55819c05a5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819ccbec00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:47.105249+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.193216324s of 17.546350479s, submitted: 65
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819ccbec00 session 0x55819df79c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 193634304 unmapped: 68239360 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:48.105401+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819a172400 session 0x55819b3c1e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c22e800 session 0x55819cd7de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189628416 unmapped: 72245248 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:49.105608+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189628416 unmapped: 72245248 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:50.105872+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8371565 data_alloc: 218103808 data_used: 8912896
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c4cfb000/0x0/0x4ffc00000, data 0x325e5545/0x326e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189636608 unmapped: 72237056 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:51.106040+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c31b000 session 0x55819c17de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819cd48000 session 0x55819a132000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189636608 unmapped: 72237056 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c764c00 session 0x55819c189a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:52.106171+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c764c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c764c00 session 0x55819b3c0f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187990016 unmapped: 73883648 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:53.106420+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187990016 unmapped: 73883648 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:54.106604+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 heartbeat osd_stat(store_statfs(0x4c5373000/0x0/0x4ffc00000, data 0x31eed4e3/0x31fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187990016 unmapped: 73883648 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:55.106759+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8264365 data_alloc: 218103808 data_used: 3018752
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187990016 unmapped: 73883648 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:56.107024+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 187990016 unmapped: 73883648 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:57.108987+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a172400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.335269928s of 10.154501915s, submitted: 38
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 ms_handle_reset con 0x55819c22e800 session 0x55819c1cef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189644800 unmapped: 72228864 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:58.109122+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 436 ms_handle_reset con 0x55819c31b000 session 0x5581a0d39860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 436 heartbeat osd_stat(store_statfs(0x4c4e19000/0x0/0x4ffc00000, data 0x32628060/0x325c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 437 handle_osd_map epochs [437,437], i have 437, src has [1,437]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 437 ms_handle_reset con 0x55819cd48000 session 0x55819c162960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 437 ms_handle_reset con 0x55819a172400 session 0x55819df78960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189464576 unmapped: 72409088 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:09:59.109506+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 437 ms_handle_reset con 0x55819c22e800 session 0x55819c1ced20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 437 heartbeat osd_stat(store_statfs(0x4c4e15000/0x0/0x4ffc00000, data 0x32629bdd/0x325c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 72089600 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c764c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:00.109699+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8346353 data_alloc: 218103808 data_used: 3043328
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 438 ms_handle_reset con 0x55819c764c00 session 0x55819acdde00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 72073216 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:01.110417+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 438 heartbeat osd_stat(store_statfs(0x4c4ded000/0x0/0x4ffc00000, data 0x3264f7ae/0x325ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189800448 unmapped: 72073216 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:02.110752+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:03.110939+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189734912 unmapped: 72138752 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:04.111086+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189767680 unmapped: 72105984 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 439 ms_handle_reset con 0x55819c768000 session 0x55819c0183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:05.111282+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189767680 unmapped: 72105984 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 439 heartbeat osd_stat(store_statfs(0x4c583a000/0x0/0x4ffc00000, data 0x319d42bb/0x31ad4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8277542 data_alloc: 218103808 data_used: 7032832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c771800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 439 ms_handle_reset con 0x55819c771800 session 0x55819c1cf860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:06.111472+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189784064 unmapped: 72089600 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819d44b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 440 ms_handle_reset con 0x55819d44b400 session 0x55819b3c0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:07.111663+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 72056832 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c764c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:08.111905+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.206139565s of 10.809267044s, submitted: 187
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 72056832 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 440 ms_handle_reset con 0x55819c764c00 session 0x55819c042d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 440 ms_handle_reset con 0x55819c22e800 session 0x55819aef41e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c771800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:09.112057+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 72056832 heap: 261873664 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:10.112219+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 249102336 unmapped: 29564928 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 440 handle_osd_map epochs [440,441], i have 440, src has [1,441]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8823552 data_alloc: 218103808 data_used: 7053312
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 heartbeat osd_stat(store_statfs(0x4c1900000/0x0/0x4ffc00000, data 0x359d7939/0x35add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,0,0,0,1,0,1,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:11.112353+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 194756608 unmapped: 83910656 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:12.112516+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 200441856 unmapped: 78225408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:13.112628+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 200622080 unmapped: 78045184 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:14.112750+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 193683456 unmapped: 84983808 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:15.112918+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 217636864 unmapped: 61030400 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9926864 data_alloc: 218103808 data_used: 7802880
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 heartbeat osd_stat(store_statfs(0x4b7901000/0x0/0x4ffc00000, data 0x3f9d7939/0x3fadd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:16.113128+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 205414400 unmapped: 73252864 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 heartbeat osd_stat(store_statfs(0x4b5d01000/0x0/0x4ffc00000, data 0x415d7939/0x416dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [0,0,0,0,0,0,1,3])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:17.113243+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202596352 unmapped: 76070912 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:18.113391+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.695312977s of 10.004959106s, submitted: 357
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 71442432 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:19.113718+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208740352 unmapped: 69926912 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 ms_handle_reset con 0x55819c771800 session 0x55819dace3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 ms_handle_reset con 0x55819c768000 session 0x55819a0b54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c835000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 ms_handle_reset con 0x55819c835000 session 0x55819d61cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c764c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 ms_handle_reset con 0x55819c22e800 session 0x55819af52960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:20.113903+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203661312 unmapped: 75005952 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 ms_handle_reset con 0x55819c764c00 session 0x5581a0d390e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8469800 data_alloc: 218103808 data_used: 8278016
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:21.189139+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 ms_handle_reset con 0x55819c768000 session 0x55819d82d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 204693504 unmapped: 73973760 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c771800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 handle_osd_map epochs [441,442], i have 441, src has [1,442]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 441 handle_osd_map epochs [442,442], i have 442, src has [1,442]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 ms_handle_reset con 0x55819c771800 session 0x55819b3d8960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 heartbeat osd_stat(store_statfs(0x4c17fe000/0x0/0x4ffc00000, data 0x322db8c7/0x323df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:22.189292+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 205758464 unmapped: 72908800 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 heartbeat osd_stat(store_statfs(0x4c17fe000/0x0/0x4ffc00000, data 0x322db8c7/0x323df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 ms_handle_reset con 0x55819c31b000 session 0x55819dd34960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 ms_handle_reset con 0x55819cd48000 session 0x55819b2103c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 ms_handle_reset con 0x55819c745000 session 0x55819c0574a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 ms_handle_reset con 0x55819c22e800 session 0x55819d61d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 ms_handle_reset con 0x55819c31b000 session 0x55819a0b54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:23.189422+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 205807616 unmapped: 72859648 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 heartbeat osd_stat(store_statfs(0x4c140f000/0x0/0x4ffc00000, data 0x322b9488/0x323bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c764c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:24.189620+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 205807616 unmapped: 72859648 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 443 heartbeat osd_stat(store_statfs(0x4c4c12000/0x0/0x4ffc00000, data 0x322b9426/0x323bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:25.189984+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 72851456 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 444 ms_handle_reset con 0x55819c764c00 session 0x55819cd592c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8459712 data_alloc: 218103808 data_used: 8183808
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 444 ms_handle_reset con 0x55819c22e800 session 0x55819d61da40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:26.190177+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203841536 unmapped: 74825728 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:27.190337+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203841536 unmapped: 74825728 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 445 ms_handle_reset con 0x55819c31b000 session 0x55819c1cfa40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:28.190520+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203890688 unmapped: 74776576 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.958477020s of 10.719792366s, submitted: 299
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 446 ms_handle_reset con 0x55819cd48000 session 0x55819c042780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:29.190687+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 73719808 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 447 ms_handle_reset con 0x55819c745000 session 0x55819c162960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 447 ms_handle_reset con 0x55819c768000 session 0x55819d903680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:30.191093+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 447 handle_osd_map epochs [447,448], i have 447, src has [1,448]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202465280 unmapped: 76201984 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5991418 data_alloc: 218103808 data_used: 7143424
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 448 heartbeat osd_stat(store_statfs(0x4db8e2000/0x0/0x4ffc00000, data 0x1b4888ce/0x1b6ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:31.191261+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202481664 unmapped: 76185600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 449 ms_handle_reset con 0x55819c22e800 session 0x55819d6e50e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:32.191402+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 449 ms_handle_reset con 0x55819c31b000 session 0x5581a0d38b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:33.192032+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:34.194009+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:35.194170+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 449 heartbeat osd_stat(store_statfs(0x4f14de000/0x0/0x4ffc00000, data 0x3c8a44d/0x3eec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460608 data_alloc: 218103808 data_used: 7147520
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:36.194371+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:37.194807+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:38.195486+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 449 heartbeat osd_stat(store_statfs(0x4f14de000/0x0/0x4ffc00000, data 0x3c8a44d/0x3eec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:39.195925+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203137024 unmapped: 75530240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:40.196572+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.621573448s of 11.386336327s, submitted: 280
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 75481088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3462062 data_alloc: 218103808 data_used: 7147520
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 450 heartbeat osd_stat(store_statfs(0x4f14de000/0x0/0x4ffc00000, data 0x3c8a44d/0x3eec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:41.196764+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 75481088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 450 ms_handle_reset con 0x55819c745000 session 0x55819d9034a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:42.196988+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 75481088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:43.197122+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203186176 unmapped: 75481088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 450 ms_handle_reset con 0x55819cd48000 session 0x55819df783c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c771800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 450 ms_handle_reset con 0x55819c771800 session 0x55819d70cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:44.197257+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 450 ms_handle_reset con 0x55819c22e800 session 0x55819d902f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203350016 unmapped: 75317248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:45.197490+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203358208 unmapped: 75309056 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 451 heartbeat osd_stat(store_statfs(0x4f30b5000/0x0/0x4ffc00000, data 0x3cb1a80/0x3f18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [0,1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 451 ms_handle_reset con 0x55819cd48000 session 0x55819d82c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475545 data_alloc: 218103808 data_used: 7163904
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:46.197614+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203374592 unmapped: 75292672 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 451 heartbeat osd_stat(store_statfs(0x4f30b4000/0x0/0x4ffc00000, data 0x3cb1a90/0x3f19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 451 handle_osd_map epochs [452,452], i have 452, src has [1,452]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 452 ms_handle_reset con 0x55819c763c00 session 0x55819b3c43c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:47.197755+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203399168 unmapped: 75268096 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:48.198027+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd44400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203431936 unmapped: 75235328 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 452 ms_handle_reset con 0x55819c22f400 session 0x55819d70d680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 452 ms_handle_reset con 0x55819cd44400 session 0x55819a618b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:49.198171+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203431936 unmapped: 75235328 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 452 handle_osd_map epochs [452,453], i have 452, src has [1,453]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 handle_osd_map epochs [453,453], i have 453, src has [1,453]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:50.198346+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.906236649s of 10.176679611s, submitted: 51
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 ms_handle_reset con 0x55819c22e800 session 0x55819a6183c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203440128 unmapped: 75227136 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 heartbeat osd_stat(store_statfs(0x4f30aa000/0x0/0x4ffc00000, data 0x3cb52c0/0x3f23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490861 data_alloc: 218103808 data_used: 7200768
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:51.198493+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 ms_handle_reset con 0x55819c22f400 session 0x55819b3c5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 heartbeat osd_stat(store_statfs(0x4f30aa000/0x0/0x4ffc00000, data 0x3cb52c0/0x3f23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203374592 unmapped: 75292672 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:52.198657+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203374592 unmapped: 75292672 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 ms_handle_reset con 0x55819c763c00 session 0x55819b3c34a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:53.199139+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203382784 unmapped: 75284480 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 ms_handle_reset con 0x55819cd48000 session 0x55819a132000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:54.199443+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c758c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 ms_handle_reset con 0x55819c758c00 session 0x55819c1cef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202366976 unmapped: 76300288 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 ms_handle_reset con 0x55819c22e800 session 0x55819df79e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:55.199742+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202375168 unmapped: 76292096 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3490897 data_alloc: 218103808 data_used: 7208960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 ms_handle_reset con 0x55819c763c00 session 0x55819e732000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:56.200093+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202383360 unmapped: 76283904 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 454 ms_handle_reset con 0x55819cd48000 session 0x55819e733860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 454 ms_handle_reset con 0x55819c22f400 session 0x55819aef4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:57.200294+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 454 heartbeat osd_stat(store_statfs(0x4f30ab000/0x0/0x4ffc00000, data 0x3cb52c0/0x3f23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202391552 unmapped: 76275712 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819b37c400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 454 ms_handle_reset con 0x55819b37c400 session 0x55819b2114a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:58.200435+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 454 ms_handle_reset con 0x55819c22f400 session 0x55819e7325a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 202432512 unmapped: 76234752 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 455 ms_handle_reset con 0x55819c763c00 session 0x55819e7330e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 455 ms_handle_reset con 0x55819c22e800 session 0x55819b2cda40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:10:59.200596+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203030528 unmapped: 75636736 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:00.200791+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 456 ms_handle_reset con 0x55819cd48000 session 0x55819d70c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c76e400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.872498512s of 10.000863075s, submitted: 129
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203202560 unmapped: 75464704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 456 ms_handle_reset con 0x55819c76e400 session 0x55819c05a960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548089 data_alloc: 218103808 data_used: 9342976
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:01.201049+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203202560 unmapped: 75464704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 456 ms_handle_reset con 0x55819c22e800 session 0x55819e7321e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 456 heartbeat osd_stat(store_statfs(0x4f2ca4000/0x0/0x4ffc00000, data 0x40ba40d/0x4329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:02.201316+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203202560 unmapped: 75464704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:03.201480+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203202560 unmapped: 75464704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 457 ms_handle_reset con 0x55819c22f400 session 0x5581a0d39e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:04.201672+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 203202560 unmapped: 75464704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:05.201862+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 457 ms_handle_reset con 0x55819c763c00 session 0x55819d82cf00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206422016 unmapped: 72245248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3568328 data_alloc: 234881024 data_used: 13545472
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:06.202029+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c76e400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206422016 unmapped: 72245248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 457 heartbeat osd_stat(store_statfs(0x4f2c9e000/0x0/0x4ffc00000, data 0x40bc07a/0x432f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 458 ms_handle_reset con 0x55819c76e400 session 0x55819a59f4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:07.202148+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206438400 unmapped: 72228864 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 458 ms_handle_reset con 0x55819a18dc00 session 0x55819cd7c1e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 458 ms_handle_reset con 0x55819cd48000 session 0x5581a0d38780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:08.202276+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206446592 unmapped: 72220672 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 458 ms_handle_reset con 0x55819a18dc00 session 0x55819c0350e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:09.202442+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 458 ms_handle_reset con 0x55819c22e800 session 0x55819b3c0b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 458 heartbeat osd_stat(store_statfs(0x4f2c9b000/0x0/0x4ffc00000, data 0x40bdc75/0x4333000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206446592 unmapped: 72220672 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:10.202747+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.810508728s of 10.004974365s, submitted: 67
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 459 ms_handle_reset con 0x55819c22f400 session 0x55819e733a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 72212480 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 459 ms_handle_reset con 0x55819c763c00 session 0x55819d61c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578002 data_alloc: 234881024 data_used: 13570048
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:11.203063+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 72212480 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 459 ms_handle_reset con 0x55819c22e800 session 0x55819d903a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:12.203276+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206454784 unmapped: 72212480 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 460 ms_handle_reset con 0x55819c22f400 session 0x55819dd345a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 460 ms_handle_reset con 0x55819a18dc00 session 0x55819c043c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:13.203538+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207511552 unmapped: 71155712 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 460 ms_handle_reset con 0x55819cd48000 session 0x55819b3c54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c76e400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c765400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:14.203686+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 460 ms_handle_reset con 0x55819c765400 session 0x55819c17c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c765400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 71131136 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 461 ms_handle_reset con 0x55819c765400 session 0x55819c057680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 461 heartbeat osd_stat(store_statfs(0x4f2c95000/0x0/0x4ffc00000, data 0x40c1247/0x4338000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 461 ms_handle_reset con 0x55819c76e400 session 0x55819c189a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:15.203845+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 71057408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581412 data_alloc: 234881024 data_used: 13574144
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:16.204037+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207609856 unmapped: 71057408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 461 ms_handle_reset con 0x55819a18dc00 session 0x55819dd354a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 461 ms_handle_reset con 0x55819c22e800 session 0x55819d82c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:17.204183+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207650816 unmapped: 71016448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22f400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 461 ms_handle_reset con 0x55819c22f400 session 0x55819dace960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:18.204351+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207650816 unmapped: 71016448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:19.204539+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207650816 unmapped: 71016448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 461 handle_osd_map epochs [461,462], i have 461, src has [1,462]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 462 ms_handle_reset con 0x55819a18dc00 session 0x55819c17d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:20.204816+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207650816 unmapped: 71016448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f2c91000/0x0/0x4ffc00000, data 0x40c493f/0x433c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.452528954s of 10.317635536s, submitted: 154
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590886 data_alloc: 234881024 data_used: 13582336
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:21.205022+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c765400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819c765400 session 0x55819c2b5860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819c22e800 session 0x5581a0d381e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207650816 unmapped: 71016448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 heartbeat osd_stat(store_statfs(0x4f2c8d000/0x0/0x4ffc00000, data 0x40c6404/0x4340000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c76e400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819c76e400 session 0x55819af53860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd48000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819cd48000 session 0x55819d70d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:22.205217+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207667200 unmapped: 71000064 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:23.205376+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819a18dc00 session 0x55819c05b860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c765400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207839232 unmapped: 70828032 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819c765400 session 0x55819df79c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c76e400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819c31b000 session 0x55819c1623c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:24.205586+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 ms_handle_reset con 0x55819c745000 session 0x55819b3c0f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 464 ms_handle_reset con 0x55819c76e400 session 0x55819dacf680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207872000 unmapped: 70795264 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 464 heartbeat osd_stat(store_statfs(0x4f2c8a000/0x0/0x4ffc00000, data 0x40c7fd5/0x4343000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [1])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 464 ms_handle_reset con 0x55819a18dc00 session 0x55819c056f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 464 ms_handle_reset con 0x55819c22e800 session 0x55819ace3c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:25.205737+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207888384 unmapped: 70778880 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590392 data_alloc: 234881024 data_used: 14131200
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 464 ms_handle_reset con 0x55819c31b000 session 0x55819dd345a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:26.205870+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207888384 unmapped: 70778880 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:27.206071+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207888384 unmapped: 70778880 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:28.206309+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 464 heartbeat osd_stat(store_statfs(0x4f2caf000/0x0/0x4ffc00000, data 0x40a3f50/0x431d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 ms_handle_reset con 0x55819c745000 session 0x55819d903a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 70746112 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c765400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:29.206532+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 ms_handle_reset con 0x55819c765400 session 0x55819e7330e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 ms_handle_reset con 0x55819a18dc00 session 0x55819aef4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f2cb1000/0x0/0x4ffc00000, data 0x40a3f50/0x431d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207921152 unmapped: 70746112 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 ms_handle_reset con 0x55819c22e800 session 0x55819dd341e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 ms_handle_reset con 0x55819c31b000 session 0x55819d61d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:30.206862+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f30af000/0x0/0x4ffc00000, data 0x3ca5b2b/0x3f1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 ms_handle_reset con 0x55819c745000 session 0x55819cd585a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c317000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206577664 unmapped: 72089600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.111997604s of 10.005883217s, submitted: 159
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 ms_handle_reset con 0x55819c317000 session 0x55819c163a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3561533 data_alloc: 234881024 data_used: 13127680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:31.207118+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206610432 unmapped: 72056832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 ms_handle_reset con 0x55819a18dc00 session 0x55819d902b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 ms_handle_reset con 0x55819c22e800 session 0x55819d70c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:32.207264+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 ms_handle_reset con 0x55819c31b000 session 0x55819e732780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:33.207514+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:34.208086+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:35.208831+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f30ad000/0x0/0x4ffc00000, data 0x3ca74ca/0x3f20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3562489 data_alloc: 234881024 data_used: 13127680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:36.209492+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f30ad000/0x0/0x4ffc00000, data 0x3ca74ca/0x3f20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:37.209660+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f30ad000/0x0/0x4ffc00000, data 0x3ca74ca/0x3f20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f30ad000/0x0/0x4ffc00000, data 0x3ca74ca/0x3f20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:38.210081+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:39.210648+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:40.211496+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:41.212014+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.393603325s of 10.484591484s, submitted: 28
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563963 data_alloc: 234881024 data_used: 13127680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 ms_handle_reset con 0x55819c745000 session 0x55819ace2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 heartbeat osd_stat(store_statfs(0x4f30ad000/0x0/0x4ffc00000, data 0x3ca74ca/0x3f20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:42.212293+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206618624 unmapped: 72048640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:43.212625+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819af6b800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 ms_handle_reset con 0x55819af6b800 session 0x55819a76b2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206635008 unmapped: 72032256 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:44.212793+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 466 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 467 handle_osd_map epochs [467,467], i have 467, src has [1,467]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 467 ms_handle_reset con 0x55819c22e800 session 0x55819c018f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206643200 unmapped: 72024064 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:45.213176+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 467 handle_osd_map epochs [467,468], i have 467, src has [1,468]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 468 ms_handle_reset con 0x55819a18dc00 session 0x55819b3c2780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 468 ms_handle_reset con 0x55819c768400 session 0x55819af532c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206651392 unmapped: 72015872 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:46.213418+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573599 data_alloc: 234881024 data_used: 13139968
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f30a5000/0x0/0x4ffc00000, data 0x3caac36/0x3f28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206659584 unmapped: 72007680 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:47.213567+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 469 ms_handle_reset con 0x55819c31b000 session 0x5581a0d394a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206675968 unmapped: 71991296 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:48.213853+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206675968 unmapped: 71991296 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 469 heartbeat osd_stat(store_statfs(0x4f30a3000/0x0/0x4ffc00000, data 0x3cac795/0x3f29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:49.214128+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 470 ms_handle_reset con 0x55819c745000 session 0x55819e7323c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 470 ms_handle_reset con 0x55819a18dc00 session 0x55819af530e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 71958528 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:50.214388+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 71950336 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:51.214609+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3577799 data_alloc: 234881024 data_used: 13148160
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 71950336 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:52.214812+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 71950336 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:53.215110+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 71950336 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:54.215305+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 71950336 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 470 heartbeat osd_stat(store_statfs(0x4f30a1000/0x0/0x4ffc00000, data 0x3cae382/0x3f2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:55.215603+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 71950336 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.626390457s of 14.917722702s, submitted: 83
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 471 ms_handle_reset con 0x55819c22e800 session 0x55819c2b5c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:56.215801+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3582601 data_alloc: 234881024 data_used: 13148160
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 71925760 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:57.215948+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 71925760 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:58.216101+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f309d000/0x0/0x4ffc00000, data 0x3cafe11/0x3f30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 472 ms_handle_reset con 0x55819c745000 session 0x55819c2b4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206774272 unmapped: 71892992 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f309d000/0x0/0x4ffc00000, data 0x3cafe11/0x3f30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:11:59.216465+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 473 ms_handle_reset con 0x55819c31b000 session 0x55819dacf2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206774272 unmapped: 71892992 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:00.216761+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c768400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206831616 unmapped: 71835648 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:01.217043+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3591827 data_alloc: 234881024 data_used: 13160448
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 475 ms_handle_reset con 0x55819c768400 session 0x55819c056b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206864384 unmapped: 71802880 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:02.217192+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 475 ms_handle_reset con 0x55819a18dc00 session 0x55819c034b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 476 ms_handle_reset con 0x55819c22e800 session 0x55819c1ced20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206872576 unmapped: 71794688 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:03.217328+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206872576 unmapped: 71794688 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:04.217576+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 476 heartbeat osd_stat(store_statfs(0x4f308d000/0x0/0x4ffc00000, data 0x3cb8754/0x3f3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206872576 unmapped: 71794688 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:05.217820+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 476 heartbeat osd_stat(store_statfs(0x4f308d000/0x0/0x4ffc00000, data 0x3cb8754/0x3f3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206872576 unmapped: 71794688 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:06.218033+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3598399 data_alloc: 234881024 data_used: 13160448
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206880768 unmapped: 71786496 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:07.218180+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206880768 unmapped: 71786496 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:08.218334+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206880768 unmapped: 71786496 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:09.218551+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206880768 unmapped: 71786496 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:10.218771+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206880768 unmapped: 71786496 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 476 heartbeat osd_stat(store_statfs(0x4f308d000/0x0/0x4ffc00000, data 0x3cb8754/0x3f3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 476 handle_osd_map epochs [477,477], i have 477, src has [1,477]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.826770782s of 15.001076698s, submitted: 80
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:11.219063+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3600029 data_alloc: 234881024 data_used: 13160448
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 71778304 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:12.219281+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 71778304 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:13.219536+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 477 ms_handle_reset con 0x55819c31b000 session 0x55819a1874a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 71778304 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:14.219748+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 71778304 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:15.219907+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 478 ms_handle_reset con 0x55819c745000 session 0x55819dacef00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206938112 unmapped: 71729152 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:16.220082+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3608580 data_alloc: 234881024 data_used: 13172736
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 478 ms_handle_reset con 0x55819bf7b400 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 479 ms_handle_reset con 0x55819a18dc00 session 0x55819d82d4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206954496 unmapped: 71712768 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 479 heartbeat osd_stat(store_statfs(0x4f3087000/0x0/0x4ffc00000, data 0x3cbbdde/0x3f46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:17.220274+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 479 ms_handle_reset con 0x55819bf7b400 session 0x55819d70c960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206987264 unmapped: 71680000 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 479 ms_handle_reset con 0x55819c22e800 session 0x55819a1a1680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:18.220427+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206987264 unmapped: 71680000 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 479 ms_handle_reset con 0x55819c31b000 session 0x55819af532c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:19.220589+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206987264 unmapped: 71680000 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:20.220802+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c745000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 479 heartbeat osd_stat(store_statfs(0x4f3086000/0x0/0x4ffc00000, data 0x3cbd99f/0x3f48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 206987264 unmapped: 71680000 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 479 handle_osd_map epochs [479,480], i have 479, src has [1,480]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 480 handle_osd_map epochs [480,480], i have 480, src has [1,480]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 480 ms_handle_reset con 0x55819c745000 session 0x55819c018f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:21.220929+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614706 data_alloc: 234881024 data_used: 13180928
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.110676765s of 10.364025116s, submitted: 114
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 480 ms_handle_reset con 0x55819a18dc00 session 0x55819d61d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207011840 unmapped: 71655424 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 480 handle_osd_map epochs [480,481], i have 480, src has [1,481]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 481 ms_handle_reset con 0x55819bf7b400 session 0x55819e7330e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:22.221107+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207044608 unmapped: 71622656 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 481 ms_handle_reset con 0x55819c22e800 session 0x55819b3c0f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:23.221266+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 481 ms_handle_reset con 0x55819c31b000 session 0x55819c05b860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207069184 unmapped: 71598080 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f307e000/0x0/0x4ffc00000, data 0x3cc1119/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:24.221473+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819fceb800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 481 ms_handle_reset con 0x55819fceb800 session 0x55819d6e4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207069184 unmapped: 71598080 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f307e000/0x0/0x4ffc00000, data 0x3cc1119/0x3f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:25.221636+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207093760 unmapped: 71573504 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:26.221861+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 483 ms_handle_reset con 0x55819a18dc00 session 0x55819c05a960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3624793 data_alloc: 234881024 data_used: 13180928
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 71548928 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 483 ms_handle_reset con 0x55819bf7b400 session 0x55819c05ab40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:27.222015+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 483 handle_osd_map epochs [483,484], i have 483, src has [1,484]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 484 ms_handle_reset con 0x55819c22e800 session 0x55819c189a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207134720 unmapped: 71532544 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f3078000/0x0/0x4ffc00000, data 0x3cc4731/0x3f54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:28.222173+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 484 ms_handle_reset con 0x55819c31b000 session 0x55819c0565a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207151104 unmapped: 71516160 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:29.222373+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 484 ms_handle_reset con 0x55819c22e400 session 0x55819d82cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207159296 unmapped: 71507968 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:30.222602+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 207183872 unmapped: 71483392 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:31.222817+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3631135 data_alloc: 234881024 data_used: 13193216
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208240640 unmapped: 70426624 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:32.223127+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.474407196s of 10.855543137s, submitted: 118
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 486 ms_handle_reset con 0x55819a18dc00 session 0x55819d82de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208257024 unmapped: 70410240 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:33.223297+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 486 heartbeat osd_stat(store_statfs(0x4f2c60000/0x0/0x4ffc00000, data 0x3cc98c8/0x3f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 486 ms_handle_reset con 0x55819bf7b400 session 0x55819dacfe00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208265216 unmapped: 70402048 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:34.223478+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 487 ms_handle_reset con 0x55819c22e800 session 0x55819d6e4780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 487 ms_handle_reset con 0x55819c31b000 session 0x55819df79680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c771c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208338944 unmapped: 70328320 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 487 ms_handle_reset con 0x55819c771c00 session 0x55819d70c5a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 113K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.75 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 33.49 MB, 0.06 MB/s
                                           Interval WAL: 10K writes, 4444 syncs, 2.41 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:35.223734+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208347136 unmapped: 70320128 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:36.223913+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 488 ms_handle_reset con 0x55819a18dc00 session 0x55819c2b54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641762 data_alloc: 234881024 data_used: 13213696
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208355328 unmapped: 70311936 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:37.224112+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 488 handle_osd_map epochs [488,489], i have 488, src has [1,489]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 489 handle_osd_map epochs [489,489], i have 489, src has [1,489]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 489 ms_handle_reset con 0x55819bf7b400 session 0x55819b3c3860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208379904 unmapped: 70287360 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:38.224242+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 489 ms_handle_reset con 0x55819c22e800 session 0x55819b3c2780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208412672 unmapped: 70254592 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:39.224415+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 489 heartbeat osd_stat(store_statfs(0x4f2c56000/0x0/0x4ffc00000, data 0x3cceac1/0x3f65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 490 ms_handle_reset con 0x55819c31b000 session 0x55819e733a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c316c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208429056 unmapped: 70238208 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 490 ms_handle_reset con 0x55819c316c00 session 0x55819dd341e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:40.224623+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 490 ms_handle_reset con 0x55819a18dc00 session 0x55819b3c2f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f2c55000/0x0/0x4ffc00000, data 0x3cd06a2/0x3f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208461824 unmapped: 70205440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:41.225053+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 491 ms_handle_reset con 0x55819bf7b400 session 0x55819c05ba40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654397 data_alloc: 234881024 data_used: 13213696
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc ms_handle_reset ms_handle_reset con 0x55819d6a9800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/878361048
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/878361048,v1:192.168.122.100:6801/878361048]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: get_auth_request con 0x55819c316c00 auth_method 0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: mgrc handle_mgr_configure stats_period=5
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208543744 unmapped: 70123520 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 491 heartbeat osd_stat(store_statfs(0x4f2c51000/0x0/0x4ffc00000, data 0x3cd2121/0x3f6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:42.225454+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.827903748s of 10.206711769s, submitted: 132
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208551936 unmapped: 70115328 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:43.225881+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 492 ms_handle_reset con 0x55819c22e800 session 0x55819d61c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 492 ms_handle_reset con 0x55819c31b000 session 0x55819b3c0960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c76a000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208576512 unmapped: 70090752 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:44.226148+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 493 ms_handle_reset con 0x55819c76a000 session 0x55819a59f4a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f1aac000/0x0/0x4ffc00000, data 0x3cd3d2c/0x3f70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208584704 unmapped: 70082560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 493 ms_handle_reset con 0x55819a18dc00 session 0x55819d61c3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:45.226344+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 493 ms_handle_reset con 0x55819bf7b400 session 0x55819b3c2f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208609280 unmapped: 70057984 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:46.226622+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3657344 data_alloc: 234881024 data_used: 13221888
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 493 ms_handle_reset con 0x55819c22e800 session 0x55819c2b54a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208617472 unmapped: 70049792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:47.226942+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208633856 unmapped: 70033408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:48.227239+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 493 handle_osd_map epochs [493,494], i have 493, src has [1,494]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 494 handle_osd_map epochs [494,494], i have 494, src has [1,494]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 494 ms_handle_reset con 0x55819c31b000 session 0x55819df79680
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208658432 unmapped: 70008832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4ca400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 494 ms_handle_reset con 0x55819e4ca400 session 0x55819d6e4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:49.227426+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 495 ms_handle_reset con 0x55819a18dc00 session 0x55819d61d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 495 heartbeat osd_stat(store_statfs(0x4f1aa7000/0x0/0x4ffc00000, data 0x3cd74b6/0x3f76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208674816 unmapped: 69992448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:50.228234+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 496 ms_handle_reset con 0x55819bf7b400 session 0x55819c05be00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 496 ms_handle_reset con 0x55819c22e800 session 0x55819b3c2780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 496 ms_handle_reset con 0x55819df71c00 session 0x55819d6e4d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c31b000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208715776 unmapped: 69951488 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:51.228424+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3671556 data_alloc: 234881024 data_used: 13230080
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208715776 unmapped: 69951488 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:52.228638+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 496 heartbeat osd_stat(store_statfs(0x4f1aa1000/0x0/0x4ffc00000, data 0x3cdaad4/0x3f7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208715776 unmapped: 69951488 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:53.228859+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 496 handle_osd_map epochs [496,497], i have 496, src has [1,497]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.682062149s of 11.037563324s, submitted: 135
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4ca400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 497 ms_handle_reset con 0x55819e4ca400 session 0x55819c17de00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208764928 unmapped: 69902336 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:54.229137+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c767000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 497 ms_handle_reset con 0x55819c767000 session 0x55819b3c0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 497 ms_handle_reset con 0x55819a18dc00 session 0x55819ace2000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208773120 unmapped: 69894144 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:55.229276+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 498 ms_handle_reset con 0x55819bf7b400 session 0x55819af530e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208805888 unmapped: 69861376 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:56.229443+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3676320 data_alloc: 234881024 data_used: 13234176
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 498 heartbeat osd_stat(store_statfs(0x4f1a9d000/0x0/0x4ffc00000, data 0x3cde24e/0x3f80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 498 ms_handle_reset con 0x55819c22e800 session 0x55819c05ad20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208805888 unmapped: 69861376 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:57.229812+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208805888 unmapped: 69861376 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4ca400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:58.230034+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 499 ms_handle_reset con 0x55819e4ca400 session 0x5581a0d39c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208855040 unmapped: 69812224 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:12:59.230200+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 499 ms_handle_reset con 0x55819c763800 session 0x55819a1a0780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 499 handle_osd_map epochs [499,500], i have 499, src has [1,500]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 500 ms_handle_reset con 0x55819a18dc00 session 0x55819c308d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208871424 unmapped: 69795840 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:00.230479+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 500 handle_osd_map epochs [500,501], i have 500, src has [1,501]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 501 ms_handle_reset con 0x55819bf7b400 session 0x55819c043e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 501 ms_handle_reset con 0x55819c22e800 session 0x55819cd7c780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208928768 unmapped: 69738496 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:01.230652+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686627 data_alloc: 234881024 data_used: 13246464
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 501 heartbeat osd_stat(store_statfs(0x4f1a94000/0x0/0x4ffc00000, data 0x3ce3487/0x3f89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208904192 unmapped: 69763072 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:02.230847+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 501 heartbeat osd_stat(store_statfs(0x4f1a94000/0x0/0x4ffc00000, data 0x3ce3487/0x3f89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208904192 unmapped: 69763072 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:03.231032+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208904192 unmapped: 69763072 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:04.231316+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208904192 unmapped: 69763072 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:05.231547+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208904192 unmapped: 69763072 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:06.231808+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.473033905s of 12.745605469s, submitted: 97
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 501 ms_handle_reset con 0x55819c763800 session 0x55819c034f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689357 data_alloc: 234881024 data_used: 13246464
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208904192 unmapped: 69763072 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 501 heartbeat osd_stat(store_statfs(0x4f1a93000/0x0/0x4ffc00000, data 0x3ce34f9/0x3f8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:07.231930+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819e4ca400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208904192 unmapped: 69763072 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 502 ms_handle_reset con 0x55819e4ca400 session 0x55819dd34b40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:08.232102+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 502 ms_handle_reset con 0x55819a18dc00 session 0x55819c17d860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208928768 unmapped: 69738496 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 502 handle_osd_map epochs [502,503], i have 502, src has [1,503]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 503 ms_handle_reset con 0x55819bf7b400 session 0x55819d61cd20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:09.232266+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208945152 unmapped: 69722112 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 503 ms_handle_reset con 0x55819c22e800 session 0x55819c0430e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:10.232442+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 504 ms_handle_reset con 0x55819c763800 session 0x55819c057860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 208986112 unmapped: 69681152 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:11.232589+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf94c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 504 ms_handle_reset con 0x55819bf94c00 session 0x55819cd7cb40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3711976 data_alloc: 234881024 data_used: 13254656
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 69656576 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:12.232764+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 504 heartbeat osd_stat(store_statfs(0x4f1a88000/0x0/0x4ffc00000, data 0x3ce86f2/0x3f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209010688 unmapped: 69656576 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 504 handle_osd_map epochs [504,505], i have 504, src has [1,505]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:13.233235+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 505 ms_handle_reset con 0x55819a18dc00 session 0x55819a6192c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 505 ms_handle_reset con 0x55819bf7b400 session 0x55819b3d90e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209035264 unmapped: 69632000 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:14.233390+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 505 handle_osd_map epochs [505,506], i have 505, src has [1,506]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 506 ms_handle_reset con 0x55819c22e800 session 0x55819e733a40
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209051648 unmapped: 69615616 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:15.233611+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 506 ms_handle_reset con 0x55819c763800 session 0x55819dacfe00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd41800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 506 ms_handle_reset con 0x55819cd41800 session 0x5581a0d385a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 506 heartbeat osd_stat(store_statfs(0x4f1a82000/0x0/0x4ffc00000, data 0x3cebe6c/0x3f9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209051648 unmapped: 69615616 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:16.233867+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3714923 data_alloc: 234881024 data_used: 13262848
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209051648 unmapped: 69615616 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:17.234007+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209051648 unmapped: 69615616 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:18.234156+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.922269821s of 12.215317726s, submitted: 89
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 507 ms_handle_reset con 0x55819a18dc00 session 0x55819dd345a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209068032 unmapped: 69599232 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:19.234314+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209068032 unmapped: 69599232 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:20.234525+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _renew_subs
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 507 handle_osd_map epochs [509,509], i have 507, src has [1,509]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 507 handle_osd_map epochs [508,509], i have 507, src has [1,509]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 509 ms_handle_reset con 0x55819bf7b400 session 0x55819dd35e00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209182720 unmapped: 69484544 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:21.234661+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3726245 data_alloc: 234881024 data_used: 13271040
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 509 heartbeat osd_stat(store_statfs(0x4f1a7b000/0x0/0x4ffc00000, data 0x3cf1033/0x3fa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 509 ms_handle_reset con 0x55819c22e800 session 0x55819c1ced20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 69476352 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:22.234800+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 209190912 unmapped: 69476352 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:23.235045+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 509 handle_osd_map epochs [509,510], i have 509, src has [1,510]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 510 ms_handle_reset con 0x55819c763800 session 0x55819a1a0d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210255872 unmapped: 68411392 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:24.235292+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819cd41800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 510 ms_handle_reset con 0x55819cd41800 session 0x55819aef4f00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210255872 unmapped: 68411392 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:25.235479+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 510 handle_osd_map epochs [510,511], i have 510, src has [1,511]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 511 ms_handle_reset con 0x55819a18dc00 session 0x55819d70dc20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 511 ms_handle_reset con 0x55819bf7b400 session 0x55819b211860
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 511 ms_handle_reset con 0x55819c22e800 session 0x55819acdc000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210272256 unmapped: 68395008 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:26.235665+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 511 heartbeat osd_stat(store_statfs(0x4f1a76000/0x0/0x4ffc00000, data 0x3cf47ad/0x3fa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733634 data_alloc: 234881024 data_used: 13291520
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210288640 unmapped: 68378624 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:27.235812+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 511 ms_handle_reset con 0x55819c763800 session 0x55819af52d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210288640 unmapped: 68378624 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:28.236017+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210288640 unmapped: 68378624 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:29.236128+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 511 heartbeat osd_stat(store_statfs(0x4f1a76000/0x0/0x4ffc00000, data 0x3cf47ad/0x3fa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 511 handle_osd_map epochs [511,512], i have 511, src has [1,512]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.403182983s of 10.895571709s, submitted: 71
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:30.236391+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210313216 unmapped: 68354048 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 512 ms_handle_reset con 0x55819c22e000 session 0x55819c308000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 512 handle_osd_map epochs [512,513], i have 512, src has [1,513]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:31.236599+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210337792 unmapped: 68329472 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739742 data_alloc: 234881024 data_used: 13295616
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 513 heartbeat osd_stat(store_statfs(0x4f1a70000/0x0/0x4ffc00000, data 0x3cf7dd9/0x3fad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 513 handle_osd_map epochs [514,514], i have 513, src has [1,514]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 513 handle_osd_map epochs [514,514], i have 514, src has [1,514]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:32.236744+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210354176 unmapped: 68313088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 514 ms_handle_reset con 0x55819a18dc00 session 0x55819c1ce3c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:33.236893+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210354176 unmapped: 68313088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:34.237056+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210354176 unmapped: 68313088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:35.237301+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210354176 unmapped: 68313088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 514 heartbeat osd_stat(store_statfs(0x4f1a6c000/0x0/0x4ffc00000, data 0x3cf99aa/0x3fb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:36.237471+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210354176 unmapped: 68313088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3743548 data_alloc: 234881024 data_used: 13299712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 514 heartbeat osd_stat(store_statfs(0x4f1a6c000/0x0/0x4ffc00000, data 0x3cf99aa/0x3fb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:37.237636+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210362368 unmapped: 68304896 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 514 heartbeat osd_stat(store_statfs(0x4f1a6c000/0x0/0x4ffc00000, data 0x3cf99aa/0x3fb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:38.237789+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 68296704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:39.238058+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 68296704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:40.238249+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 68296704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 514 handle_osd_map epochs [514,515], i have 514, src has [1,515]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.848675728s of 10.933086395s, submitted: 37
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:41.238436+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210378752 unmapped: 68288512 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745850 data_alloc: 234881024 data_used: 13299712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:42.238634+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210370560 unmapped: 68296704 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6a000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:43.238804+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210378752 unmapped: 68288512 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6a000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:44.239014+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210378752 unmapped: 68288512 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:45.239216+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210378752 unmapped: 68288512 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:46.239383+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210378752 unmapped: 68288512 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745850 data_alloc: 234881024 data_used: 13299712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:47.239480+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210386944 unmapped: 68280320 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:48.239768+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210386944 unmapped: 68280320 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:49.240067+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210386944 unmapped: 68280320 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6a000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:50.240340+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210386944 unmapped: 68280320 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:51.240573+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210403328 unmapped: 68263936 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745850 data_alloc: 234881024 data_used: 13299712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:52.241096+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210403328 unmapped: 68263936 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:53.241323+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210403328 unmapped: 68263936 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:54.241516+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210403328 unmapped: 68263936 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6a000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:55.242025+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210403328 unmapped: 68263936 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:56.242176+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210411520 unmapped: 68255744 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745850 data_alloc: 234881024 data_used: 13299712
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:57.242314+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210411520 unmapped: 68255744 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6a000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:58.242450+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210411520 unmapped: 68255744 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.608377457s of 18.757795334s, submitted: 14
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:13:59.242562+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210444288 unmapped: 68222976 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:00.242758+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210452480 unmapped: 68214784 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:01.242929+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210452480 unmapped: 68214784 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747050 data_alloc: 234881024 data_used: 13352960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:02.243050+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210493440 unmapped: 68173824 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 ms_handle_reset con 0x55819c827800 session 0x55819a618d20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819bf7b400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:03.243210+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:04.243350+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:05.243532+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:06.243766+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747050 data_alloc: 234881024 data_used: 13352960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:07.243922+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:08.244026+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:09.244978+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:10.245150+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:11.245310+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747050 data_alloc: 234881024 data_used: 13352960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:12.245460+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:13.245612+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:14.245731+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210501632 unmapped: 68165632 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:15.245880+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:16.246009+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747050 data_alloc: 234881024 data_used: 13352960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:17.246175+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:18.246340+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:19.246468+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:20.246659+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:21.246839+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747050 data_alloc: 234881024 data_used: 13352960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:22.247042+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210509824 unmapped: 68157440 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:23.247242+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210518016 unmapped: 68149248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:24.247475+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210518016 unmapped: 68149248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:25.247673+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210518016 unmapped: 68149248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 heartbeat osd_stat(store_statfs(0x4f1a6b000/0x0/0x4ffc00000, data 0x3cfb40d/0x3fb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:26.247876+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210518016 unmapped: 68149248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3747050 data_alloc: 234881024 data_used: 13352960
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:27.248027+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210518016 unmapped: 68149248 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 515 handle_osd_map epochs [515,516], i have 515, src has [1,516]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.566421509s of 28.982776642s, submitted: 132
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:28.248185+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210526208 unmapped: 68141056 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:29.248372+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210526208 unmapped: 68141056 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 516 ms_handle_reset con 0x55819c22e000 session 0x55819cd581e0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:30.248561+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210550784 unmapped: 68116480 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 516 ms_handle_reset con 0x55819c22e800 session 0x55819c05a780
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:31.248696+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210558976 unmapped: 68108288 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 516 heartbeat osd_stat(store_statfs(0x4f1a66000/0x0/0x4ffc00000, data 0x3cfcfec/0x3fb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3754480 data_alloc: 234881024 data_used: 13361152
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:32.248814+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210558976 unmapped: 68108288 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c763800
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c766c00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 516 ms_handle_reset con 0x55819c766c00 session 0x55819d82c000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c317400
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:33.248952+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210558976 unmapped: 68108288 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 516 handle_osd_map epochs [516,517], i have 516, src has [1,517]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 517 heartbeat osd_stat(store_statfs(0x4f1a67000/0x0/0x4ffc00000, data 0x3cfcfec/0x3fb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 517 ms_handle_reset con 0x55819c317400 session 0x55819d902000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:34.249191+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210567168 unmapped: 68100096 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 517 ms_handle_reset con 0x55819c763800 session 0x55819e733c20
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:35.249308+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 68091904 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819a18dc00
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 517 ms_handle_reset con 0x55819a18dc00 session 0x55819c1634a0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: handle_auth_request added challenge on 0x55819c22e000
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 517 ms_handle_reset con 0x55819c22e000 session 0x55819d82d2c0
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:36.249407+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210599936 unmapped: 68067328 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3756492 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:37.249540+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 68050944 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:38.259243+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 68050944 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 517 heartbeat osd_stat(store_statfs(0x4f1a64000/0x0/0x4ffc00000, data 0x3cfeb5b/0x3fb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:39.259462+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210624512 unmapped: 68042752 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:40.259671+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 517 handle_osd_map epochs [517,518], i have 517, src has [1,518]
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.391775131s of 12.284954071s, submitted: 68
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210624512 unmapped: 68042752 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a64000/0x0/0x4ffc00000, data 0x3cfeb5b/0x3fb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:41.259853+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210632704 unmapped: 68034560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:42.260031+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210632704 unmapped: 68034560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:43.260282+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210632704 unmapped: 68034560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:44.260436+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210632704 unmapped: 68034560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:45.260624+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210632704 unmapped: 68034560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:46.260880+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210632704 unmapped: 68034560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:47.261015+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:48.261191+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:49.261419+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:50.261655+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:51.261839+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:52.262045+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:53.262184+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:54.262352+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210640896 unmapped: 68026368 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:55.262489+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:56.262710+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:57.262878+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:58.263076+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:14:59.263245+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:00.263478+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:01.263633+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:02.263817+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:03.264086+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:04.264278+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:05.264452+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:06.264716+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:07.264879+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:08.265086+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:09.265237+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:10.265601+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210649088 unmapped: 68018176 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:11.265782+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:12.266023+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:13.266197+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:14.266474+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:15.266614+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:16.266785+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:17.267024+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:18.267204+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:19.267401+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:20.267576+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:21.267790+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:22.268002+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:23.268159+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:24.268364+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:25.268531+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:26.268757+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210665472 unmapped: 68001792 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:27.269039+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:28.269263+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:29.269426+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:30.269638+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:31.269816+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:32.270079+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:33.270249+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:34.270429+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210690048 unmapped: 67977216 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:35.270674+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 67993600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:36.270865+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 67993600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:37.271044+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 67993600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:38.271199+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 67993600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:39.271350+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 67993600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:40.271578+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 67993600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:41.271707+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210673664 unmapped: 67993600 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:42.271894+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:43.272076+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:44.272269+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:45.272461+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:46.272640+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:47.272875+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:48.273074+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210681856 unmapped: 67985408 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:49.273210+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210690048 unmapped: 67977216 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:50.273441+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:51.273614+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:52.273818+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:53.274065+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:54.274230+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:55.274413+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:56.274730+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:57.275053+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210698240 unmapped: 67969024 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:58.275190+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 67960832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:15:59.275316+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 67960832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:00.275565+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 67960832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:01.275787+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 67960832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:02.276016+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 67960832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:03.276256+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210706432 unmapped: 67960832 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:04.276478+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 67952640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:05.276691+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210714624 unmapped: 67952640 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:06.276865+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:07.277371+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:08.277497+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:09.277669+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:10.277856+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:11.278064+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:12.278262+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:13.278402+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210722816 unmapped: 67944448 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:14.278573+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210731008 unmapped: 67936256 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:15.278708+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210739200 unmapped: 67928064 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:16.278876+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210747392 unmapped: 67919872 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:17.279015+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210747392 unmapped: 67919872 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:18.279122+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210747392 unmapped: 67919872 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:19.279246+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210747392 unmapped: 67919872 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:20.279425+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210935808 unmapped: 67731456 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:21.279550+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'config diff' '{prefix=config diff}'
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'config show' '{prefix=config show}'
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210354176 unmapped: 68313088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:22.279738+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 08:16:55 compute-0 ceph-osd[88831]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 08:16:55 compute-0 ceph-osd[88831]: bluestore.MempoolThread(0x558198c05b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759466 data_alloc: 234881024 data_used: 13373440
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210354176 unmapped: 68313088 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:23.280648+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210583552 unmapped: 68083712 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:24.280790+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: prioritycache tune_memory target: 4294967296 mapped: 210632704 unmapped: 68034560 heap: 278667264 old mem: 2845415832 new mem: 2845415832
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: tick
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_tickets
Nov 29 08:16:55 compute-0 ceph-osd[88831]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T08:16:25.280941+0000)
Nov 29 08:16:55 compute-0 ceph-osd[88831]: osd.0 518 heartbeat osd_stat(store_statfs(0x4f1a61000/0x0/0x4ffc00000, data 0x3d005be/0x3fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0xa1df9c6), peers [1,2] op hist [])
Nov 29 08:16:55 compute-0 ceph-osd[88831]: do_command 'log dump' '{prefix=log dump}'
Nov 29 08:16:55 compute-0 podman[314269]: 2025-11-29 08:16:55.741628055 +0000 UTC m=+0.105354898 container health_status 53b3e3edbc13651958250c6312d45e987c0dee5fbb3effb8392660dfc1eaa82f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 08:16:55 compute-0 ceph-mon[75050]: from='client.19385 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3974811685' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75050]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 08:16:55 compute-0 ceph-mon[75050]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 08:16:55 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/3541332177' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 08:16:55 compute-0 podman[314270]: 2025-11-29 08:16:55.754758732 +0000 UTC m=+0.114640650 container health_status 8d4ccc3041d68cb3d47809db0bb1a7e168a735eaa6605edbf2567447905aa50e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 08:16:55 compute-0 podman[314267]: 2025-11-29 08:16:55.760057957 +0000 UTC m=+0.127090360 container health_status 23d05e03be4cb9084b6afbc3edf0d56047ca0f3f1aaf62172739e6d6ce3f7fa8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 08:16:55 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 08:16:55 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19397 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 08:16:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1174289803' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mon[75050]: pgmap v2453: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:56 compute-0 ceph-mon[75050]: from='client.19397 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1174289803' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 08:16:56 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 08:16:56 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081750914' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 08:16:57 compute-0 sudo[314447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:57 compute-0 sudo[314447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 sudo[314447]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:57 compute-0 sudo[314480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:16:57 compute-0 sudo[314480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 sudo[314480]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:57 compute-0 sudo[314528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:57 compute-0 sudo[314528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 sudo[314528]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:57 compute-0 sudo[314563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 08:16:57 compute-0 sudo[314563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 08:16:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150525994' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 08:16:57 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:57 compute-0 sudo[314563]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 08:16:57 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:16:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 08:16:57 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:16:57 compute-0 sudo[314643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:57 compute-0 sudo[314643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 sudo[314643]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:57 compute-0 sudo[314670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:16:57 compute-0 sudo[314670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 sudo[314670]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:57 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 08:16:57 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270303156' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 08:16:57 compute-0 sudo[314698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:57 compute-0 sudo[314698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 sudo[314698]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:57 compute-0 sudo[314726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 08:16:57 compute-0 sudo[314726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4081750914' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 08:16:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4150525994' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 08:16:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:16:57 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:16:57 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1270303156' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19407 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:58 compute-0 sudo[314726]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:58 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:16:58 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 7ca71b60-8ee3-4393-83e9-38f7fe8b4051 does not exist
Nov 29 08:16:58 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev 25739871-0de3-4e64-a57f-094bdc1cf211 does not exist
Nov 29 08:16:58 compute-0 ceph-mgr[75345]: [progress WARNING root] complete: ev fa5497d9-e0ce-48f2-8312-3c69b957139a does not exist
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:58 compute-0 sudo[314868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:58 compute-0 sudo[314868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:58 compute-0 sudo[314868]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:58 compute-0 systemd[1]: Started Hostname Service.
Nov 29 08:16:58 compute-0 sudo[314893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:16:58 compute-0 sudo[314893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:58 compute-0 sudo[314893]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:58 compute-0 sudo[314925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:16:58 compute-0 sudo[314925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:58 compute-0 sudo[314925]: pam_unix(sudo:session): session closed for user root
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190406513' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 08:16:58 compute-0 nova_compute[256729]: 2025-11-29 08:16:58.541 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:16:58 compute-0 sudo[314950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 08:16:58 compute-0 sudo[314950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:16:58 compute-0 ceph-mon[75050]: pgmap v2454: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='client.19407 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' 
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='mgr.14130 192.168.122.100:0/2249105087' entity='mgr.compute-0.kzdpag' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 08:16:58 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/1190406513' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 08:16:58 compute-0 podman[315042]: 2025-11-29 08:16:58.901485217 +0000 UTC m=+0.053613860 container create 727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:16:58 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 08:16:58 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519689650' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 08:16:58 compute-0 systemd[1]: Started libpod-conmon-727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3.scope.
Nov 29 08:16:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:58 compute-0 podman[315042]: 2025-11-29 08:16:58.974716889 +0000 UTC m=+0.126845552 container init 727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:16:58 compute-0 podman[315042]: 2025-11-29 08:16:58.877269088 +0000 UTC m=+0.029397751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:58 compute-0 podman[315042]: 2025-11-29 08:16:58.98798195 +0000 UTC m=+0.140110583 container start 727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 08:16:58 compute-0 podman[315042]: 2025-11-29 08:16:58.990729065 +0000 UTC m=+0.142857708 container attach 727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:16:58 compute-0 objective_goodall[315062]: 167 167
Nov 29 08:16:58 compute-0 systemd[1]: libpod-727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3.scope: Deactivated successfully.
Nov 29 08:16:58 compute-0 podman[315042]: 2025-11-29 08:16:58.995127434 +0000 UTC m=+0.147256077 container died 727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-60cba69896792c19b8e4625b57545caac5fc8f0b1d55f3cd5b30f38199c197df-merged.mount: Deactivated successfully.
Nov 29 08:16:59 compute-0 podman[315042]: 2025-11-29 08:16:59.040013965 +0000 UTC m=+0.192142608 container remove 727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_goodall, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 08:16:59 compute-0 systemd[1]: libpod-conmon-727911bbfe8af7c85ec486f9d8db0f639f0b45f414c5e4ed44b637f068844dd3.scope: Deactivated successfully.
Nov 29 08:16:59 compute-0 podman[315124]: 2025-11-29 08:16:59.248512299 +0000 UTC m=+0.052145840 container create dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 08:16:59 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19413 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:16:59 compute-0 systemd[1]: Started libpod-conmon-dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96.scope.
Nov 29 08:16:59 compute-0 podman[315124]: 2025-11-29 08:16:59.223920119 +0000 UTC m=+0.027553650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:16:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13948dc0a927280fbfcedff602586155dff390582687ae55d361d135f2a4db1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13948dc0a927280fbfcedff602586155dff390582687ae55d361d135f2a4db1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13948dc0a927280fbfcedff602586155dff390582687ae55d361d135f2a4db1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13948dc0a927280fbfcedff602586155dff390582687ae55d361d135f2a4db1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13948dc0a927280fbfcedff602586155dff390582687ae55d361d135f2a4db1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 08:16:59 compute-0 podman[315124]: 2025-11-29 08:16:59.347341247 +0000 UTC m=+0.150974828 container init dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wilbur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 08:16:59 compute-0 podman[315124]: 2025-11-29 08:16:59.356384763 +0000 UTC m=+0.160018304 container start dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 08:16:59 compute-0 podman[315124]: 2025-11-29 08:16:59.359679573 +0000 UTC m=+0.163313114 container attach dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wilbur, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 08:16:59 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:16:59 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 08:16:59 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128540820' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 08:16:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2519689650' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 08:16:59 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/2128540820' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 08:16:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:16:59.795 163655 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 08:16:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:16:59.797 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 08:16:59 compute-0 ovn_metadata_agent[163632]: 2025-11-29 08:16:59.797 163655 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 08:16:59 compute-0 nova_compute[256729]: 2025-11-29 08:16:59.822 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 08:17:00 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19417 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:00 compute-0 upbeat_wilbur[315155]: --> passed data devices: 0 physical, 3 LVM
Nov 29 08:17:00 compute-0 upbeat_wilbur[315155]: --> relative data size: 1.0
Nov 29 08:17:00 compute-0 upbeat_wilbur[315155]: --> All data devices are unavailable
Nov 29 08:17:00 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19419 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:00 compute-0 systemd[1]: libpod-dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96.scope: Deactivated successfully.
Nov 29 08:17:00 compute-0 podman[315124]: 2025-11-29 08:17:00.400532393 +0000 UTC m=+1.204165924 container died dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 08:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d13948dc0a927280fbfcedff602586155dff390582687ae55d361d135f2a4db1-merged.mount: Deactivated successfully.
Nov 29 08:17:00 compute-0 podman[315124]: 2025-11-29 08:17:00.455867779 +0000 UTC m=+1.259501310 container remove dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_wilbur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 08:17:00 compute-0 systemd[1]: libpod-conmon-dc98f5374cbb9d9d35ace399bfb11d2a6d96e4c66a2f6ff64b31d11e94a8ea96.scope: Deactivated successfully.
Nov 29 08:17:00 compute-0 sudo[314950]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:00 compute-0 sudo[315424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:00 compute-0 sudo[315424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 08:17:00 compute-0 sudo[315424]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.560290) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220560390, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 548, "num_deletes": 250, "total_data_size": 425670, "memory_usage": 437400, "flush_reason": "Manual Compaction"}
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220565164, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 422360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44280, "largest_seqno": 44827, "table_properties": {"data_size": 419199, "index_size": 1005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7430, "raw_average_key_size": 17, "raw_value_size": 412626, "raw_average_value_size": 987, "num_data_blocks": 43, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404200, "oldest_key_time": 1764404200, "file_creation_time": 1764404220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 4877 microseconds, and 2337 cpu microseconds.
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.565193) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 422360 bytes OK
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.565211) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.566841) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.566861) EVENT_LOG_v1 {"time_micros": 1764404220566853, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.566881) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 422352, prev total WAL file size 422352, number of live WAL files 2.
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.567418) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(412KB)], [92(11MB)]
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220567462, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12997681, "oldest_snapshot_seqno": -1}
Nov 29 08:17:00 compute-0 sudo[315474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:17:00 compute-0 sudo[315474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:00 compute-0 sudo[315474]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7739 keys, 12278353 bytes, temperature: kUnknown
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220669108, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 12278353, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12219758, "index_size": 38131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 199552, "raw_average_key_size": 25, "raw_value_size": 12073861, "raw_average_value_size": 1560, "num_data_blocks": 1490, "num_entries": 7739, "num_filter_entries": 7739, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400274, "oldest_key_time": 0, "file_creation_time": 1764404220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "992a36cf-2c53-4c0e-8733-ad21b6ee24da", "db_session_id": "TD18LOQH4ZZ65OBF52SL", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 08:17:00 compute-0 sudo[315504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:00 compute-0 sudo[315504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.669614) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12278353 bytes
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.677362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.6 rd, 120.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 12.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(59.8) write-amplify(29.1) OK, records in: 8251, records dropped: 512 output_compression: NoCompression
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.677388) EVENT_LOG_v1 {"time_micros": 1764404220677375, "job": 54, "event": "compaction_finished", "compaction_time_micros": 101853, "compaction_time_cpu_micros": 39823, "output_level": 6, "num_output_files": 1, "total_output_size": 12278353, "num_input_records": 8251, "num_output_records": 7739, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220677601, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 29 08:17:00 compute-0 sudo[315504]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404220679613, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.567346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.679640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.679643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.679645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.679646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 ceph-mon[75050]: rocksdb: (Original Log Time 2025/11/29-08:17:00.679647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 08:17:00 compute-0 sudo[315530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- lvm list --format json
Nov 29 08:17:00 compute-0 sudo[315530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:00 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 08:17:00 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241646007' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 08:17:00 compute-0 ceph-mon[75050]: from='client.19413 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:00 compute-0 ceph-mon[75050]: pgmap v2455: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:17:00 compute-0 ceph-mon[75050]: from='client.19417 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:00 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/4241646007' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 08:17:01 compute-0 podman[315638]: 2025-11-29 08:17:01.105694689 +0000 UTC m=+0.062238294 container create 73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 08:17:01 compute-0 systemd[1]: Started libpod-conmon-73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35.scope.
Nov 29 08:17:01 compute-0 podman[315638]: 2025-11-29 08:17:01.07007546 +0000 UTC m=+0.026619105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:01 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 08:17:01 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810184823' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 08:17:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:01 compute-0 podman[315638]: 2025-11-29 08:17:01.205408441 +0000 UTC m=+0.161952056 container init 73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 08:17:01 compute-0 podman[315638]: 2025-11-29 08:17:01.212365771 +0000 UTC m=+0.168909376 container start 73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 08:17:01 compute-0 podman[315638]: 2025-11-29 08:17:01.217049948 +0000 UTC m=+0.173593843 container attach 73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 29 08:17:01 compute-0 clever_kalam[315657]: 167 167
Nov 29 08:17:01 compute-0 systemd[1]: libpod-73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35.scope: Deactivated successfully.
Nov 29 08:17:01 compute-0 podman[315638]: 2025-11-29 08:17:01.220004379 +0000 UTC m=+0.176547984 container died 73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 08:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-17ac027201cc453be9e353637087851703bdaa329b8afd88227af48d20e7c5d7-merged.mount: Deactivated successfully.
Nov 29 08:17:01 compute-0 podman[315638]: 2025-11-29 08:17:01.302938445 +0000 UTC m=+0.259482030 container remove 73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:01 compute-0 systemd[1]: libpod-conmon-73e3900db1787c7342bf0b589c9cd78c41ff50381f9d6042ed8130f05560ca35.scope: Deactivated successfully.
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19425 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:01 compute-0 podman[315710]: 2025-11-29 08:17:01.532948063 +0000 UTC m=+0.047667168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:01 compute-0 podman[315710]: 2025-11-29 08:17:01.720014103 +0000 UTC m=+0.234733208 container create e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:17:01 compute-0 systemd[1]: Started libpod-conmon-e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348.scope.
Nov 29 08:17:01 compute-0 auditd[704]: Audit daemon rotating log files
Nov 29 08:17:01 compute-0 ceph-mon[75050]: from='client.19419 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:01 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/810184823' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 08:17:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7976f73dbf851544301e1f6da09b52d859d0ca9bc105d602c60a6a6463822b5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7976f73dbf851544301e1f6da09b52d859d0ca9bc105d602c60a6a6463822b5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7976f73dbf851544301e1f6da09b52d859d0ca9bc105d602c60a6a6463822b5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7976f73dbf851544301e1f6da09b52d859d0ca9bc105d602c60a6a6463822b5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 08:17:01 compute-0 podman[315710]: 2025-11-29 08:17:01.864263258 +0000 UTC m=+0.378982343 container init e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 08:17:01 compute-0 podman[315710]: 2025-11-29 08:17:01.875180134 +0000 UTC m=+0.389899199 container start e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 08:17:01 compute-0 podman[315710]: 2025-11-29 08:17:01.878420453 +0000 UTC m=+0.393139598 container attach e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19427 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 08:17:01 compute-0 ceph-mgr[75345]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 08:17:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 08:17:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/188556895' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]: {
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:     "0": [
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:         {
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "devices": [
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "/dev/loop3"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             ],
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_name": "ceph_lv0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_size": "21470642176",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8cd0a453-4c8d-429b-b547-2404357db43c,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "name": "ceph_lv0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "tags": {
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.block_uuid": "Qku2cV-oQWx-aCxs-1V8B-J1wV-5EWg-ZJy3Ew",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cluster_name": "ceph",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.crush_device_class": "",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.encrypted": "0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osd_fsid": "8cd0a453-4c8d-429b-b547-2404357db43c",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osd_id": "0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.type": "block",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.vdo": "0"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             },
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "type": "block",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "vg_name": "ceph_vg0"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:         }
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:     ],
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:     "1": [
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:         {
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "devices": [
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "/dev/loop4"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             ],
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_name": "ceph_lv1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_size": "21470642176",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3596f226-aedb-4f7c-95c0-eea7b670ed3d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "name": "ceph_lv1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "tags": {
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.block_uuid": "fUsuEr-a2xB-K4DK-tC82-L0OM-bFgN-JPh5ox",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cluster_name": "ceph",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.crush_device_class": "",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.encrypted": "0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osd_fsid": "3596f226-aedb-4f7c-95c0-eea7b670ed3d",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osd_id": "1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.type": "block",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.vdo": "0"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             },
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "type": "block",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "vg_name": "ceph_vg1"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:         }
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:     ],
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:     "2": [
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:         {
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "devices": [
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "/dev/loop5"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             ],
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_name": "ceph_lv2",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_size": "21470642176",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=14ff1f30-5059-58f1-9a23-69871bb275a1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1ebe47c8-fe69-46c9-9931-3ba50f4dae48,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "lv_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "name": "ceph_lv2",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "tags": {
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.block_uuid": "Mm22d7-ePBS-XWUt-PCqd-9E3e-a6Bf-dSpmt6",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cluster_fsid": "14ff1f30-5059-58f1-9a23-69871bb275a1",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.cluster_name": "ceph",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.crush_device_class": "",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.encrypted": "0",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osd_fsid": "1ebe47c8-fe69-46c9-9931-3ba50f4dae48",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osd_id": "2",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.type": "block",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:                 "ceph.vdo": "0"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             },
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "type": "block",
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:             "vg_name": "ceph_vg2"
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:         }
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]:     ]
Nov 29 08:17:02 compute-0 quirky_hamilton[315748]: }
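
The JSON printed by quirky_hamilton is `ceph-volume lvm list --format json` output: the top-level keys are OSD ids, each holding a list of logical-volume records whose "tags" map mirrors the flat LVM lv_tags string. A minimal sketch that reduces the report to one line per OSD (the invocation is an assumption; the field names are taken verbatim from the records above):

    import json
    import subprocess

    def lvm_list():
        # On a cephadm host this is normally wrapped in a container,
        # as the surrounding podman events show.
        out = subprocess.check_output(
            ["ceph-volume", "lvm", "list", "--format", "json"])
        return json.loads(out)

    def summarize(report):
        for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id}: lv={lv['lv_path']} "
                      f"devices={','.join(lv['devices'])} "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                      f"encrypted={tags.get('ceph.encrypted', '?')}")

    if __name__ == "__main__":
        summarize(lvm_list())

Run against the report above, this would print osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5), all unencrypted and tagged with the same cluster fsid.
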
Nov 29 08:17:02 compute-0 systemd[1]: libpod-e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348.scope: Deactivated successfully.
Nov 29 08:17:02 compute-0 podman[315710]: 2025-11-29 08:17:02.635941333 +0000 UTC m=+1.150660398 container died e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 08:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7976f73dbf851544301e1f6da09b52d859d0ca9bc105d602c60a6a6463822b5a-merged.mount: Deactivated successfully.
Nov 29 08:17:02 compute-0 ceph-mon[75050]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 08:17:02 compute-0 ceph-mon[75050]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884765014' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 08:17:02 compute-0 podman[315710]: 2025-11-29 08:17:02.699456311 +0000 UTC m=+1.214175376 container remove e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 08:17:02 compute-0 systemd[1]: libpod-conmon-e8fc4fd1cf3a8b2df0077b5f92a1c584d4dad0c8ebfbe4cb743fd960e4560348.scope: Deactivated successfully.
Nov 29 08:17:02 compute-0 sudo[315530]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:02 compute-0 sudo[315869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:02 compute-0 sudo[315869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:02 compute-0 ceph-mon[75050]: pgmap v2456: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:17:02 compute-0 ceph-mon[75050]: from='client.19425 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:02 compute-0 ceph-mon[75050]: from='client.19427 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/188556895' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 08:17:02 compute-0 ceph-mon[75050]: from='client.? 192.168.122.100:0/884765014' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 08:17:02 compute-0 sudo[315869]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:02 compute-0 sudo[315905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 08:17:02 compute-0 sudo[315905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:02 compute-0 sudo[315905]: pam_unix(sudo:session): session closed for user root
Nov 29 08:17:02 compute-0 sudo[315953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 08:17:02 compute-0 sudo[315953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 08:17:02 compute-0 sudo[315953]: pam_unix(sudo:session): session closed for user root
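
The trio of short sudo sessions above (/bin/true, which python3, /bin/true again) is the orchestrator probing the host: it confirms passwordless sudo works and locates a Python interpreter before pushing and executing the cephadm script seen in the next command. A sketch of the same probes over ssh (the user and host names are taken from the log; the ssh transport is an assumption):

    import subprocess

    HOST = "ceph-admin@compute-0"  # user and host as they appear in the log

    # Each probe must exit 0 for the host to be considered reachable
    # and sudo-capable; check_call raises on any non-zero exit.
    for probe in (["sudo", "true"], ["sudo", "which", "python3"]):
        subprocess.check_call(["ssh", HOST] + probe)
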
Nov 29 08:17:03 compute-0 sudo[315985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/14ff1f30-5059-58f1-9a23-69871bb275a1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 14ff1f30-5059-58f1-9a23-69871bb275a1 -- raw list --format json
Nov 29 08:17:03 compute-0 sudo[315985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
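
The sudo COMMAND above shows how cephadm drives ceph-volume: a checksum-named copy of the cephadm script under /var/lib/ceph/<fsid>/ is run with a pinned image digest, and everything after `--` is forwarded to ceph-volume inside a throwaway container. Assuming a packaged `cephadm` on PATH (the fsid and digest below are copied verbatim from the log), the equivalent hand-run query would be:

    import json
    import subprocess

    FSID = "14ff1f30-5059-58f1-9a23-69871bb275a1"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Mirrors the logged invocation; `--` separates cephadm's own
    # options from the arguments handed to ceph-volume in the container.
    cmd = ["sudo", "cephadm", "--image", IMAGE,
           "ceph-volume", "--fsid", FSID,
           "--", "raw", "list", "--format", "json"]
    report = json.loads(subprocess.check_output(cmd))
    for key, rec in report.items():
        # Field names here are an assumption about `raw list` output.
        print(key, rec.get("device"), rec.get("osd_id"))
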
Nov 29 08:17:03 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19433 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:03 compute-0 podman[316093]: 2025-11-29 08:17:03.394038689 +0000 UTC m=+0.048467930 container create 461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 08:17:03 compute-0 ceph-mgr[75345]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 271 MiB data, 672 MiB used, 59 GiB / 60 GiB avail
Nov 29 08:17:03 compute-0 systemd[1]: Started libpod-conmon-461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf.scope.
Nov 29 08:17:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 08:17:03 compute-0 ceph-mgr[75345]: log_channel(audit) log [DBG] : from='client.19435 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 08:17:03 compute-0 podman[316093]: 2025-11-29 08:17:03.462155672 +0000 UTC m=+0.116584933 container init 461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 08:17:03 compute-0 podman[316093]: 2025-11-29 08:17:03.373724226 +0000 UTC m=+0.028153497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 08:17:03 compute-0 podman[316093]: 2025-11-29 08:17:03.47166345 +0000 UTC m=+0.126092711 container start 461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 08:17:03 compute-0 stoic_villani[316110]: 167 167
Nov 29 08:17:03 compute-0 podman[316093]: 2025-11-29 08:17:03.47680424 +0000 UTC m=+0.131233481 container attach 461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 08:17:03 compute-0 systemd[1]: libpod-461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf.scope: Deactivated successfully.
Nov 29 08:17:03 compute-0 podman[316093]: 2025-11-29 08:17:03.47862139 +0000 UTC m=+0.133050631 container died 461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 08:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6117d844ad7ad7a2fc33616a7a58ef68dd010ec5e13672e6ef63b2ba4ed2080-merged.mount: Deactivated successfully.
Nov 29 08:17:03 compute-0 podman[316093]: 2025-11-29 08:17:03.517889349 +0000 UTC m=+0.172318590 container remove 461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 08:17:03 compute-0 systemd[1]: libpod-conmon-461e59920c5310cbd4998dedb59ec0837a1a689a1fea0d784d33ff5a9f7b8ecf.scope: Deactivated successfully.
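
The "167 167" printed by stoic_villani, together with the create, init, start, attach, died, remove sequence completing in roughly 150 ms, is consistent with the one-shot probe cephadm uses to learn the ceph uid/gid baked into an image: it stats /var/lib/ceph inside the container and discards the container as soon as the process exits. A sketch of that probe (the `--entrypoint stat` form is an assumption; the image digest is from the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm yields exactly the lifecycle seen above: the container is
    # removed the moment the short-lived stat process exits.
    uid, gid = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        text=True).split()
    print(f"image runs ceph as uid={uid} gid={gid}")
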
Nov 29 08:17:03 compute-0 nova_compute[256729]: 2025-11-29 08:17:03.543 256736 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
